Oct 2 19:43:04.758403 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Oct 2 19:43:04.758427 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023 Oct 2 19:43:04.758435 kernel: efi: EFI v2.70 by EDK II Oct 2 19:43:04.758441 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18 Oct 2 19:43:04.758446 kernel: random: crng init done Oct 2 19:43:04.758451 kernel: ACPI: Early table checksum verification disabled Oct 2 19:43:04.758457 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS ) Oct 2 19:43:04.758464 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013) Oct 2 19:43:04.758470 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:04.758475 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:04.758481 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:04.758486 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:04.758492 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:04.758497 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:04.758505 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:04.758511 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:04.758517 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Oct 2 19:43:04.758534 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600 Oct 2 19:43:04.758540 kernel: NUMA: Failed to initialise from firmware Oct 2 19:43:04.758546 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:43:04.758552 kernel: NUMA: NODE_DATA [mem 0xdcb0c900-0xdcb11fff] Oct 2 19:43:04.758557 kernel: Zone ranges: Oct 2 19:43:04.758563 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:43:04.758570 kernel: DMA32 empty Oct 2 19:43:04.758576 kernel: Normal empty Oct 2 19:43:04.758581 kernel: Movable zone start for each node Oct 2 19:43:04.758587 kernel: Early memory node ranges Oct 2 19:43:04.758593 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff] Oct 2 19:43:04.758598 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff] Oct 2 19:43:04.758604 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff] Oct 2 19:43:04.758610 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff] Oct 2 19:43:04.758615 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff] Oct 2 19:43:04.758621 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff] Oct 2 19:43:04.758627 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff] Oct 2 19:43:04.758633 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff] Oct 2 19:43:04.758639 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges Oct 2 19:43:04.758645 kernel: psci: probing for conduit method from ACPI. Oct 2 19:43:04.758651 kernel: psci: PSCIv1.1 detected in firmware. 
Oct 2 19:43:04.758657 kernel: psci: Using standard PSCI v0.2 function IDs Oct 2 19:43:04.758662 kernel: psci: Trusted OS migration not required Oct 2 19:43:04.758671 kernel: psci: SMC Calling Convention v1.1 Oct 2 19:43:04.758677 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Oct 2 19:43:04.758684 kernel: ACPI: SRAT not present Oct 2 19:43:04.758691 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784 Oct 2 19:43:04.758697 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096 Oct 2 19:43:04.758703 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Oct 2 19:43:04.758710 kernel: Detected PIPT I-cache on CPU0 Oct 2 19:43:04.758716 kernel: CPU features: detected: GIC system register CPU interface Oct 2 19:43:04.758722 kernel: CPU features: detected: Hardware dirty bit management Oct 2 19:43:04.758728 kernel: CPU features: detected: Spectre-v4 Oct 2 19:43:04.758734 kernel: CPU features: detected: Spectre-BHB Oct 2 19:43:04.758741 kernel: CPU features: kernel page table isolation forced ON by KASLR Oct 2 19:43:04.758747 kernel: CPU features: detected: Kernel page table isolation (KPTI) Oct 2 19:43:04.758754 kernel: CPU features: detected: ARM erratum 1418040 Oct 2 19:43:04.758760 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024 Oct 2 19:43:04.758766 kernel: Policy zone: DMA Oct 2 19:43:04.758773 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:43:04.758779 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 19:43:04.758786 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 19:43:04.758792 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Oct 2 19:43:04.758798 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 19:43:04.758804 kernel: Memory: 2459284K/2572288K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 113004K reserved, 0K cma-reserved) Oct 2 19:43:04.758812 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Oct 2 19:43:04.758818 kernel: trace event string verifier disabled Oct 2 19:43:04.758824 kernel: rcu: Preemptible hierarchical RCU implementation. Oct 2 19:43:04.758831 kernel: rcu: RCU event tracing is enabled. Oct 2 19:43:04.758837 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Oct 2 19:43:04.758843 kernel: Trampoline variant of Tasks RCU enabled. Oct 2 19:43:04.758849 kernel: Tracing variant of Tasks RCU enabled. Oct 2 19:43:04.758856 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Oct 2 19:43:04.758862 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Oct 2 19:43:04.758868 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Oct 2 19:43:04.758874 kernel: GICv3: 256 SPIs implemented Oct 2 19:43:04.758881 kernel: GICv3: 0 Extended SPIs implemented Oct 2 19:43:04.758888 kernel: GICv3: Distributor has no Range Selector support Oct 2 19:43:04.758893 kernel: Root IRQ handler: gic_handle_irq Oct 2 19:43:04.758900 kernel: GICv3: 16 PPIs implemented Oct 2 19:43:04.758906 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Oct 2 19:43:04.758912 kernel: ACPI: SRAT not present Oct 2 19:43:04.758918 kernel: ITS [mem 0x08080000-0x0809ffff] Oct 2 19:43:04.758924 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1) Oct 2 19:43:04.758930 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1) Oct 2 19:43:04.758937 kernel: GICv3: using LPI property table @0x00000000400d0000 Oct 2 19:43:04.758943 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000 Oct 2 19:43:04.758949 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:43:04.758957 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Oct 2 19:43:04.758963 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Oct 2 19:43:04.758969 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Oct 2 19:43:04.758976 kernel: arm-pv: using stolen time PV Oct 2 19:43:04.758982 kernel: Console: colour dummy device 80x25 Oct 2 19:43:04.758989 kernel: ACPI: Core revision 20210730 Oct 2 19:43:04.758995 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Oct 2 19:43:04.759002 kernel: pid_max: default: 32768 minimum: 301 Oct 2 19:43:04.759008 kernel: LSM: Security Framework initializing Oct 2 19:43:04.759014 kernel: SELinux: Initializing. Oct 2 19:43:04.759022 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:43:04.759028 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Oct 2 19:43:04.759034 kernel: rcu: Hierarchical SRCU implementation. Oct 2 19:43:04.759041 kernel: Platform MSI: ITS@0x8080000 domain created Oct 2 19:43:04.759047 kernel: PCI/MSI: ITS@0x8080000 domain created Oct 2 19:43:04.759053 kernel: Remapping and enabling EFI services. Oct 2 19:43:04.759059 kernel: smp: Bringing up secondary CPUs ... 
Oct 2 19:43:04.759087 kernel: Detected PIPT I-cache on CPU1 Oct 2 19:43:04.759094 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Oct 2 19:43:04.759110 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000 Oct 2 19:43:04.759116 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:43:04.759122 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Oct 2 19:43:04.759129 kernel: Detected PIPT I-cache on CPU2 Oct 2 19:43:04.759135 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Oct 2 19:43:04.759142 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000 Oct 2 19:43:04.759148 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:43:04.759154 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Oct 2 19:43:04.759161 kernel: Detected PIPT I-cache on CPU3 Oct 2 19:43:04.759167 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Oct 2 19:43:04.759175 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000 Oct 2 19:43:04.759181 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Oct 2 19:43:04.759187 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Oct 2 19:43:04.759193 kernel: smp: Brought up 1 node, 4 CPUs Oct 2 19:43:04.759204 kernel: SMP: Total of 4 processors activated. Oct 2 19:43:04.759212 kernel: CPU features: detected: 32-bit EL0 Support Oct 2 19:43:04.759219 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Oct 2 19:43:04.759226 kernel: CPU features: detected: Common not Private translations Oct 2 19:43:04.759232 kernel: CPU features: detected: CRC32 instructions Oct 2 19:43:04.759239 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Oct 2 19:43:04.759246 kernel: CPU features: detected: LSE atomic instructions Oct 2 19:43:04.759252 kernel: CPU features: detected: Privileged Access Never Oct 2 19:43:04.759260 kernel: CPU features: detected: RAS Extension Support Oct 2 19:43:04.759267 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Oct 2 19:43:04.759274 kernel: CPU: All CPU(s) started at EL1 Oct 2 19:43:04.759280 kernel: alternatives: patching kernel code Oct 2 19:43:04.759287 kernel: devtmpfs: initialized Oct 2 19:43:04.759295 kernel: KASLR enabled Oct 2 19:43:04.759302 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 19:43:04.759309 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Oct 2 19:43:04.759315 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 19:43:04.759322 kernel: SMBIOS 3.0.0 present. 
Oct 2 19:43:04.759329 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Oct 2 19:43:04.759336 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 19:43:04.759342 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations Oct 2 19:43:04.759349 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Oct 2 19:43:04.759358 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Oct 2 19:43:04.759364 kernel: audit: initializing netlink subsys (disabled) Oct 2 19:43:04.759371 kernel: audit: type=2000 audit(0.048:1): state=initialized audit_enabled=0 res=1 Oct 2 19:43:04.759377 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 19:43:04.759384 kernel: cpuidle: using governor menu Oct 2 19:43:04.759391 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Oct 2 19:43:04.759397 kernel: ASID allocator initialised with 32768 entries Oct 2 19:43:04.759404 kernel: ACPI: bus type PCI registered Oct 2 19:43:04.759410 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 19:43:04.759418 kernel: Serial: AMBA PL011 UART driver Oct 2 19:43:04.759711 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 19:43:04.759719 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages Oct 2 19:43:04.759726 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 19:43:04.759733 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages Oct 2 19:43:04.759740 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 19:43:04.759746 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Oct 2 19:43:04.759753 kernel: ACPI: Added _OSI(Module Device) Oct 2 19:43:04.759760 kernel: ACPI: Added _OSI(Processor Device) Oct 2 19:43:04.759772 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 19:43:04.759778 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 19:43:04.759785 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 19:43:04.759791 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 19:43:04.759798 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 19:43:04.759875 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Oct 2 19:43:04.759903 kernel: ACPI: Interpreter enabled Oct 2 19:43:04.759912 kernel: ACPI: Using GIC for interrupt routing Oct 2 19:43:04.759919 kernel: ACPI: MCFG table detected, 1 entries Oct 2 19:43:04.759930 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Oct 2 19:43:04.759936 kernel: printk: console [ttyAMA0] enabled Oct 2 19:43:04.759943 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 19:43:04.762922 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Oct 2 19:43:04.762997 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Oct 2 19:43:04.763056 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Oct 2 19:43:04.763126 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Oct 2 19:43:04.763189 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Oct 2 19:43:04.763198 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Oct 2 19:43:04.763205 kernel: PCI host bridge to bus 0000:00 Oct 2 19:43:04.763274 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Oct 2 19:43:04.763327 kernel: pci_bus 0000:00: root bus resource [io 
0x0000-0xffff window] Oct 2 19:43:04.763379 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Oct 2 19:43:04.763431 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 19:43:04.763504 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 Oct 2 19:43:04.763616 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 19:43:04.763680 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f] Oct 2 19:43:04.763740 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff] Oct 2 19:43:04.763801 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref] Oct 2 19:43:04.763861 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref] Oct 2 19:43:04.763923 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff] Oct 2 19:43:04.763987 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f] Oct 2 19:43:04.764042 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Oct 2 19:43:04.764101 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Oct 2 19:43:04.764159 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Oct 2 19:43:04.764168 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Oct 2 19:43:04.764175 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Oct 2 19:43:04.764182 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Oct 2 19:43:04.764189 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Oct 2 19:43:04.764198 kernel: iommu: Default domain type: Translated Oct 2 19:43:04.764205 kernel: iommu: DMA domain TLB invalidation policy: strict mode Oct 2 19:43:04.764211 kernel: vgaarb: loaded Oct 2 19:43:04.764218 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 19:43:04.764225 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 19:43:04.764232 kernel: PTP clock support registered Oct 2 19:43:04.764239 kernel: Registered efivars operations Oct 2 19:43:04.764245 kernel: clocksource: Switched to clocksource arch_sys_counter Oct 2 19:43:04.764252 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 19:43:04.764261 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 19:43:04.764268 kernel: pnp: PnP ACPI init Oct 2 19:43:04.764334 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Oct 2 19:43:04.764344 kernel: pnp: PnP ACPI: found 1 devices Oct 2 19:43:04.764351 kernel: NET: Registered PF_INET protocol family Oct 2 19:43:04.764358 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 19:43:04.764365 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Oct 2 19:43:04.764372 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 19:43:04.764380 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Oct 2 19:43:04.764388 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear) Oct 2 19:43:04.764394 kernel: TCP: Hash tables configured (established 32768 bind 32768) Oct 2 19:43:04.764401 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:43:04.764408 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Oct 2 19:43:04.764415 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 19:43:04.764421 kernel: PCI: CLS 0 bytes, default 64 Oct 2 19:43:04.764428 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available Oct 2 19:43:04.764435 kernel: kvm [1]: HYP mode not available Oct 2 19:43:04.764443 kernel: Initialise system trusted keyrings Oct 2 19:43:04.764449 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Oct 2 19:43:04.764456 kernel: Key type asymmetric registered Oct 2 19:43:04.764463 kernel: Asymmetric key parser 'x509' registered Oct 2 19:43:04.764470 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 19:43:04.764477 kernel: io scheduler mq-deadline registered Oct 2 19:43:04.764483 kernel: io scheduler kyber registered Oct 2 19:43:04.764490 kernel: io scheduler bfq registered Oct 2 19:43:04.764497 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Oct 2 19:43:04.764505 kernel: ACPI: button: Power Button [PWRB] Oct 2 19:43:04.764512 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Oct 2 19:43:04.764587 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007) Oct 2 19:43:04.764597 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 19:43:04.764603 kernel: thunder_xcv, ver 1.0 Oct 2 19:43:04.764610 kernel: thunder_bgx, ver 1.0 Oct 2 19:43:04.764617 kernel: nicpf, ver 1.0 Oct 2 19:43:04.764623 kernel: nicvf, ver 1.0 Oct 2 19:43:04.764793 kernel: rtc-efi rtc-efi.0: registered as rtc0 Oct 2 19:43:04.764865 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:43:04 UTC (1696275784) Oct 2 19:43:04.764874 kernel: hid: raw HID events driver (C) Jiri Kosina Oct 2 19:43:04.764881 kernel: NET: Registered PF_INET6 protocol family Oct 2 19:43:04.764888 kernel: Segment Routing with IPv6 Oct 2 19:43:04.764923 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 19:43:04.764930 kernel: NET: Registered PF_PACKET protocol family Oct 2 19:43:04.764937 kernel: Key type dns_resolver registered Oct 2 19:43:04.764944 
kernel: registered taskstats version 1 Oct 2 19:43:04.764953 kernel: Loading compiled-in X.509 certificates Oct 2 19:43:04.764960 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d' Oct 2 19:43:04.764967 kernel: Key type .fscrypt registered Oct 2 19:43:04.764974 kernel: Key type fscrypt-provisioning registered Oct 2 19:43:04.764981 kernel: ima: No TPM chip found, activating TPM-bypass! Oct 2 19:43:04.764988 kernel: ima: Allocated hash algorithm: sha1 Oct 2 19:43:04.764994 kernel: ima: No architecture policies found Oct 2 19:43:04.765001 kernel: Freeing unused kernel memory: 34560K Oct 2 19:43:04.765008 kernel: Run /init as init process Oct 2 19:43:04.765016 kernel: with arguments: Oct 2 19:43:04.765023 kernel: /init Oct 2 19:43:04.765029 kernel: with environment: Oct 2 19:43:04.765036 kernel: HOME=/ Oct 2 19:43:04.765042 kernel: TERM=linux Oct 2 19:43:04.765049 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 19:43:04.765058 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:43:04.765067 systemd[1]: Detected virtualization kvm. Oct 2 19:43:04.765075 systemd[1]: Detected architecture arm64. Oct 2 19:43:04.765083 systemd[1]: Running in initrd. Oct 2 19:43:04.765090 systemd[1]: No hostname configured, using default hostname. Oct 2 19:43:04.765104 systemd[1]: Hostname set to . Oct 2 19:43:04.765111 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:43:04.765119 systemd[1]: Queued start job for default target initrd.target. Oct 2 19:43:04.765126 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:43:04.765133 systemd[1]: Reached target cryptsetup.target. Oct 2 19:43:04.765142 systemd[1]: Reached target paths.target. Oct 2 19:43:04.765149 systemd[1]: Reached target slices.target. Oct 2 19:43:04.765157 systemd[1]: Reached target swap.target. Oct 2 19:43:04.765164 systemd[1]: Reached target timers.target. Oct 2 19:43:04.765171 systemd[1]: Listening on iscsid.socket. Oct 2 19:43:04.765179 systemd[1]: Listening on iscsiuio.socket. Oct 2 19:43:04.765186 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 19:43:04.765195 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 19:43:04.765202 systemd[1]: Listening on systemd-journald.socket. Oct 2 19:43:04.765212 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:43:04.765219 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:43:04.765227 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:43:04.765234 systemd[1]: Reached target sockets.target. Oct 2 19:43:04.765241 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:43:04.765248 systemd[1]: Finished network-cleanup.service. Oct 2 19:43:04.765255 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 19:43:04.765264 systemd[1]: Starting systemd-journald.service... Oct 2 19:43:04.765271 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:43:04.765278 systemd[1]: Starting systemd-resolved.service... Oct 2 19:43:04.765285 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 19:43:04.765292 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:43:04.765299 systemd[1]: Finished systemd-fsck-usr.service. 
Oct 2 19:43:04.765306 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:43:04.765313 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 19:43:04.765321 kernel: audit: type=1130 audit(1696275784.759:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.765330 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 19:43:04.765341 systemd-journald[290]: Journal started Oct 2 19:43:04.765386 systemd-journald[290]: Runtime Journal (/run/log/journal/1ea593b0c4aa472db7ceebdfca2a0338) is 6.0M, max 48.7M, 42.6M free. Oct 2 19:43:04.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.752404 systemd-modules-load[291]: Inserted module 'overlay' Oct 2 19:43:04.767110 systemd[1]: Started systemd-journald.service. Oct 2 19:43:04.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.768340 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:43:04.774810 kernel: audit: type=1130 audit(1696275784.767:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.774828 kernel: audit: type=1130 audit(1696275784.770:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.770000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.769589 systemd-resolved[292]: Positive Trust Anchors: Oct 2 19:43:04.769596 systemd-resolved[292]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:43:04.783371 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 19:43:04.783395 kernel: audit: type=1130 audit(1696275784.776:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.783405 kernel: Bridge firewalling registered Oct 2 19:43:04.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.769623 systemd-resolved[292]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:43:04.774575 systemd-resolved[292]: Defaulting to hostname 'linux'. 
Oct 2 19:43:04.775738 systemd[1]: Started systemd-resolved.service. Oct 2 19:43:04.790000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.777143 systemd[1]: Reached target nss-lookup.target. Oct 2 19:43:04.795541 kernel: audit: type=1130 audit(1696275784.790:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.783936 systemd-modules-load[291]: Inserted module 'br_netfilter' Oct 2 19:43:04.789421 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 19:43:04.791752 systemd[1]: Starting dracut-cmdline.service... Oct 2 19:43:04.799547 kernel: SCSI subsystem initialized Oct 2 19:43:04.802670 dracut-cmdline[307]: dracut-dracut-053 Oct 2 19:43:04.805381 dracut-cmdline[307]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca Oct 2 19:43:04.812668 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 19:43:04.812689 kernel: device-mapper: uevent: version 1.0.3 Oct 2 19:43:04.812699 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 19:43:04.813469 systemd-modules-load[291]: Inserted module 'dm_multipath' Oct 2 19:43:04.814682 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:43:04.818951 kernel: audit: type=1130 audit(1696275784.815:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.816347 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:43:04.826944 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:43:04.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.831588 kernel: audit: type=1130 audit(1696275784.827:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.884554 kernel: Loading iSCSI transport class v2.0-870. Oct 2 19:43:04.896546 kernel: iscsi: registered transport (tcp) Oct 2 19:43:04.912542 kernel: iscsi: registered transport (qla4xxx) Oct 2 19:43:04.912583 kernel: QLogic iSCSI HBA Driver Oct 2 19:43:04.962791 systemd[1]: Finished dracut-cmdline.service. Oct 2 19:43:04.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:04.965920 systemd[1]: Starting dracut-pre-udev.service... 
Oct 2 19:43:04.968270 kernel: audit: type=1130 audit(1696275784.963:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:05.013555 kernel: raid6: neonx8 gen() 13335 MB/s Oct 2 19:43:05.030554 kernel: raid6: neonx8 xor() 10742 MB/s Oct 2 19:43:05.047541 kernel: raid6: neonx4 gen() 13374 MB/s Oct 2 19:43:05.064543 kernel: raid6: neonx4 xor() 10997 MB/s Oct 2 19:43:05.081536 kernel: raid6: neonx2 gen() 13081 MB/s Oct 2 19:43:05.098545 kernel: raid6: neonx2 xor() 10108 MB/s Oct 2 19:43:05.115541 kernel: raid6: neonx1 gen() 10357 MB/s Oct 2 19:43:05.132534 kernel: raid6: neonx1 xor() 8703 MB/s Oct 2 19:43:05.149544 kernel: raid6: int64x8 gen() 6196 MB/s Oct 2 19:43:05.166540 kernel: raid6: int64x8 xor() 3495 MB/s Oct 2 19:43:05.183551 kernel: raid6: int64x4 gen() 7261 MB/s Oct 2 19:43:05.200541 kernel: raid6: int64x4 xor() 3694 MB/s Oct 2 19:43:05.217540 kernel: raid6: int64x2 gen() 5947 MB/s Oct 2 19:43:05.234541 kernel: raid6: int64x2 xor() 3245 MB/s Oct 2 19:43:05.251547 kernel: raid6: int64x1 gen() 4920 MB/s Oct 2 19:43:05.268792 kernel: raid6: int64x1 xor() 2550 MB/s Oct 2 19:43:05.268813 kernel: raid6: using algorithm neonx4 gen() 13374 MB/s Oct 2 19:43:05.268822 kernel: raid6: .... xor() 10997 MB/s, rmw enabled Oct 2 19:43:05.269918 kernel: raid6: using neon recovery algorithm Oct 2 19:43:05.280931 kernel: xor: measuring software checksum speed Oct 2 19:43:05.280948 kernel: 8regs : 17289 MB/sec Oct 2 19:43:05.281929 kernel: 32regs : 20749 MB/sec Oct 2 19:43:05.282800 kernel: arm64_neon : 27863 MB/sec Oct 2 19:43:05.283541 kernel: xor: using function: arm64_neon (27863 MB/sec) Oct 2 19:43:05.337545 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no Oct 2 19:43:05.350251 systemd[1]: Finished dracut-pre-udev.service. Oct 2 19:43:05.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:05.354000 audit: BPF prog-id=7 op=LOAD Oct 2 19:43:05.354536 kernel: audit: type=1130 audit(1696275785.350:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:05.354000 audit: BPF prog-id=8 op=LOAD Oct 2 19:43:05.354941 systemd[1]: Starting systemd-udevd.service... Oct 2 19:43:05.368421 systemd-udevd[492]: Using default interface naming scheme 'v252'. Oct 2 19:43:05.371717 systemd[1]: Started systemd-udevd.service. Oct 2 19:43:05.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:05.373983 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 19:43:05.387079 dracut-pre-trigger[500]: rd.md=0: removing MD RAID activation Oct 2 19:43:05.421916 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 19:43:05.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:05.423556 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:43:05.461743 systemd[1]: Finished systemd-udev-trigger.service. 
Oct 2 19:43:05.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:05.495215 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB) Oct 2 19:43:05.499540 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:43:05.530621 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 19:43:05.532336 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (544) Oct 2 19:43:05.535216 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 19:43:05.536258 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 19:43:05.544883 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 19:43:05.548613 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:43:05.552262 systemd[1]: Starting disk-uuid.service... Oct 2 19:43:05.561536 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:43:06.574549 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Oct 2 19:43:06.574874 disk-uuid[564]: The operation has completed successfully. Oct 2 19:43:06.603027 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 19:43:06.603133 systemd[1]: Finished disk-uuid.service. Oct 2 19:43:06.604947 systemd[1]: Starting verity-setup.service... Oct 2 19:43:06.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:06.603000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:06.623569 kernel: device-mapper: verity: sha256 using implementation "sha256-ce" Oct 2 19:43:06.658299 systemd[1]: Found device dev-mapper-usr.device. Oct 2 19:43:06.659910 systemd[1]: Mounting sysusr-usr.mount... Oct 2 19:43:06.660813 systemd[1]: Finished verity-setup.service. Oct 2 19:43:06.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:06.719552 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 19:43:06.720455 systemd[1]: Mounted sysusr-usr.mount. Oct 2 19:43:06.721363 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 19:43:06.722051 systemd[1]: Starting ignition-setup.service... Oct 2 19:43:06.725496 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 19:43:06.735930 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:43:06.735986 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:43:06.735996 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:43:06.756897 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 19:43:06.830959 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 19:43:06.831000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:06.832000 audit: BPF prog-id=9 op=LOAD Oct 2 19:43:06.849439 systemd[1]: Starting systemd-networkd.service... Oct 2 19:43:06.855970 systemd[1]: Finished ignition-setup.service. Oct 2 19:43:06.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:06.857693 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 19:43:06.882884 systemd-networkd[734]: lo: Link UP Oct 2 19:43:06.882899 systemd-networkd[734]: lo: Gained carrier Oct 2 19:43:06.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:06.883885 systemd-networkd[734]: Enumeration completed Oct 2 19:43:06.884009 systemd[1]: Started systemd-networkd.service. Oct 2 19:43:06.884263 systemd-networkd[734]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:43:06.884983 systemd[1]: Reached target network.target. Oct 2 19:43:06.886620 systemd-networkd[734]: eth0: Link UP Oct 2 19:43:06.886624 systemd-networkd[734]: eth0: Gained carrier Oct 2 19:43:06.887256 systemd[1]: Starting iscsiuio.service... Oct 2 19:43:06.902542 systemd[1]: Started iscsiuio.service. Oct 2 19:43:06.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:06.904314 systemd[1]: Starting iscsid.service... Oct 2 19:43:06.907800 systemd-networkd[734]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:43:06.909616 iscsid[741]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:43:06.909616 iscsid[741]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Oct 2 19:43:06.909616 iscsid[741]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 19:43:06.909616 iscsid[741]: If using hardware iscsi like qla4xxx this message can be ignored. Oct 2 19:43:06.909616 iscsid[741]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 19:43:06.909616 iscsid[741]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 19:43:06.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:06.913369 systemd[1]: Started iscsid.service. Oct 2 19:43:06.918598 systemd[1]: Starting dracut-initqueue.service... Oct 2 19:43:06.934000 systemd[1]: Finished dracut-initqueue.service. Oct 2 19:43:06.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:06.935110 systemd[1]: Reached target remote-fs-pre.target. Oct 2 19:43:06.936665 systemd[1]: Reached target remote-cryptsetup.target.
Oct 2 19:43:06.938298 systemd[1]: Reached target remote-fs.target. Oct 2 19:43:06.940686 systemd[1]: Starting dracut-pre-mount.service... Oct 2 19:43:06.950063 systemd[1]: Finished dracut-pre-mount.service. Oct 2 19:43:06.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:06.967313 ignition[736]: Ignition 2.14.0 Oct 2 19:43:06.967322 ignition[736]: Stage: fetch-offline Oct 2 19:43:06.967361 ignition[736]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:43:06.967370 ignition[736]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:43:06.967509 ignition[736]: parsed url from cmdline: "" Oct 2 19:43:06.967512 ignition[736]: no config URL provided Oct 2 19:43:06.967517 ignition[736]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 19:43:06.967545 ignition[736]: no config at "/usr/lib/ignition/user.ign" Oct 2 19:43:06.967564 ignition[736]: op(1): [started] loading QEMU firmware config module Oct 2 19:43:06.967568 ignition[736]: op(1): executing: "modprobe" "qemu_fw_cfg" Oct 2 19:43:06.976683 ignition[736]: op(1): [finished] loading QEMU firmware config module Oct 2 19:43:06.994752 ignition[736]: parsing config with SHA512: 287da9d786e8c3e74ead3a1032b32c0d15015f01c4169e201ae69ace9b8963f34895b2e6ff47e35f591d4a9f976a0b821d19bf83e70e80844519518fafa34745 Oct 2 19:43:07.012481 systemd-resolved[292]: Detected conflict on linux IN A 10.0.0.11 Oct 2 19:43:07.012498 systemd-resolved[292]: Hostname conflict, changing published hostname from 'linux' to 'linux3'. Oct 2 19:43:07.013135 ignition[736]: fetch-offline: fetch-offline passed Oct 2 19:43:07.016000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:07.012622 unknown[736]: fetched base config from "system" Oct 2 19:43:07.013211 ignition[736]: Ignition finished successfully Oct 2 19:43:07.012629 unknown[736]: fetched user config from "qemu" Oct 2 19:43:07.014650 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 19:43:07.016848 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json). Oct 2 19:43:07.017663 systemd[1]: Starting ignition-kargs.service... Oct 2 19:43:07.027706 ignition[762]: Ignition 2.14.0 Oct 2 19:43:07.027716 ignition[762]: Stage: kargs Oct 2 19:43:07.027808 ignition[762]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:43:07.027818 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:43:07.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:07.030044 systemd[1]: Finished ignition-kargs.service. Oct 2 19:43:07.028583 ignition[762]: kargs: kargs passed Oct 2 19:43:07.032169 systemd[1]: Starting ignition-disks.service... Oct 2 19:43:07.028624 ignition[762]: Ignition finished successfully Oct 2 19:43:07.040851 ignition[768]: Ignition 2.14.0 Oct 2 19:43:07.040864 ignition[768]: Stage: disks Oct 2 19:43:07.042631 systemd[1]: Finished ignition-disks.service. 
Oct 2 19:43:07.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:07.040958 ignition[768]: no configs at "/usr/lib/ignition/base.d" Oct 2 19:43:07.043926 systemd[1]: Reached target initrd-root-device.target. Oct 2 19:43:07.040967 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:43:07.045218 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:43:07.041755 ignition[768]: disks: disks passed Oct 2 19:43:07.046643 systemd[1]: Reached target local-fs.target. Oct 2 19:43:07.041799 ignition[768]: Ignition finished successfully Oct 2 19:43:07.048042 systemd[1]: Reached target sysinit.target. Oct 2 19:43:07.049634 systemd[1]: Reached target basic.target. Oct 2 19:43:07.051835 systemd[1]: Starting systemd-fsck-root.service... Oct 2 19:43:07.064944 systemd-fsck[777]: ROOT: clean, 603/553520 files, 56011/553472 blocks Oct 2 19:43:07.069399 systemd[1]: Finished systemd-fsck-root.service. Oct 2 19:43:07.071000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:07.072849 systemd[1]: Mounting sysroot.mount... Oct 2 19:43:07.083531 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:43:07.084004 systemd[1]: Mounted sysroot.mount. Oct 2 19:43:07.084825 systemd[1]: Reached target initrd-root-fs.target. Oct 2 19:43:07.087136 systemd[1]: Mounting sysroot-usr.mount... Oct 2 19:43:07.088070 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 19:43:07.088134 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 19:43:07.088164 systemd[1]: Reached target ignition-diskful.target. Oct 2 19:43:07.090811 systemd[1]: Mounted sysroot-usr.mount. Oct 2 19:43:07.093587 systemd[1]: Starting initrd-setup-root.service... Oct 2 19:43:07.099533 initrd-setup-root[787]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 19:43:07.104941 initrd-setup-root[795]: cut: /sysroot/etc/group: No such file or directory Oct 2 19:43:07.113135 initrd-setup-root[803]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:43:07.117049 initrd-setup-root[811]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:43:07.150168 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:43:07.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:07.152804 systemd[1]: Starting ignition-mount.service... Oct 2 19:43:07.154395 systemd[1]: Starting sysroot-boot.service... Oct 2 19:43:07.160432 bash[828]: umount: /sysroot/usr/share/oem: not mounted. Oct 2 19:43:07.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:07.171853 systemd[1]: Finished sysroot-boot.service. 
Oct 2 19:43:07.173606 ignition[830]: INFO : Ignition 2.14.0 Oct 2 19:43:07.173606 ignition[830]: INFO : Stage: mount Oct 2 19:43:07.173606 ignition[830]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:43:07.173606 ignition[830]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:43:07.173606 ignition[830]: INFO : mount: mount passed Oct 2 19:43:07.173606 ignition[830]: INFO : Ignition finished successfully Oct 2 19:43:07.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:07.174765 systemd[1]: Finished ignition-mount.service. Oct 2 19:43:07.673856 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 19:43:07.692024 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (838) Oct 2 19:43:07.694164 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Oct 2 19:43:07.694180 kernel: BTRFS info (device vda6): using free space tree Oct 2 19:43:07.694189 kernel: BTRFS info (device vda6): has skinny extents Oct 2 19:43:07.702227 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 19:43:07.703961 systemd[1]: Starting ignition-files.service... Oct 2 19:43:07.721767 ignition[858]: INFO : Ignition 2.14.0 Oct 2 19:43:07.721767 ignition[858]: INFO : Stage: files Oct 2 19:43:07.723500 ignition[858]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:43:07.723500 ignition[858]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:43:07.723500 ignition[858]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:43:07.727379 ignition[858]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:43:07.727379 ignition[858]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:43:07.730169 ignition[858]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:43:07.731705 ignition[858]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:43:07.733079 ignition[858]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:43:07.732185 unknown[858]: wrote ssh authorized keys file for user: core Oct 2 19:43:07.736043 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:43:07.736043 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1 Oct 2 19:43:07.913013 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Oct 2 19:43:08.228086 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742 Oct 2 19:43:08.231171 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz" Oct 2 19:43:08.231171 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 19:43:08.231171 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET 
https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-arm64.tar.gz: attempt #1 Oct 2 19:43:08.245979 systemd-networkd[734]: eth0: Gained IPv6LL Oct 2 19:43:08.312273 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Oct 2 19:43:08.388570 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: ebd055e9b2888624d006decd582db742131ed815d059d529ba21eaf864becca98a84b20a10eec91051b9d837c6855d28d5042bf5e9a454f4540aec6b82d37e96 Oct 2 19:43:08.391537 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-arm64.tar.gz" Oct 2 19:43:08.391537 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:43:08.391537 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubeadm: attempt #1 Oct 2 19:43:08.433680 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Oct 2 19:43:08.723772 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: daab8965a4f617d1570d04c031ab4d55fff6aa13a61f0e4045f2338947f9fb0ee3a80fdee57cfe86db885390595460342181e1ec52b89f127ef09c393ae3db7f Oct 2 19:43:08.723772 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:43:08.728416 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:43:08.728416 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/arm64/kubelet: attempt #1 Oct 2 19:43:08.759135 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Oct 2 19:43:09.752244 ignition[858]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 7b872a34d86e8aa75455a62a20f5cf16426de2ae54ffb8e0250fead920838df818201b8512c2f8bf4c939e5b21babab371f3a48803e2e861da9e6f8cdd022324 Oct 2 19:43:09.757377 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:43:09.757377 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:43:09.757377 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:43:09.757377 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:43:09.757377 ignition[858]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:43:09.757377 ignition[858]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:43:09.757377 ignition[858]: INFO : files: op(9): op(a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:43:09.757377 ignition[858]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:43:09.757377 ignition[858]: INFO : 
files: op(9): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:43:09.757377 ignition[858]: INFO : files: op(b): [started] processing unit "prepare-critools.service" Oct 2 19:43:09.757377 ignition[858]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:43:09.757377 ignition[858]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:43:09.757377 ignition[858]: INFO : files: op(b): [finished] processing unit "prepare-critools.service" Oct 2 19:43:09.757377 ignition[858]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Oct 2 19:43:09.757377 ignition[858]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:43:09.757377 ignition[858]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service" Oct 2 19:43:09.757377 ignition[858]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Oct 2 19:43:09.757377 ignition[858]: INFO : files: op(f): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:43:09.795623 ignition[858]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:43:09.795623 ignition[858]: INFO : files: op(10): [started] setting preset to disabled for "coreos-metadata.service" Oct 2 19:43:09.795623 ignition[858]: INFO : files: op(10): op(11): [started] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:43:09.801683 ignition[858]: INFO : files: op(10): op(11): [finished] removing enablement symlink(s) for "coreos-metadata.service" Oct 2 19:43:09.805809 ignition[858]: INFO : files: op(10): [finished] setting preset to disabled for "coreos-metadata.service" Oct 2 19:43:09.805809 ignition[858]: INFO : files: op(12): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:43:09.805809 ignition[858]: INFO : files: op(12): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:43:09.805809 ignition[858]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:43:09.805809 ignition[858]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:43:09.805809 ignition[858]: INFO : files: files passed Oct 2 19:43:09.805809 ignition[858]: INFO : Ignition finished successfully Oct 2 19:43:09.824923 kernel: kauditd_printk_skb: 23 callbacks suppressed Oct 2 19:43:09.824947 kernel: audit: type=1130 audit(1696275789.806:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.824959 kernel: audit: type=1130 audit(1696275789.821:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.824969 kernel: audit: type=1131 audit(1696275789.821:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:09.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.821000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.805708 systemd[1]: Finished ignition-files.service. Oct 2 19:43:09.831600 kernel: audit: type=1130 audit(1696275789.828:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.807624 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:43:09.812011 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:43:09.835722 initrd-setup-root-after-ignition[883]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory Oct 2 19:43:09.812969 systemd[1]: Starting ignition-quench.service... Oct 2 19:43:09.838782 initrd-setup-root-after-ignition[886]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:43:09.820387 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:43:09.820495 systemd[1]: Finished ignition-quench.service. Oct 2 19:43:09.822198 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:43:09.828422 systemd[1]: Reached target ignition-complete.target. Oct 2 19:43:09.833428 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:43:09.855392 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:43:09.855496 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:43:09.862656 kernel: audit: type=1130 audit(1696275789.857:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.862722 kernel: audit: type=1131 audit(1696275789.857:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.857329 systemd[1]: Reached target initrd-fs.target. Oct 2 19:43:09.863286 systemd[1]: Reached target initrd.target. Oct 2 19:43:09.864993 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. 
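The Ignition DEBUG lines above show each fetched artifact (crictl, kubeadm, kubelet) being compared against an expected SHA-512 digest before it is written under /sysroot. A minimal Python sketch of the same kind of check, reusing the crictl digest logged above; the local path is a hypothetical stand-in, not taken from this log.

import hashlib

# Expected SHA-512 for crictl-v1.24.2-linux-arm64.tar.gz, copied from the
# Ignition DEBUG line above.
EXPECTED = ("ebd055e9b2888624d006decd582db742131ed815d059d529ba21eaf864becca9"
            "8a84b20a10eec91051b9d837c6855d28d5042bf5e9a454f4540aec6b82d37e96")

def sha512_of(path, chunk=1 << 20):
    h = hashlib.sha512()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

if __name__ == "__main__":
    # Hypothetical local copy of the archive; adjust to wherever it was saved.
    path = "crictl-v1.24.2-linux-arm64.tar.gz"
    print("OK" if sha512_of(path) == EXPECTED else "MISMATCH")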
Oct 2 19:43:09.865972 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:43:09.879229 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:43:09.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.881235 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:43:09.885023 kernel: audit: type=1130 audit(1696275789.880:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.890542 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:43:09.891515 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:43:09.893221 systemd[1]: Stopped target timers.target. Oct 2 19:43:09.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.894087 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 19:43:09.902424 kernel: audit: type=1131 audit(1696275789.897:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.894221 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:43:09.897994 systemd[1]: Stopped target initrd.target. Oct 2 19:43:09.901632 systemd[1]: Stopped target basic.target. Oct 2 19:43:09.903230 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:43:09.904715 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:43:09.906152 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:43:09.907630 systemd[1]: Stopped target remote-fs.target. Oct 2 19:43:09.909088 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:43:09.910516 systemd[1]: Stopped target sysinit.target. Oct 2 19:43:09.911918 systemd[1]: Stopped target local-fs.target. Oct 2 19:43:09.913266 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:43:09.916000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.914768 systemd[1]: Stopped target swap.target. Oct 2 19:43:09.921784 kernel: audit: type=1131 audit(1696275789.916:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.916006 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:43:09.922000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.916144 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:43:09.927243 kernel: audit: type=1131 audit(1696275789.922:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.926000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:09.917495 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:43:09.921237 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:43:09.921350 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:43:09.922700 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:43:09.922819 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:43:09.926745 systemd[1]: Stopped target paths.target. Oct 2 19:43:09.927981 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:43:09.931566 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:43:09.933149 systemd[1]: Stopped target slices.target. Oct 2 19:43:09.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.934401 systemd[1]: Stopped target sockets.target. Oct 2 19:43:09.938000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.936064 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:43:09.936285 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:43:09.942401 iscsid[741]: iscsid shutting down. Oct 2 19:43:09.937756 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:43:09.937857 systemd[1]: Stopped ignition-files.service. Oct 2 19:43:09.940102 systemd[1]: Stopping ignition-mount.service... Oct 2 19:43:09.941675 systemd[1]: Stopping iscsid.service... Oct 2 19:43:09.945000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.947000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.943759 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:43:09.945087 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:43:09.945253 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:43:09.950000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.952105 ignition[899]: INFO : Ignition 2.14.0 Oct 2 19:43:09.952105 ignition[899]: INFO : Stage: umount Oct 2 19:43:09.952105 ignition[899]: INFO : no configs at "/usr/lib/ignition/base.d" Oct 2 19:43:09.952105 ignition[899]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu" Oct 2 19:43:09.952105 ignition[899]: INFO : umount: umount passed Oct 2 19:43:09.952105 ignition[899]: INFO : Ignition finished successfully Oct 2 19:43:09.957000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:09.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.946683 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:43:09.946774 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:43:09.949790 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:43:09.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.949893 systemd[1]: Stopped iscsid.service. Oct 2 19:43:09.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.951648 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:43:09.969000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.951716 systemd[1]: Closed iscsid.socket. Oct 2 19:43:09.953455 systemd[1]: Stopping iscsiuio.service... Oct 2 19:43:09.956344 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:43:09.956885 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:43:09.956972 systemd[1]: Stopped iscsiuio.service. Oct 2 19:43:09.958252 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:43:09.958334 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:43:09.960259 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:43:09.960338 systemd[1]: Stopped ignition-mount.service. Oct 2 19:43:09.978000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.962245 systemd[1]: Stopped target network.target. Oct 2 19:43:09.963187 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:43:09.963238 systemd[1]: Closed iscsiuio.socket. Oct 2 19:43:09.964853 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:43:09.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.964906 systemd[1]: Stopped ignition-disks.service. Oct 2 19:43:09.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.966418 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:43:09.966461 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:43:09.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.967932 systemd[1]: ignition-setup.service: Deactivated successfully. 
Oct 2 19:43:09.967971 systemd[1]: Stopped ignition-setup.service. Oct 2 19:43:09.969315 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:43:09.970874 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:43:09.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.975579 systemd-networkd[734]: eth0: DHCPv6 lease lost Oct 2 19:43:09.996000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:43:09.977255 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:43:09.977348 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:43:10.002000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:43:10.002000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.979571 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:43:10.003000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.979615 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:43:09.981721 systemd[1]: Stopping network-cleanup.service... Oct 2 19:43:09.983157 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:43:10.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.983235 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:43:10.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.985260 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:43:10.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.985314 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:43:09.987485 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:43:09.987549 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:43:09.989602 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:43:10.015000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:10.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.993636 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:43:10.019000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:09.994248 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:43:09.994352 systemd[1]: Stopped systemd-resolved.service. 
Oct 2 19:43:10.022000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:10.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:10.000637 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:43:10.000770 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:43:10.002903 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:43:10.002988 systemd[1]: Stopped network-cleanup.service. Oct 2 19:43:10.004223 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:43:10.004259 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:43:10.005798 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:43:10.005831 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:43:10.007305 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:43:10.007352 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:43:10.008916 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:43:10.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:10.008958 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:43:10.010608 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:43:10.036000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:10.010650 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:43:10.013039 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:43:10.013948 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:43:10.014014 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:43:10.016585 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:43:10.016625 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:43:10.018235 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:43:10.018283 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:43:10.020789 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:43:10.021297 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:43:10.021384 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:43:10.032824 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:43:10.032920 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:43:10.034215 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:43:10.035643 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:43:10.035699 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:43:10.037945 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:43:10.045770 systemd[1]: Switching root. Oct 2 19:43:10.062088 systemd-journald[290]: Journal stopped Oct 2 19:43:12.186384 systemd-journald[290]: Received SIGTERM from PID 1 (systemd). 
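Each unit stopped during the initrd teardown above is mirrored by a SERVICE_START/SERVICE_STOP audit record. A small sketch, assuming the audit line format seen in this log, that pulls the unit name and action out of such records.

import re

# Assumed format, based on the audit records in this log:
#   ... audit[1]: SERVICE_STOP pid=1 ... msg='unit=<name> comm="systemd" ...'
AUDIT_RE = re.compile(r"audit\[\d+\]: (SERVICE_START|SERVICE_STOP) .*?unit=([\w@\\.:-]+)")

def unit_events(lines):
    for line in lines:
        m = AUDIT_RE.search(line)
        if m:
            action, unit = m.groups()
            yield unit, "start" if action == "SERVICE_START" else "stop"

sample = ("Oct 2 19:43:10.033000 audit[1]: SERVICE_STOP pid=1 uid=0 "
          "msg='unit=sysroot-boot comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\"'")
print(list(unit_events([sample])))  # [('sysroot-boot', 'stop')]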
Oct 2 19:43:12.186505 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:43:12.186538 kernel: SELinux: Class anon_inode not defined in policy. Oct 2 19:43:12.186549 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:43:12.186559 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:43:12.186569 kernel: SELinux: policy capability open_perms=1 Oct 2 19:43:12.186578 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:43:12.186588 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:43:12.186599 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:43:12.186608 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:43:12.186628 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:43:12.186639 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:43:12.186650 systemd[1]: Successfully loaded SELinux policy in 35.096ms. Oct 2 19:43:12.186672 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.278ms. Oct 2 19:43:12.186685 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:43:12.186696 systemd[1]: Detected virtualization kvm. Oct 2 19:43:12.186706 systemd[1]: Detected architecture arm64. Oct 2 19:43:12.186717 systemd[1]: Detected first boot. Oct 2 19:43:12.186728 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:43:12.186739 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:43:12.186750 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:43:12.186762 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:43:12.186774 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:43:12.186785 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:43:12.186796 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:43:12.186808 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:43:12.186819 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:43:12.186839 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:43:12.186850 systemd[1]: Created slice system-getty.slice. Oct 2 19:43:12.186860 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:43:12.186870 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:43:12.186881 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:43:12.186892 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:43:12.186903 systemd[1]: Created slice user.slice. Oct 2 19:43:12.186915 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:43:12.186925 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:43:12.186936 systemd[1]: Set up automount boot.automount. Oct 2 19:43:12.186946 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
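The "systemd 252 running in system mode" line above lists the build options the binary was compiled with as a +/- feature string. A trivial sketch splitting that string (copied from the line above, with the trailing default-hierarchy token omitted) into enabled and disabled sets.

# Feature string copied from the "systemd 252 running in system mode" line above.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS "
            "+OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD "
            "+LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 "
            "+BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT")
enabled  = sorted(f[1:] for f in features.split() if f[0] == "+")
disabled = sorted(f[1:] for f in features.split() if f[0] == "-")
print("enabled: ", ", ".join(enabled))
print("disabled:", ", ".join(disabled))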
Oct 2 19:43:12.186957 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:43:12.186967 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:43:12.186977 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:43:12.186988 systemd[1]: Reached target integritysetup.target. Oct 2 19:43:12.187000 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:43:12.187011 systemd[1]: Reached target remote-fs.target. Oct 2 19:43:12.187022 systemd[1]: Reached target slices.target. Oct 2 19:43:12.187032 systemd[1]: Reached target swap.target. Oct 2 19:43:12.187042 systemd[1]: Reached target torcx.target. Oct 2 19:43:12.187052 systemd[1]: Reached target veritysetup.target. Oct 2 19:43:12.187063 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:43:12.187079 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:43:12.187089 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:43:12.187102 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:43:12.187112 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:43:12.187122 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:43:12.187133 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:43:12.187144 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:43:12.187154 systemd[1]: Mounting media.mount... Oct 2 19:43:12.187165 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:43:12.187175 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:43:12.187185 systemd[1]: Mounting tmp.mount... Oct 2 19:43:12.187196 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:43:12.187208 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:43:12.187219 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:43:12.187230 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:43:12.187240 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:43:12.187250 systemd[1]: Starting modprobe@drm.service... Oct 2 19:43:12.187261 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:43:12.187271 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:43:12.187281 systemd[1]: Starting modprobe@loop.service... Oct 2 19:43:12.187291 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:43:12.187305 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:43:12.187316 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:43:12.187326 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:43:12.187336 kernel: fuse: init (API version 7.34) Oct 2 19:43:12.187346 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:43:12.187355 kernel: loop: module loaded Oct 2 19:43:12.187365 systemd[1]: Stopped systemd-journald.service. Oct 2 19:43:12.187376 systemd[1]: Starting systemd-journald.service... Oct 2 19:43:12.187386 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:43:12.187397 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:43:12.187407 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:43:12.187418 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:43:12.187428 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:43:12.187438 systemd[1]: Stopped verity-setup.service. Oct 2 19:43:12.187449 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:43:12.187459 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:43:12.187470 systemd[1]: Mounted media.mount. 
Oct 2 19:43:12.187480 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:43:12.187492 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:43:12.187502 systemd[1]: Mounted tmp.mount. Oct 2 19:43:12.187513 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:43:12.187533 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:43:12.187548 systemd-journald[1007]: Journal started Oct 2 19:43:12.187591 systemd-journald[1007]: Runtime Journal (/run/log/journal/1ea593b0c4aa472db7ceebdfca2a0338) is 6.0M, max 48.7M, 42.6M free. Oct 2 19:43:10.132000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:43:10.296000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:43:10.296000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:43:10.296000 audit: BPF prog-id=10 op=LOAD Oct 2 19:43:10.296000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:43:10.296000 audit: BPF prog-id=11 op=LOAD Oct 2 19:43:10.296000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:43:12.046000 audit: BPF prog-id=12 op=LOAD Oct 2 19:43:12.046000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:43:12.046000 audit: BPF prog-id=13 op=LOAD Oct 2 19:43:12.046000 audit: BPF prog-id=14 op=LOAD Oct 2 19:43:12.046000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:43:12.046000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:43:12.047000 audit: BPF prog-id=15 op=LOAD Oct 2 19:43:12.047000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:43:12.047000 audit: BPF prog-id=16 op=LOAD Oct 2 19:43:12.047000 audit: BPF prog-id=17 op=LOAD Oct 2 19:43:12.047000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:43:12.047000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:43:12.048000 audit: BPF prog-id=18 op=LOAD Oct 2 19:43:12.048000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:43:12.048000 audit: BPF prog-id=19 op=LOAD Oct 2 19:43:12.048000 audit: BPF prog-id=20 op=LOAD Oct 2 19:43:12.048000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:43:12.048000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:43:12.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.056000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:43:12.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:12.153000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.154000 audit: BPF prog-id=21 op=LOAD Oct 2 19:43:12.154000 audit: BPF prog-id=22 op=LOAD Oct 2 19:43:12.154000 audit: BPF prog-id=23 op=LOAD Oct 2 19:43:12.154000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:43:12.154000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:43:12.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.184000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:43:12.184000 audit[1007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffdc7cb840 a2=4000 a3=1 items=0 ppid=1 pid=1007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:12.184000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:43:12.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:10.341278 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:43:12.045361 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:43:10.341880 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:43:12.045375 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:43:10.341901 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:43:12.049059 systemd[1]: systemd-journald.service: Deactivated successfully. 
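The SYSCALL audit records in this log carry only numeric arch/syscall fields (arch=c00000b7 syscall=211 for systemd-journald above, syscall=206 for auditctl later on). A hedged decoding sketch: the arch constant and the two syscall names below are a hand-written excerpt of the generic arm64 table as an assumption, not something stated in this log.

# Assumed mapping (arm64 generic syscall table); not authoritative for this machine.
AUDIT_ARCH_AARCH64 = 0xC00000B7
ARM64_SYSCALLS = {206: "sendto", 211: "sendmsg"}

for arch, nr in ((0xC00000B7, 211), (0xC00000B7, 206)):
    arch_name = "aarch64" if arch == AUDIT_ARCH_AARCH64 else hex(arch)
    print(arch_name, nr, ARM64_SYSCALLS.get(nr, "unknown"))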
Oct 2 19:43:10.341931 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:43:10.341942 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:43:10.341972 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:43:10.341985 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:43:10.342216 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:43:10.342253 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:43:10.342265 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:43:10.343010 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:43:10.343050 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:43:10.343070 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:43:12.189597 systemd[1]: Started systemd-journald.service. Oct 2 19:43:10.343094 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:43:10.343113 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:43:12.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:10.343128 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:10Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:43:11.781458 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:11Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:43:11.781735 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:11Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:43:11.781838 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:11Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:43:11.782110 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:11Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:43:11.782163 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:11Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:43:11.782218 /usr/lib/systemd/system-generators/torcx-generator[933]: time="2023-10-02T19:43:11Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:43:12.190340 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:43:12.190510 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:43:12.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.191761 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:43:12.191915 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:43:12.192000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.193082 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:43:12.193257 systemd[1]: Finished modprobe@drm.service. 
Oct 2 19:43:12.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.194352 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:43:12.194502 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:43:12.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.194000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.195753 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:43:12.196109 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:43:12.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.197361 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:43:12.197514 systemd[1]: Finished modprobe@loop.service. Oct 2 19:43:12.197000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.197000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.198655 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:43:12.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.199984 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:43:12.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.201287 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:43:12.202000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.202710 systemd[1]: Reached target network-pre.target. Oct 2 19:43:12.204937 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:43:12.206937 systemd[1]: Mounting sys-kernel-config.mount... 
Oct 2 19:43:12.207735 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:43:12.209632 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:43:12.211508 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:43:12.212643 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:43:12.213755 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:43:12.214648 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:43:12.218899 systemd-journald[1007]: Time spent on flushing to /var/log/journal/1ea593b0c4aa472db7ceebdfca2a0338 is 17.582ms for 993 entries. Oct 2 19:43:12.218899 systemd-journald[1007]: System Journal (/var/log/journal/1ea593b0c4aa472db7ceebdfca2a0338) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:43:12.256387 systemd-journald[1007]: Received client request to flush runtime journal. Oct 2 19:43:12.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.215858 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:43:12.217742 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:43:12.220370 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:43:12.222516 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:43:12.257266 udevadm[1033]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:43:12.223633 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:43:12.225895 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:43:12.230848 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:43:12.232235 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:43:12.238713 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:43:12.244496 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:43:12.246643 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:43:12.257452 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:43:12.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.280433 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:43:12.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 19:43:12.614183 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:43:12.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.615000 audit: BPF prog-id=24 op=LOAD Oct 2 19:43:12.615000 audit: BPF prog-id=25 op=LOAD Oct 2 19:43:12.615000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:43:12.615000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:43:12.616689 systemd[1]: Starting systemd-udevd.service... Oct 2 19:43:12.639156 systemd-udevd[1038]: Using default interface naming scheme 'v252'. Oct 2 19:43:12.654072 systemd[1]: Started systemd-udevd.service. Oct 2 19:43:12.654000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.655000 audit: BPF prog-id=26 op=LOAD Oct 2 19:43:12.657722 systemd[1]: Starting systemd-networkd.service... Oct 2 19:43:12.663000 audit: BPF prog-id=27 op=LOAD Oct 2 19:43:12.663000 audit: BPF prog-id=28 op=LOAD Oct 2 19:43:12.663000 audit: BPF prog-id=29 op=LOAD Oct 2 19:43:12.664775 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:43:12.679845 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Oct 2 19:43:12.698827 systemd[1]: Started systemd-userdbd.service. Oct 2 19:43:12.699000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.728493 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:43:12.758025 systemd-networkd[1046]: lo: Link UP Oct 2 19:43:12.758323 systemd-networkd[1046]: lo: Gained carrier Oct 2 19:43:12.758747 systemd-networkd[1046]: Enumeration completed Oct 2 19:43:12.758946 systemd[1]: Started systemd-networkd.service. Oct 2 19:43:12.759044 systemd-networkd[1046]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:43:12.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.762403 systemd-networkd[1046]: eth0: Link UP Oct 2 19:43:12.762512 systemd-networkd[1046]: eth0: Gained carrier Oct 2 19:43:12.766898 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:43:12.767000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.769238 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:43:12.779034 lvm[1071]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:43:12.784660 systemd-networkd[1046]: eth0: DHCPv4 address 10.0.0.11/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:43:12.808467 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:43:12.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:12.809628 systemd[1]: Reached target cryptsetup.target. Oct 2 19:43:12.811752 systemd[1]: Starting lvm2-activation.service... Oct 2 19:43:12.815939 lvm[1072]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:43:12.844504 systemd[1]: Finished lvm2-activation.service. Oct 2 19:43:12.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.845508 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:43:12.846369 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:43:12.846405 systemd[1]: Reached target local-fs.target. Oct 2 19:43:12.847239 systemd[1]: Reached target machines.target. Oct 2 19:43:12.849333 systemd[1]: Starting ldconfig.service... Oct 2 19:43:12.850567 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:43:12.850627 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:43:12.852014 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:43:12.854044 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:43:12.856432 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:43:12.857578 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:43:12.857621 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:43:12.858669 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:43:12.860971 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1074 (bootctl) Oct 2 19:43:12.862217 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:43:12.875634 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:43:12.880000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:12.883239 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 19:43:12.887228 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:43:12.889356 systemd-tmpfiles[1077]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:43:13.010915 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:43:13.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:13.030552 systemd-fsck[1083]: fsck.fat 4.2 (2021-01-31) Oct 2 19:43:13.030552 systemd-fsck[1083]: /dev/vda1: 236 files, 113463/258078 clusters Oct 2 19:43:13.032466 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
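systemd-networkd above reports a DHCPv4 lease of 10.0.0.11/16 with gateway 10.0.0.1 on eth0. A quick sketch with Python's ipaddress module showing what that prefix implies.

import ipaddress

# Lease as logged above: 10.0.0.11/16, gateway 10.0.0.1 (acquired from 10.0.0.1).
iface = ipaddress.ip_interface("10.0.0.11/16")
gw = ipaddress.ip_address("10.0.0.1")
print(iface.network)                # 10.0.0.0/16
print(iface.network.num_addresses)  # 65536 addresses in the prefix
print(gw in iface.network)          # True: the gateway is on-link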
Oct 2 19:43:13.033000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:13.140798 ldconfig[1073]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:43:13.144057 systemd[1]: Finished ldconfig.service. Oct 2 19:43:13.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:13.174860 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:43:13.176341 systemd[1]: Mounting boot.mount... Oct 2 19:43:13.183852 systemd[1]: Mounted boot.mount. Oct 2 19:43:13.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:13.191131 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:43:13.244604 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:43:13.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:13.248951 systemd[1]: Starting audit-rules.service... Oct 2 19:43:13.250771 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:43:13.252616 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:43:13.253000 audit: BPF prog-id=30 op=LOAD Oct 2 19:43:13.256000 audit: BPF prog-id=31 op=LOAD Oct 2 19:43:13.255376 systemd[1]: Starting systemd-resolved.service... Oct 2 19:43:13.258509 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:43:13.260964 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:43:13.269608 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:43:13.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:13.270748 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:43:13.275000 audit[1097]: SYSTEM_BOOT pid=1097 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:43:13.280276 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:43:13.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:13.283785 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:43:13.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:13.286169 systemd[1]: Starting systemd-update-done.service... 
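The fsck.fat summary above reports 236 files and 113463 of 258078 clusters in use on /dev/vda1 (the EFI-SYSTEM partition). The arithmetic:

# fsck.fat figures copied from the systemd-fsck output above.
files, used, total = 236, 113463, 258078
print(f"{used}/{total} clusters = {used / total:.1%} in use")  # ~44.0%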
Oct 2 19:43:13.293500 systemd[1]: Finished systemd-update-done.service. Oct 2 19:43:13.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:13.294000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:43:13.294000 audit[1107]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff5eb8940 a2=420 a3=0 items=0 ppid=1086 pid=1107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:13.294000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:43:13.294980 augenrules[1107]: No rules Oct 2 19:43:13.296241 systemd[1]: Finished audit-rules.service. Oct 2 19:43:13.314484 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:43:13.314496 systemd-resolved[1090]: Positive Trust Anchors: Oct 2 19:43:13.314506 systemd-resolved[1090]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:43:13.314560 systemd-resolved[1090]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:43:13.315667 systemd[1]: Reached target time-set.target. Oct 2 19:43:13.811967 systemd-timesyncd[1096]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:43:13.812075 systemd-timesyncd[1096]: Initial clock synchronization to Mon 2023-10-02 19:43:13.811868 UTC. Oct 2 19:43:13.823863 systemd-resolved[1090]: Defaulting to hostname 'linux'. Oct 2 19:43:13.825352 systemd[1]: Started systemd-resolved.service. Oct 2 19:43:13.826335 systemd[1]: Reached target network.target. Oct 2 19:43:13.827144 systemd[1]: Reached target nss-lookup.target. Oct 2 19:43:13.827962 systemd[1]: Reached target sysinit.target. Oct 2 19:43:13.828840 systemd[1]: Started motdgen.path. Oct 2 19:43:13.829572 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:43:13.830795 systemd[1]: Started logrotate.timer. Oct 2 19:43:13.831632 systemd[1]: Started mdadm.timer. Oct 2 19:43:13.832300 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:43:13.833161 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:43:13.833197 systemd[1]: Reached target paths.target. Oct 2 19:43:13.833936 systemd[1]: Reached target timers.target. Oct 2 19:43:13.835071 systemd[1]: Listening on dbus.socket. Oct 2 19:43:13.836913 systemd[1]: Starting docker.socket... Oct 2 19:43:13.840430 systemd[1]: Listening on sshd.socket. Oct 2 19:43:13.841303 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:43:13.841799 systemd[1]: Listening on docker.socket. 
Oct 2 19:43:13.842719 systemd[1]: Reached target sockets.target. Oct 2 19:43:13.843472 systemd[1]: Reached target basic.target. Oct 2 19:43:13.844255 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:43:13.844287 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:43:13.845350 systemd[1]: Starting containerd.service... Oct 2 19:43:13.847270 systemd[1]: Starting dbus.service... Oct 2 19:43:13.849171 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:43:13.851265 systemd[1]: Starting extend-filesystems.service... Oct 2 19:43:13.852096 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:43:13.853330 systemd[1]: Starting motdgen.service... Oct 2 19:43:13.855167 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:43:13.859702 systemd[1]: Starting prepare-critools.service... Oct 2 19:43:13.861993 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:43:13.863817 systemd[1]: Starting sshd-keygen.service... Oct 2 19:43:13.865360 jq[1118]: false Oct 2 19:43:13.867709 systemd[1]: Starting systemd-logind.service... Oct 2 19:43:13.868801 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:43:13.868862 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:43:13.870289 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:43:13.870947 systemd[1]: Starting update-engine.service... Oct 2 19:43:13.872791 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:43:13.875544 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:43:13.875806 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:43:13.876117 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:43:13.876682 jq[1135]: true Oct 2 19:43:13.876266 systemd[1]: Finished motdgen.service. Oct 2 19:43:13.879933 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:43:13.880193 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:43:13.889606 tar[1138]: ./ Oct 2 19:43:13.889606 tar[1138]: ./macvlan Oct 2 19:43:13.891292 tar[1140]: crictl Oct 2 19:43:13.892525 jq[1142]: true Oct 2 19:43:13.906785 extend-filesystems[1119]: Found vda Oct 2 19:43:13.907980 extend-filesystems[1119]: Found vda1 Oct 2 19:43:13.907980 extend-filesystems[1119]: Found vda2 Oct 2 19:43:13.907980 extend-filesystems[1119]: Found vda3 Oct 2 19:43:13.907980 extend-filesystems[1119]: Found usr Oct 2 19:43:13.907980 extend-filesystems[1119]: Found vda4 Oct 2 19:43:13.907980 extend-filesystems[1119]: Found vda6 Oct 2 19:43:13.907980 extend-filesystems[1119]: Found vda7 Oct 2 19:43:13.907980 extend-filesystems[1119]: Found vda9 Oct 2 19:43:13.907980 extend-filesystems[1119]: Checking size of /dev/vda9 Oct 2 19:43:13.922313 dbus-daemon[1117]: [system] SELinux support is enabled Oct 2 19:43:13.922526 systemd[1]: Started dbus.service. 
Oct 2 19:43:13.925724 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:43:13.925803 systemd[1]: Reached target system-config.target. Oct 2 19:43:13.926906 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:43:13.926947 systemd[1]: Reached target user-config.target. Oct 2 19:43:13.934063 extend-filesystems[1119]: Old size kept for /dev/vda9 Oct 2 19:43:13.934232 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:43:13.934390 systemd[1]: Finished extend-filesystems.service. Oct 2 19:43:13.957229 tar[1138]: ./static Oct 2 19:43:13.976730 systemd-logind[1133]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:43:13.976915 systemd-logind[1133]: New seat seat0. Oct 2 19:43:13.988226 systemd[1]: Started systemd-logind.service. Oct 2 19:43:13.998297 bash[1170]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:43:13.999292 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:43:14.003145 tar[1138]: ./vlan Oct 2 19:43:14.004078 update_engine[1134]: I1002 19:43:14.003841 1134 main.cc:92] Flatcar Update Engine starting Oct 2 19:43:14.013443 systemd[1]: Started update-engine.service. Oct 2 19:43:14.013794 update_engine[1134]: I1002 19:43:14.013760 1134 update_check_scheduler.cc:74] Next update check in 3m54s Oct 2 19:43:14.019877 systemd[1]: Started locksmithd.service. Oct 2 19:43:14.042529 tar[1138]: ./portmap Oct 2 19:43:14.063322 env[1145]: time="2023-10-02T19:43:14.063215888Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:43:14.082692 tar[1138]: ./host-local Oct 2 19:43:14.088816 env[1145]: time="2023-10-02T19:43:14.088766328Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:43:14.089122 env[1145]: time="2023-10-02T19:43:14.089098688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:43:14.097799 env[1145]: time="2023-10-02T19:43:14.096918328Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:43:14.097799 env[1145]: time="2023-10-02T19:43:14.096956168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:43:14.097799 env[1145]: time="2023-10-02T19:43:14.097214968Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:43:14.097799 env[1145]: time="2023-10-02T19:43:14.097235528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:43:14.097799 env[1145]: time="2023-10-02T19:43:14.097249208Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:43:14.097799 env[1145]: time="2023-10-02T19:43:14.097258368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:43:14.097799 env[1145]: time="2023-10-02T19:43:14.097332648Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:43:14.097799 env[1145]: time="2023-10-02T19:43:14.097758688Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:43:14.098217 env[1145]: time="2023-10-02T19:43:14.098193128Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:43:14.098283 env[1145]: time="2023-10-02T19:43:14.098269528Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:43:14.098396 env[1145]: time="2023-10-02T19:43:14.098377368Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:43:14.098469 env[1145]: time="2023-10-02T19:43:14.098455808Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:43:14.102552 env[1145]: time="2023-10-02T19:43:14.102454328Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:43:14.102552 env[1145]: time="2023-10-02T19:43:14.102491408Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.102659688Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.102702888Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.102719128Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.102733008Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.102745648Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.103091008Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.103114008Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.103127568Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.103139408Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." 
type=io.containerd.service.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.103152448Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.103269008Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.103340488Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.103596168Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:43:14.104618 env[1145]: time="2023-10-02T19:43:14.103622688Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.103638368Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.103805168Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.103818848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.103831848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.103844048Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.103856528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.103869528Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.103881888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.103900648Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.103918608Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.104059368Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.104194928Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.104210848Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.104931 env[1145]: time="2023-10-02T19:43:14.104223128Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:43:14.105195 env[1145]: time="2023-10-02T19:43:14.104237448Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." 
error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:43:14.105195 env[1145]: time="2023-10-02T19:43:14.104248688Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:43:14.105195 env[1145]: time="2023-10-02T19:43:14.104266608Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:43:14.105195 env[1145]: time="2023-10-02T19:43:14.104302728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:43:14.105273 env[1145]: time="2023-10-02T19:43:14.104534448Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:43:14.105273 env[1145]: time="2023-10-02T19:43:14.104589688Z" level=info msg="Connect containerd service" Oct 2 19:43:14.110793 env[1145]: time="2023-10-02T19:43:14.105752288Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:43:14.110793 env[1145]: time="2023-10-02T19:43:14.108817208Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:43:14.110793 env[1145]: time="2023-10-02T19:43:14.109013488Z" level=info msg="Start subscribing containerd event" Oct 2 19:43:14.110793 env[1145]: time="2023-10-02T19:43:14.109063088Z" level=info msg="Start recovering state" 
Oct 2 19:43:14.110793 env[1145]: time="2023-10-02T19:43:14.109126848Z" level=info msg="Start event monitor" Oct 2 19:43:14.110793 env[1145]: time="2023-10-02T19:43:14.109150328Z" level=info msg="Start snapshots syncer" Oct 2 19:43:14.110793 env[1145]: time="2023-10-02T19:43:14.109163128Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:43:14.110793 env[1145]: time="2023-10-02T19:43:14.109170088Z" level=info msg="Start streaming server" Oct 2 19:43:14.110793 env[1145]: time="2023-10-02T19:43:14.109620848Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:43:14.110793 env[1145]: time="2023-10-02T19:43:14.109665728Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:43:14.110793 env[1145]: time="2023-10-02T19:43:14.109755488Z" level=info msg="containerd successfully booted in 0.049844s" Oct 2 19:43:14.109863 systemd[1]: Started containerd.service. Oct 2 19:43:14.111351 tar[1138]: ./vrf Oct 2 19:43:14.145847 tar[1138]: ./bridge Oct 2 19:43:14.165569 locksmithd[1171]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:43:14.181885 tar[1138]: ./tuning Oct 2 19:43:14.210345 tar[1138]: ./firewall Oct 2 19:43:14.233618 systemd[1]: Finished prepare-critools.service. Oct 2 19:43:14.246275 tar[1138]: ./host-device Oct 2 19:43:14.277423 tar[1138]: ./sbr Oct 2 19:43:14.305493 tar[1138]: ./loopback Oct 2 19:43:14.332118 tar[1138]: ./dhcp Oct 2 19:43:14.397249 tar[1138]: ./ptp Oct 2 19:43:14.425191 tar[1138]: ./ipvlan Oct 2 19:43:14.436709 systemd-networkd[1046]: eth0: Gained IPv6LL Oct 2 19:43:14.452800 tar[1138]: ./bandwidth Oct 2 19:43:14.493168 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:43:15.904866 sshd_keygen[1141]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:43:15.925281 systemd[1]: Finished sshd-keygen.service. Oct 2 19:43:15.927612 systemd[1]: Starting issuegen.service... Oct 2 19:43:15.933330 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:43:15.933488 systemd[1]: Finished issuegen.service. Oct 2 19:43:15.935859 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:43:15.943053 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:43:15.945617 systemd[1]: Started getty@tty1.service. Oct 2 19:43:15.947876 systemd[1]: Started serial-getty@ttyAMA0.service. Oct 2 19:43:15.949107 systemd[1]: Reached target getty.target. Oct 2 19:43:15.949964 systemd[1]: Reached target multi-user.target. Oct 2 19:43:15.952185 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:43:15.959756 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:43:15.959933 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:43:15.961119 systemd[1]: Startup finished in 656ms (kernel) + 5.511s (initrd) + 5.371s (userspace) = 11.539s. Oct 2 19:43:17.579824 systemd[1]: Created slice system-sshd.slice. Oct 2 19:43:17.580886 systemd[1]: Started sshd@0-10.0.0.11:22-10.0.0.1:46986.service. Oct 2 19:43:17.633727 sshd[1199]: Accepted publickey for core from 10.0.0.1 port 46986 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:17.637861 sshd[1199]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:17.648614 systemd-logind[1133]: New session 1 of user core. Oct 2 19:43:17.649496 systemd[1]: Created slice user-500.slice. Oct 2 19:43:17.650730 systemd[1]: Starting user-runtime-dir@500.service... 
Oct 2 19:43:17.659460 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:43:17.660800 systemd[1]: Starting user@500.service... Oct 2 19:43:17.663888 (systemd)[1202]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:17.723624 systemd[1202]: Queued start job for default target default.target. Oct 2 19:43:17.724127 systemd[1202]: Reached target paths.target. Oct 2 19:43:17.724145 systemd[1202]: Reached target sockets.target. Oct 2 19:43:17.724156 systemd[1202]: Reached target timers.target. Oct 2 19:43:17.724166 systemd[1202]: Reached target basic.target. Oct 2 19:43:17.724215 systemd[1202]: Reached target default.target. Oct 2 19:43:17.724243 systemd[1202]: Startup finished in 54ms. Oct 2 19:43:17.724428 systemd[1]: Started user@500.service. Oct 2 19:43:17.725434 systemd[1]: Started session-1.scope. Oct 2 19:43:17.778190 systemd[1]: Started sshd@1-10.0.0.11:22-10.0.0.1:46994.service. Oct 2 19:43:17.814283 sshd[1211]: Accepted publickey for core from 10.0.0.1 port 46994 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:17.815468 sshd[1211]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:17.818947 systemd-logind[1133]: New session 2 of user core. Oct 2 19:43:17.819904 systemd[1]: Started session-2.scope. Oct 2 19:43:17.879337 sshd[1211]: pam_unix(sshd:session): session closed for user core Oct 2 19:43:17.882742 systemd[1]: Started sshd@2-10.0.0.11:22-10.0.0.1:47004.service. Oct 2 19:43:17.883155 systemd[1]: sshd@1-10.0.0.11:22-10.0.0.1:46994.service: Deactivated successfully. Oct 2 19:43:17.883917 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:43:17.884437 systemd-logind[1133]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:43:17.885366 systemd-logind[1133]: Removed session 2. Oct 2 19:43:17.921388 sshd[1216]: Accepted publickey for core from 10.0.0.1 port 47004 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:17.923025 sshd[1216]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:17.926550 systemd-logind[1133]: New session 3 of user core. Oct 2 19:43:17.927211 systemd[1]: Started session-3.scope. Oct 2 19:43:17.978321 sshd[1216]: pam_unix(sshd:session): session closed for user core Oct 2 19:43:17.982838 systemd[1]: sshd@2-10.0.0.11:22-10.0.0.1:47004.service: Deactivated successfully. Oct 2 19:43:17.983990 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:43:17.985032 systemd-logind[1133]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:43:17.986991 systemd[1]: Started sshd@3-10.0.0.11:22-10.0.0.1:47006.service. Oct 2 19:43:17.991582 systemd-logind[1133]: Removed session 3. Oct 2 19:43:18.027767 sshd[1223]: Accepted publickey for core from 10.0.0.1 port 47006 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:18.029157 sshd[1223]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:18.033569 systemd[1]: Started session-4.scope. Oct 2 19:43:18.033893 systemd-logind[1133]: New session 4 of user core. Oct 2 19:43:18.089792 sshd[1223]: pam_unix(sshd:session): session closed for user core Oct 2 19:43:18.093381 systemd[1]: sshd@3-10.0.0.11:22-10.0.0.1:47006.service: Deactivated successfully. Oct 2 19:43:18.093983 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:43:18.094587 systemd-logind[1133]: Session 4 logged out. Waiting for processes to exit. 
Oct 2 19:43:18.095726 systemd[1]: Started sshd@4-10.0.0.11:22-10.0.0.1:47008.service. Oct 2 19:43:18.096401 systemd-logind[1133]: Removed session 4. Oct 2 19:43:18.136611 sshd[1229]: Accepted publickey for core from 10.0.0.1 port 47008 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:18.138273 sshd[1229]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:18.141635 systemd-logind[1133]: New session 5 of user core. Oct 2 19:43:18.142449 systemd[1]: Started session-5.scope. Oct 2 19:43:18.205099 sudo[1233]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:43:18.205295 sudo[1233]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:43:18.222643 dbus-daemon[1117]: avc: received setenforce notice (enforcing=1) Oct 2 19:43:18.223848 sudo[1233]: pam_unix(sudo:session): session closed for user root Oct 2 19:43:18.225922 sshd[1229]: pam_unix(sshd:session): session closed for user core Oct 2 19:43:18.229978 systemd[1]: Started sshd@5-10.0.0.11:22-10.0.0.1:47016.service. Oct 2 19:43:18.230449 systemd[1]: sshd@4-10.0.0.11:22-10.0.0.1:47008.service: Deactivated successfully. Oct 2 19:43:18.231158 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:43:18.231719 systemd-logind[1133]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:43:18.232401 systemd-logind[1133]: Removed session 5. Oct 2 19:43:18.266666 sshd[1236]: Accepted publickey for core from 10.0.0.1 port 47016 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:18.267964 sshd[1236]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:18.271216 systemd-logind[1133]: New session 6 of user core. Oct 2 19:43:18.272039 systemd[1]: Started session-6.scope. Oct 2 19:43:18.324811 sudo[1241]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:43:18.325022 sudo[1241]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:43:18.327923 sudo[1241]: pam_unix(sudo:session): session closed for user root Oct 2 19:43:18.332752 sudo[1240]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:43:18.332944 sudo[1240]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:43:18.342224 systemd[1]: Stopping audit-rules.service... Oct 2 19:43:18.342000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:43:18.343784 auditctl[1244]: No rules Oct 2 19:43:18.344051 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:43:18.344206 systemd[1]: Stopped audit-rules.service. 
Oct 2 19:43:18.345998 kernel: kauditd_printk_skb: 129 callbacks suppressed Oct 2 19:43:18.346067 kernel: audit: type=1305 audit(1696275798.342:169): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:43:18.346088 kernel: audit: type=1300 audit(1696275798.342:169): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc1ffc7b0 a2=420 a3=0 items=0 ppid=1 pid=1244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:18.342000 audit[1244]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc1ffc7b0 a2=420 a3=0 items=0 ppid=1 pid=1244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:18.345652 systemd[1]: Starting audit-rules.service... Oct 2 19:43:18.342000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:43:18.350525 kernel: audit: type=1327 audit(1696275798.342:169): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:43:18.350570 kernel: audit: type=1131 audit(1696275798.342:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.364382 augenrules[1261]: No rules Oct 2 19:43:18.365113 systemd[1]: Finished audit-rules.service. Oct 2 19:43:18.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.367974 sudo[1240]: pam_unix(sudo:session): session closed for user root Oct 2 19:43:18.368518 kernel: audit: type=1130 audit(1696275798.363:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.366000 audit[1240]: USER_END pid=1240 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.366000 audit[1240]: CRED_DISP pid=1240 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.372995 kernel: audit: type=1106 audit(1696275798.366:172): pid=1240 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.373070 kernel: audit: type=1104 audit(1696275798.366:173): pid=1240 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:43:18.372847 sshd[1236]: pam_unix(sshd:session): session closed for user core Oct 2 19:43:18.375867 systemd[1]: Started sshd@6-10.0.0.11:22-10.0.0.1:47020.service. Oct 2 19:43:18.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.11:22-10.0.0.1:47020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.378126 systemd[1]: sshd@5-10.0.0.11:22-10.0.0.1:47016.service: Deactivated successfully. Oct 2 19:43:18.378729 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:43:18.379312 kernel: audit: type=1130 audit(1696275798.374:174): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.11:22-10.0.0.1:47020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.374000 audit[1236]: USER_END pid=1236 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:18.383157 kernel: audit: type=1106 audit(1696275798.374:175): pid=1236 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:18.383289 systemd-logind[1133]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:43:18.374000 audit[1236]: CRED_DISP pid=1236 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:18.386220 kernel: audit: type=1104 audit(1696275798.374:176): pid=1236 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:18.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.11:22-10.0.0.1:47016 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.386917 systemd-logind[1133]: Removed session 6. 
Oct 2 19:43:18.413000 audit[1266]: USER_ACCT pid=1266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:18.415598 sshd[1266]: Accepted publickey for core from 10.0.0.1 port 47020 ssh2: RSA SHA256:2lB5IF9sS6KpIpFExr0lw+xbST4N8bo2+5EMLLOpcG8 Oct 2 19:43:18.415000 audit[1266]: CRED_ACQ pid=1266 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:18.415000 audit[1266]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffe9dc49a0 a2=3 a3=1 items=0 ppid=1 pid=1266 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:18.415000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:43:18.416913 sshd[1266]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:43:18.420172 systemd-logind[1133]: New session 7 of user core. Oct 2 19:43:18.421052 systemd[1]: Started session-7.scope. Oct 2 19:43:18.422000 audit[1266]: USER_START pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:18.424000 audit[1269]: CRED_ACQ pid=1269 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:18.472000 audit[1270]: USER_ACCT pid=1270 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.474334 sudo[1270]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:43:18.472000 audit[1270]: CRED_REFR pid=1270 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.474569 sudo[1270]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:43:18.474000 audit[1270]: USER_START pid=1270 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:18.999902 systemd[1]: Reloading. 
Oct 2 19:43:19.050084 /usr/lib/systemd/system-generators/torcx-generator[1300]: time="2023-10-02T19:43:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:43:19.050396 /usr/lib/systemd/system-generators/torcx-generator[1300]: time="2023-10-02T19:43:19Z" level=info msg="torcx already run" Oct 2 19:43:19.117875 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:43:19.117894 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:43:19.137499 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:43:19.183000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.183000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.183000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.183000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.183000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.183000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.183000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.183000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.183000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit: BPF prog-id=37 op=LOAD Oct 2 19:43:19.184000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit: BPF prog-id=38 op=LOAD Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.184000 audit: BPF prog-id=39 
op=LOAD Oct 2 19:43:19.184000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:43:19.184000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit: BPF prog-id=40 op=LOAD Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.186000 audit: BPF prog-id=41 op=LOAD Oct 2 19:43:19.186000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:43:19.186000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit: BPF prog-id=42 op=LOAD Oct 2 19:43:19.187000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit: BPF prog-id=43 op=LOAD Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.187000 audit: BPF prog-id=44 op=LOAD Oct 2 19:43:19.187000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:43:19.187000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:43:19.188000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.188000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.188000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.188000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.188000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.188000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.188000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.188000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.188000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.188000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.188000 audit: BPF prog-id=45 op=LOAD Oct 2 19:43:19.188000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit: BPF prog-id=46 op=LOAD Oct 2 19:43:19.189000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit: BPF prog-id=47 op=LOAD Oct 2 19:43:19.190000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit: BPF prog-id=48 op=LOAD Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit: BPF prog-id=49 op=LOAD Oct 2 19:43:19.190000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:43:19.190000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.190000 audit: BPF prog-id=50 op=LOAD Oct 2 19:43:19.190000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:43:19.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:43:19.191000 audit: BPF prog-id=51 op=LOAD
Oct 2 19:43:19.191000 audit: BPF prog-id=26 op=UNLOAD
Oct 2 19:43:19.199755 systemd[1]: Starting systemd-networkd-wait-online.service...
Oct 2 19:43:19.206309 systemd[1]: Finished systemd-networkd-wait-online.service.
Oct 2 19:43:19.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:43:19.207002 systemd[1]: Reached target network-online.target.
Oct 2 19:43:19.208988 systemd[1]: Started kubelet.service.
Oct 2 19:43:19.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:43:19.221582 systemd[1]: Starting coreos-metadata.service...
Oct 2 19:43:19.230295 systemd[1]: coreos-metadata.service: Deactivated successfully.
Oct 2 19:43:19.230482 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:43:19.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:43:19.229000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:43:19.466748 kubelet[1338]: E1002 19:43:19.466619 1338 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct 2 19:43:19.469289 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:43:19.469419 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:43:19.468000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:43:19.592654 systemd[1]: Stopped kubelet.service.
Oct 2 19:43:19.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:43:19.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:43:19.610356 systemd[1]: Reloading.
Oct 2 19:43:19.655944 /usr/lib/systemd/system-generators/torcx-generator[1406]: time="2023-10-02T19:43:19Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:43:19.655972 /usr/lib/systemd/system-generators/torcx-generator[1406]: time="2023-10-02T19:43:19Z" level=info msg="torcx already run" Oct 2 19:43:19.781798 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:43:19.781818 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:43:19.798828 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:43:19.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.841000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.841000 audit: BPF prog-id=52 op=LOAD Oct 2 19:43:19.841000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:43:19.841000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.841000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit: BPF prog-id=53 op=LOAD Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.842000 audit: BPF prog-id=54 
op=LOAD Oct 2 19:43:19.842000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:43:19.842000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:43:19.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.843000 audit: BPF prog-id=55 op=LOAD Oct 2 19:43:19.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.843000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.843000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit: BPF prog-id=56 op=LOAD Oct 2 19:43:19.844000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:43:19.844000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit: BPF prog-id=57 op=LOAD Oct 2 19:43:19.844000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit: BPF prog-id=58 op=LOAD Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.844000 audit: BPF prog-id=59 op=LOAD Oct 2 19:43:19.844000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:43:19.844000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:43:19.845000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.845000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.845000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.845000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.845000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.846000 audit: BPF prog-id=60 op=LOAD Oct 2 19:43:19.846000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:43:19.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.846000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.846000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:43:19.847000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.847000 audit: BPF prog-id=61 op=LOAD Oct 2 19:43:19.847000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:43:19.847000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.847000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.847000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.847000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.847000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.847000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.847000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.847000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.847000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit: BPF prog-id=62 op=LOAD Oct 2 19:43:19.848000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit: BPF prog-id=63 op=LOAD Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit: BPF prog-id=64 op=LOAD Oct 2 19:43:19.848000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:43:19.848000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: 
denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.848000 audit: BPF prog-id=65 op=LOAD Oct 2 19:43:19.848000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:43:19.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.849000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:19.849000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0
Oct 2 19:43:19.849000 audit: BPF prog-id=66 op=LOAD
Oct 2 19:43:19.849000 audit: BPF prog-id=51 op=UNLOAD
Oct 2 19:43:19.863064 systemd[1]: Started kubelet.service.
Oct 2 19:43:19.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:43:19.911037 kubelet[1444]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote'
Oct 2 19:43:19.911037 kubelet[1444]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Oct 2 19:43:19.911037 kubelet[1444]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 2 19:43:19.911374 kubelet[1444]: I1002 19:43:19.911167 1444 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Oct 2 19:43:19.912384 kubelet[1444]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote'
Oct 2 19:43:19.912384 kubelet[1444]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Oct 2 19:43:19.912384 kubelet[1444]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Oct 2 19:43:20.618512 kubelet[1444]: I1002 19:43:20.618479 1444 server.go:413] "Kubelet version" kubeletVersion="v1.25.10"
Oct 2 19:43:20.618512 kubelet[1444]: I1002 19:43:20.618513 1444 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Oct 2 19:43:20.618766 kubelet[1444]: I1002 19:43:20.618732 1444 server.go:825] "Client rotation is on, will bootstrap in background"
Oct 2 19:43:20.622372 kubelet[1444]: I1002 19:43:20.622281 1444 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Oct 2 19:43:20.624479 kubelet[1444]: W1002 19:43:20.624454 1444 machine.go:65] Cannot read vendor id correctly, set empty.
Oct 2 19:43:20.625156 kubelet[1444]: I1002 19:43:20.625137 1444 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified.
defaulting to /" Oct 2 19:43:20.625410 kubelet[1444]: I1002 19:43:20.625390 1444 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:43:20.625458 kubelet[1444]: I1002 19:43:20.625453 1444 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:43:20.625633 kubelet[1444]: I1002 19:43:20.625614 1444 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:43:20.625633 kubelet[1444]: I1002 19:43:20.625628 1444 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:43:20.625720 kubelet[1444]: I1002 19:43:20.625710 1444 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:43:20.629602 kubelet[1444]: I1002 19:43:20.629582 1444 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:43:20.629694 kubelet[1444]: I1002 19:43:20.629685 1444 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:43:20.629772 kubelet[1444]: I1002 19:43:20.629762 1444 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:43:20.629832 kubelet[1444]: I1002 19:43:20.629823 1444 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:43:20.630816 kubelet[1444]: E1002 19:43:20.630721 1444 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:20.631854 kubelet[1444]: I1002 19:43:20.631827 1444 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:43:20.632556 kubelet[1444]: W1002 19:43:20.632530 1444 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Oct 2 19:43:20.634476 kubelet[1444]: E1002 19:43:20.634439 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:20.635351 kubelet[1444]: I1002 19:43:20.635303 1444 server.go:1175] "Started kubelet" Oct 2 19:43:20.634000 audit[1444]: AVC avc: denied { mac_admin } for pid=1444 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:20.634000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:43:20.634000 audit[1444]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400082cc60 a1=4000136888 a2=400082cc30 a3=25 items=0 ppid=1 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.634000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:43:20.634000 audit[1444]: AVC avc: denied { mac_admin } for pid=1444 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:20.634000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:43:20.634000 audit[1444]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000dae0e0 a1=40001368a0 a2=400082ccf0 a3=25 items=0 ppid=1 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.634000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:43:20.636704 kubelet[1444]: I1002 19:43:20.636446 1444 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:43:20.636704 kubelet[1444]: I1002 19:43:20.636476 1444 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:43:20.636704 kubelet[1444]: E1002 19:43:20.636558 1444 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:43:20.636704 kubelet[1444]: E1002 19:43:20.636578 1444 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:43:20.636704 kubelet[1444]: I1002 19:43:20.636628 1444 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:43:20.637578 kubelet[1444]: I1002 19:43:20.637339 1444 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:43:20.638597 kubelet[1444]: I1002 19:43:20.638574 1444 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:43:20.638674 kubelet[1444]: I1002 19:43:20.638654 1444 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:43:20.639253 kubelet[1444]: I1002 19:43:20.639232 1444 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:43:20.639631 kubelet[1444]: E1002 19:43:20.639532 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da79f4da40", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 635267648, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 635267648, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:20.639820 kubelet[1444]: W1002 19:43:20.639756 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:43:20.639820 kubelet[1444]: E1002 19:43:20.639780 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:43:20.639820 kubelet[1444]: W1002 19:43:20.639806 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:43:20.639820 kubelet[1444]: E1002 19:43:20.639814 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:43:20.640015 kubelet[1444]: E1002 19:43:20.639996 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:20.641223 kubelet[1444]: E1002 19:43:20.640849 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7a08bbc8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 636570568, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 636570568, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:20.641223 kubelet[1444]: W1002 19:43:20.641026 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:43:20.641223 kubelet[1444]: E1002 19:43:20.641059 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:43:20.641453 kubelet[1444]: E1002 19:43:20.641098 1444 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.11" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:43:20.659033 kubelet[1444]: I1002 19:43:20.658990 1444 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:43:20.659033 kubelet[1444]: I1002 19:43:20.659008 1444 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:43:20.659163 kubelet[1444]: I1002 19:43:20.659045 1444 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:43:20.661107 kubelet[1444]: I1002 19:43:20.661079 1444 policy_none.go:49] "None policy: Start" Oct 2 19:43:20.661182 kubelet[1444]: E1002 19:43:20.660888 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54dd18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658337048, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658337048, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:20.663195 kubelet[1444]: I1002 19:43:20.661633 1444 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:43:20.663195 kubelet[1444]: I1002 19:43:20.661658 1444 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:43:20.663195 kubelet[1444]: E1002 19:43:20.662009 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54f280", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658342528, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658342528, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:20.663309 kubelet[1444]: E1002 19:43:20.662836 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54fbb8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658344888, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658344888, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:20.662000 audit[1461]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.662000 audit[1461]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffffab11b90 a2=0 a3=1 items=0 ppid=1444 pid=1461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.662000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:43:20.663000 audit[1466]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.663000 audit[1466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffecc4fc60 a2=0 a3=1 items=0 ppid=1444 pid=1466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.663000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:43:20.666181 systemd[1]: Created slice kubepods.slice. Oct 2 19:43:20.670046 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:43:20.672641 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:43:20.684312 kubelet[1444]: I1002 19:43:20.684289 1444 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:43:20.682000 audit[1444]: AVC avc: denied { mac_admin } for pid=1444 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:20.682000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:43:20.682000 audit[1444]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000cba5d0 a1=4000ccc138 a2=4000cba5a0 a3=25 items=0 ppid=1 pid=1444 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.682000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:43:20.684614 kubelet[1444]: I1002 19:43:20.684375 1444 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:43:20.685176 kubelet[1444]: I1002 19:43:20.685158 1444 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:43:20.685807 kubelet[1444]: E1002 19:43:20.685792 1444 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.11\" not found" Oct 2 19:43:20.667000 audit[1468]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1468 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.667000 audit[1468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd7e89070 a2=0 a3=1 items=0 ppid=1444 pid=1468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.667000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:43:20.688418 kubelet[1444]: E1002 19:43:20.688342 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7cf87120", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 685834528, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 685834528, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:20.687000 audit[1474]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1474 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.687000 audit[1474]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffeecf86d0 a2=0 a3=1 items=0 ppid=1444 pid=1474 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.687000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:43:20.716000 audit[1479]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.716000 audit[1479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=fffff7f9a9e0 a2=0 a3=1 items=0 ppid=1444 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.716000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:43:20.718000 audit[1480]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.718000 audit[1480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc1a43650 a2=0 a3=1 items=0 ppid=1444 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.718000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:43:20.723000 audit[1483]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1483 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.723000 audit[1483]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffe04c6180 a2=0 a3=1 items=0 ppid=1444 pid=1483 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.723000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:43:20.727000 audit[1486]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.727000 audit[1486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffc5199b30 a2=0 a3=1 items=0 ppid=1444 pid=1486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.727000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:43:20.728000 audit[1487]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1487 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.728000 audit[1487]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe9d95910 a2=0 a3=1 items=0 ppid=1444 pid=1487 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.728000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:43:20.729000 audit[1488]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.729000 audit[1488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9e4ee30 a2=0 a3=1 items=0 ppid=1444 pid=1488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.729000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:43:20.731000 audit[1490]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.731000 audit[1490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffc63c63d0 a2=0 a3=1 items=0 ppid=1444 pid=1490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.731000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:43:20.739471 kubelet[1444]: E1002 19:43:20.739440 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:20.740092 kubelet[1444]: I1002 19:43:20.740065 1444 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.11" Oct 2 19:43:20.741573 kubelet[1444]: E1002 19:43:20.741546 1444 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.11" Oct 2 19:43:20.741610 kubelet[1444]: E1002 19:43:20.741540 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54dd18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", 
APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658337048, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 740024968, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54dd18" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:20.742468 kubelet[1444]: E1002 19:43:20.742403 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54f280", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658342528, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 740034768, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54f280" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:20.743332 kubelet[1444]: E1002 19:43:20.743260 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54fbb8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658344888, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 740038048, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54fbb8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:20.733000 audit[1492]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.733000 audit[1492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffc4b2d610 a2=0 a3=1 items=0 ppid=1444 pid=1492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:43:20.757000 audit[1495]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.757000 audit[1495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=fffffd10fe90 a2=0 a3=1 items=0 ppid=1444 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:43:20.759000 audit[1497]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.759000 audit[1497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=fffff0e543d0 a2=0 a3=1 items=0 ppid=1444 pid=1497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.759000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:43:20.798000 audit[1500]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1500 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.798000 audit[1500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffc5e44780 a2=0 a3=1 items=0 ppid=1444 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.798000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:43:20.800511 kubelet[1444]: I1002 19:43:20.800482 1444 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 19:43:20.799000 audit[1501]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.799000 audit[1501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc0415360 a2=0 a3=1 items=0 ppid=1444 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:43:20.800000 audit[1502]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1502 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.800000 audit[1502]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe5b2ec10 a2=0 a3=1 items=0 ppid=1444 pid=1502 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.800000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:43:20.801000 audit[1503]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.801000 audit[1503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffee07f4d0 a2=0 a3=1 items=0 ppid=1444 pid=1503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.801000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:43:20.801000 audit[1504]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1504 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.801000 audit[1504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff08b6540 a2=0 a3=1 items=0 ppid=1444 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.801000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:43:20.802000 audit[1506]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1506 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:20.802000 audit[1506]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff8902b50 a2=0 a3=1 items=0 ppid=1444 pid=1506 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.802000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:43:20.803000 audit[1507]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.803000 audit[1507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffdec65cf0 a2=0 a3=1 items=0 ppid=1444 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.803000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:43:20.805000 audit[1508]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1508 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.805000 audit[1508]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffdb3aa7a0 a2=0 a3=1 items=0 ppid=1444 pid=1508 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.805000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:43:20.807000 audit[1510]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1510 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.807000 audit[1510]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffcfefab30 a2=0 a3=1 items=0 ppid=1444 pid=1510 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.807000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:43:20.808000 audit[1511]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.808000 audit[1511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd534c060 a2=0 a3=1 items=0 ppid=1444 pid=1511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.808000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:43:20.809000 audit[1512]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1512 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.809000 audit[1512]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6df9180 a2=0 a3=1 items=0 ppid=1444 pid=1512 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:43:20.811000 audit[1514]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1514 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.811000 audit[1514]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffdc98cfc0 a2=0 a3=1 items=0 ppid=1444 pid=1514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.811000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:43:20.814000 audit[1516]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.814000 audit[1516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffff01d84c0 a2=0 a3=1 items=0 ppid=1444 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.814000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:43:20.816000 audit[1518]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.816000 audit[1518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffd6c91010 a2=0 a3=1 items=0 ppid=1444 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.816000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:43:20.818000 audit[1520]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1520 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.818000 audit[1520]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffe1954510 a2=0 a3=1 items=0 ppid=1444 pid=1520 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.818000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:43:20.821000 audit[1522]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1522 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.821000 audit[1522]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffcb3aee90 a2=0 a3=1 items=0 ppid=1444 pid=1522 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.821000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:43:20.823704 kubelet[1444]: I1002 19:43:20.823675 1444 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:43:20.823789 kubelet[1444]: I1002 19:43:20.823775 1444 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:43:20.823816 kubelet[1444]: I1002 19:43:20.823797 1444 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:43:20.823857 kubelet[1444]: E1002 19:43:20.823847 1444 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:43:20.823000 audit[1523]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1523 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.823000 audit[1523]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcadc9140 a2=0 a3=1 items=0 ppid=1444 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.823000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:43:20.825887 kubelet[1444]: W1002 19:43:20.825858 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:43:20.825946 kubelet[1444]: E1002 19:43:20.825897 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:43:20.824000 audit[1524]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1524 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.824000 audit[1524]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffedf6e9d0 a2=0 a3=1 items=0 ppid=1444 pid=1524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 
19:43:20.824000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:43:20.825000 audit[1525]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1525 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:20.825000 audit[1525]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc3f7def0 a2=0 a3=1 items=0 ppid=1444 pid=1525 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:20.825000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:43:20.839590 kubelet[1444]: E1002 19:43:20.839564 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:20.842922 kubelet[1444]: E1002 19:43:20.842898 1444 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.11" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:43:20.940345 kubelet[1444]: E1002 19:43:20.940233 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:20.943171 kubelet[1444]: I1002 19:43:20.943143 1444 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.11" Oct 2 19:43:20.944362 kubelet[1444]: E1002 19:43:20.944281 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54dd18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658337048, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 943112128, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54dd18" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:20.944721 kubelet[1444]: E1002 19:43:20.944519 1444 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.11" Oct 2 19:43:20.945569 kubelet[1444]: E1002 19:43:20.945426 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54f280", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658342528, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 943116688, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54f280" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:21.038296 kubelet[1444]: E1002 19:43:21.038198 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54fbb8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658344888, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 943119568, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54fbb8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:21.040341 kubelet[1444]: E1002 19:43:21.040317 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:21.140870 kubelet[1444]: E1002 19:43:21.140835 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:21.241358 kubelet[1444]: E1002 19:43:21.241258 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:21.244417 kubelet[1444]: E1002 19:43:21.244393 1444 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.11" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:43:21.341750 kubelet[1444]: E1002 19:43:21.341713 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:21.345374 kubelet[1444]: I1002 19:43:21.345352 1444 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.11" Oct 2 19:43:21.346720 kubelet[1444]: E1002 19:43:21.346682 1444 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.11" Oct 2 19:43:21.346987 kubelet[1444]: E1002 19:43:21.346912 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54dd18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658337048, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 21, 345287688, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54dd18" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:21.438834 kubelet[1444]: E1002 19:43:21.438732 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54f280", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658342528, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 21, 345299448, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54f280" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:21.441900 kubelet[1444]: E1002 19:43:21.441877 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:21.542318 kubelet[1444]: E1002 19:43:21.542290 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:21.634680 kubelet[1444]: E1002 19:43:21.634649 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:21.638097 kubelet[1444]: E1002 19:43:21.638007 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54fbb8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658344888, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 21, 345304248, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54fbb8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:21.643110 kubelet[1444]: E1002 19:43:21.643090 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:21.743791 kubelet[1444]: E1002 19:43:21.743758 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:21.759260 kubelet[1444]: W1002 19:43:21.759227 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:43:21.759326 kubelet[1444]: E1002 19:43:21.759271 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:43:21.772516 kubelet[1444]: W1002 19:43:21.772473 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:43:21.772627 kubelet[1444]: E1002 19:43:21.772615 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:43:21.844833 kubelet[1444]: E1002 19:43:21.844699 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:21.881046 kubelet[1444]: W1002 19:43:21.881018 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:43:21.881175 kubelet[1444]: E1002 19:43:21.881165 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:43:21.902275 kubelet[1444]: W1002 19:43:21.902240 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:43:21.902275 kubelet[1444]: E1002 19:43:21.902269 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:43:21.945577 kubelet[1444]: E1002 19:43:21.945542 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:22.045933 kubelet[1444]: E1002 19:43:22.045899 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:22.046151 kubelet[1444]: E1002 19:43:22.046128 1444 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.11" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:43:22.146539 kubelet[1444]: E1002 19:43:22.146436 1444 
kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:22.148215 kubelet[1444]: I1002 19:43:22.148196 1444 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.11" Oct 2 19:43:22.149464 kubelet[1444]: E1002 19:43:22.149441 1444 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.11" Oct 2 19:43:22.149559 kubelet[1444]: E1002 19:43:22.149458 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54dd18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658337048, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 22, 148150568, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54dd18" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:22.150425 kubelet[1444]: E1002 19:43:22.150375 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54f280", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658342528, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 22, 148162008, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54f280" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:22.238855 kubelet[1444]: E1002 19:43:22.238768 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54fbb8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658344888, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 22, 148165048, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54fbb8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:22.246933 kubelet[1444]: E1002 19:43:22.246900 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:22.347328 kubelet[1444]: E1002 19:43:22.347301 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:22.447759 kubelet[1444]: E1002 19:43:22.447654 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:22.547997 kubelet[1444]: E1002 19:43:22.547960 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:22.635417 kubelet[1444]: E1002 19:43:22.635380 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:22.648958 kubelet[1444]: E1002 19:43:22.648923 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:22.749551 kubelet[1444]: E1002 19:43:22.749490 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:22.849970 kubelet[1444]: E1002 19:43:22.849920 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:22.950376 kubelet[1444]: E1002 19:43:22.950324 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:23.050871 kubelet[1444]: E1002 19:43:23.050757 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:23.151377 kubelet[1444]: E1002 19:43:23.151334 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:23.251851 kubelet[1444]: E1002 19:43:23.251796 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:23.352328 kubelet[1444]: E1002 19:43:23.352210 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:23.424681 kubelet[1444]: W1002 19:43:23.424638 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" 
at the cluster scope Oct 2 19:43:23.424681 kubelet[1444]: E1002 19:43:23.424670 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:43:23.452938 kubelet[1444]: E1002 19:43:23.452915 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:23.490927 kubelet[1444]: W1002 19:43:23.490901 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:43:23.491070 kubelet[1444]: E1002 19:43:23.491058 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:43:23.553624 kubelet[1444]: E1002 19:43:23.553587 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:23.636122 kubelet[1444]: E1002 19:43:23.636020 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:23.647715 kubelet[1444]: E1002 19:43:23.647675 1444 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.11" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:43:23.653803 kubelet[1444]: E1002 19:43:23.653777 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:23.750916 kubelet[1444]: I1002 19:43:23.750887 1444 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.11" Oct 2 19:43:23.752084 kubelet[1444]: E1002 19:43:23.752064 1444 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.11" Oct 2 19:43:23.752174 kubelet[1444]: E1002 19:43:23.752058 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54dd18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658337048, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 23, 750803488, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54dd18" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:23.753145 kubelet[1444]: E1002 19:43:23.753095 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54f280", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658342528, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 23, 750807968, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54f280" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:23.754071 kubelet[1444]: E1002 19:43:23.754055 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:23.754148 kubelet[1444]: E1002 19:43:23.754032 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54fbb8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658344888, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 23, 750814048, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54fbb8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:43:23.854947 kubelet[1444]: E1002 19:43:23.854870 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:23.955390 kubelet[1444]: E1002 19:43:23.955287 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:24.055781 kubelet[1444]: E1002 19:43:24.055739 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:24.154660 kubelet[1444]: W1002 19:43:24.154607 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:43:24.154660 kubelet[1444]: E1002 19:43:24.154637 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:43:24.156752 kubelet[1444]: E1002 19:43:24.156727 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:24.257151 kubelet[1444]: E1002 19:43:24.257116 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:24.320547 kubelet[1444]: W1002 19:43:24.320493 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:43:24.320547 kubelet[1444]: E1002 19:43:24.320538 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:43:24.357834 kubelet[1444]: E1002 19:43:24.357795 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:24.458214 kubelet[1444]: E1002 19:43:24.458184 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:24.558688 kubelet[1444]: E1002 19:43:24.558589 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:24.637073 kubelet[1444]: E1002 19:43:24.637021 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:24.659640 kubelet[1444]: E1002 19:43:24.659604 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:24.760173 kubelet[1444]: E1002 19:43:24.760126 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:24.860623 kubelet[1444]: E1002 19:43:24.860530 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:24.961128 kubelet[1444]: E1002 19:43:24.961080 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:25.061461 kubelet[1444]: E1002 19:43:25.061429 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:25.162023 kubelet[1444]: E1002 19:43:25.161913 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:25.262348 kubelet[1444]: E1002 19:43:25.262309 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:25.365220 kubelet[1444]: E1002 19:43:25.362621 1444 
kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:25.463065 kubelet[1444]: E1002 19:43:25.462958 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:25.563429 kubelet[1444]: E1002 19:43:25.563390 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:25.637859 kubelet[1444]: E1002 19:43:25.637828 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:25.664449 kubelet[1444]: E1002 19:43:25.664418 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:25.686008 kubelet[1444]: E1002 19:43:25.685956 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:25.765559 kubelet[1444]: E1002 19:43:25.765522 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:25.866090 kubelet[1444]: E1002 19:43:25.866050 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:25.966539 kubelet[1444]: E1002 19:43:25.966490 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:26.066981 kubelet[1444]: E1002 19:43:26.066873 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:26.167494 kubelet[1444]: E1002 19:43:26.167445 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:26.267942 kubelet[1444]: E1002 19:43:26.267905 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:26.368511 kubelet[1444]: E1002 19:43:26.368389 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:26.469095 kubelet[1444]: E1002 19:43:26.469047 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:26.569691 kubelet[1444]: E1002 19:43:26.569582 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:26.638173 kubelet[1444]: E1002 19:43:26.638068 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:26.670515 kubelet[1444]: E1002 19:43:26.670449 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:26.770778 kubelet[1444]: E1002 19:43:26.770726 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:26.849902 kubelet[1444]: E1002 19:43:26.849859 1444 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.11" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:43:26.871007 kubelet[1444]: E1002 19:43:26.870967 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:26.922592 kubelet[1444]: W1002 19:43:26.922478 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:43:26.922745 kubelet[1444]: E1002 19:43:26.922734 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:43:26.954229 kubelet[1444]: I1002 19:43:26.954203 1444 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.11" Oct 2 19:43:26.955429 kubelet[1444]: E1002 19:43:26.955410 1444 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.11" Oct 2 19:43:26.955821 kubelet[1444]: E1002 19:43:26.955745 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54dd18", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.11 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658337048, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 26, 954168048, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54dd18" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:26.956758 kubelet[1444]: E1002 19:43:26.956618 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54f280", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.11 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658342528, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 26, 954172888, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54f280" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
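The lease-controller retries interleaved with these errors back off by doubling: 1.6s, then 3.2s, then 6.4s. The plain-Go sketch below only illustrates that capped-doubling pattern; the retried operation, the cap value, and the attempt limit are assumptions for illustration, not the kubelet's actual controller.

// Minimal sketch of a capped doubling backoff, matching the 1.6s -> 3.2s -> 6.4s
// retry intervals logged above. The cap and the stand-in operation are assumed.
package main

import (
	"errors"
	"fmt"
	"time"
)

func ensureLease() error {
	// Stand-in for the real call; it fails here, like the forbidden lease
	// requests in the log.
	return errors.New(`leases.coordination.k8s.io "10.0.0.11" is forbidden`)
}

func main() {
	delay := 1600 * time.Millisecond  // first retry interval seen in the log
	const maxDelay = 10 * time.Second // assumed cap, for illustration only
	for attempt := 1; attempt <= 5; attempt++ {
		if err := ensureLease(); err == nil {
			fmt.Println("lease ensured")
			return
		} else {
			fmt.Printf("attempt %d failed: %v; retrying in %v\n", attempt, err, delay)
		}
		time.Sleep(delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}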
Oct 2 19:43:26.957469 kubelet[1444]: E1002 19:43:26.957414 1444 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.11.178a61da7b54fbb8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.11", UID:"10.0.0.11", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.11 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.11"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 43, 20, 658344888, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 43, 26, 954175728, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.11.178a61da7b54fbb8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:43:26.974147 kubelet[1444]: E1002 19:43:26.974112 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:27.074409 kubelet[1444]: E1002 19:43:27.074366 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:27.175490 kubelet[1444]: E1002 19:43:27.175379 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:27.276073 kubelet[1444]: E1002 19:43:27.276027 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:27.376780 kubelet[1444]: E1002 19:43:27.376730 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:27.477259 kubelet[1444]: E1002 19:43:27.477141 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:27.579013 kubelet[1444]: E1002 19:43:27.577779 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:27.640030 kubelet[1444]: E1002 19:43:27.639195 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:27.678417 kubelet[1444]: E1002 19:43:27.678383 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:27.779027 kubelet[1444]: E1002 19:43:27.778995 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:27.809757 kubelet[1444]: W1002 19:43:27.809727 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:43:27.809757 kubelet[1444]: E1002 19:43:27.809760 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:43:27.879333 kubelet[1444]: E1002 19:43:27.879287 1444 kubelet.go:2448] "Error 
getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:27.979923 kubelet[1444]: E1002 19:43:27.979875 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:28.080638 kubelet[1444]: E1002 19:43:28.080532 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:28.142374 kubelet[1444]: W1002 19:43:28.142335 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.0.0.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:43:28.142374 kubelet[1444]: E1002 19:43:28.142369 1444 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.11" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:43:28.181185 kubelet[1444]: E1002 19:43:28.181133 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:28.281776 kubelet[1444]: E1002 19:43:28.281724 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:28.383217 kubelet[1444]: E1002 19:43:28.383096 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:28.485056 kubelet[1444]: E1002 19:43:28.485001 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:28.585674 kubelet[1444]: E1002 19:43:28.585620 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:28.640549 kubelet[1444]: E1002 19:43:28.640287 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:28.686789 kubelet[1444]: E1002 19:43:28.686723 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:28.787876 kubelet[1444]: E1002 19:43:28.787820 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:28.888608 kubelet[1444]: E1002 19:43:28.888559 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:28.989380 kubelet[1444]: E1002 19:43:28.989097 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:29.089932 kubelet[1444]: E1002 19:43:29.089885 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:29.190589 kubelet[1444]: E1002 19:43:29.190532 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:29.291233 kubelet[1444]: E1002 19:43:29.291170 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:29.391750 kubelet[1444]: E1002 19:43:29.391700 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:29.492354 kubelet[1444]: E1002 19:43:29.492307 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:29.597949 kubelet[1444]: E1002 19:43:29.597678 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:29.613946 kubelet[1444]: W1002 19:43:29.613910 1444 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:43:29.613946 kubelet[1444]: E1002 19:43:29.613948 1444 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:43:29.640534 kubelet[1444]: E1002 19:43:29.640487 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:29.698276 kubelet[1444]: E1002 19:43:29.698226 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:29.799334 kubelet[1444]: E1002 19:43:29.799271 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:29.900417 kubelet[1444]: E1002 19:43:29.900076 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:30.001159 kubelet[1444]: E1002 19:43:30.001090 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:30.101776 kubelet[1444]: E1002 19:43:30.101718 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:30.202867 kubelet[1444]: E1002 19:43:30.202584 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:30.303388 kubelet[1444]: E1002 19:43:30.303325 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:30.403928 kubelet[1444]: E1002 19:43:30.403868 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:30.505890 kubelet[1444]: E1002 19:43:30.504489 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:30.604763 kubelet[1444]: E1002 19:43:30.604723 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:30.620910 kubelet[1444]: I1002 19:43:30.620874 1444 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:43:30.640682 kubelet[1444]: E1002 19:43:30.640647 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:30.686875 kubelet[1444]: E1002 19:43:30.686834 1444 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.11\" not found" Oct 2 19:43:30.688107 kubelet[1444]: E1002 19:43:30.688073 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:30.705944 kubelet[1444]: E1002 19:43:30.705904 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:30.806551 kubelet[1444]: E1002 19:43:30.806197 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:30.907169 kubelet[1444]: E1002 19:43:30.907123 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:31.012071 kubelet[1444]: E1002 19:43:31.012015 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:31.020103 kubelet[1444]: E1002 19:43:31.020064 1444 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.11" not found Oct 2 19:43:31.113313 kubelet[1444]: E1002 19:43:31.113048 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:31.213749 
kubelet[1444]: E1002 19:43:31.213711 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:31.314486 kubelet[1444]: E1002 19:43:31.314446 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:31.415365 kubelet[1444]: E1002 19:43:31.415114 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:31.516103 kubelet[1444]: E1002 19:43:31.516062 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:31.618920 kubelet[1444]: E1002 19:43:31.617471 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:31.641162 kubelet[1444]: E1002 19:43:31.641117 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:31.718611 kubelet[1444]: E1002 19:43:31.718310 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:31.818994 kubelet[1444]: E1002 19:43:31.818950 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:31.919589 kubelet[1444]: E1002 19:43:31.919545 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:32.020190 kubelet[1444]: E1002 19:43:32.020147 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:32.042947 kubelet[1444]: E1002 19:43:32.042907 1444 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.11" not found Oct 2 19:43:32.121203 kubelet[1444]: E1002 19:43:32.121161 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:32.222044 kubelet[1444]: E1002 19:43:32.221991 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:32.323352 kubelet[1444]: E1002 19:43:32.323055 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:32.423775 kubelet[1444]: E1002 19:43:32.423735 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:32.524116 kubelet[1444]: E1002 19:43:32.524073 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:32.625248 kubelet[1444]: E1002 19:43:32.624983 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:32.641913 kubelet[1444]: E1002 19:43:32.641870 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:32.725613 kubelet[1444]: E1002 19:43:32.725571 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:32.826415 kubelet[1444]: E1002 19:43:32.826373 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:32.926959 kubelet[1444]: E1002 19:43:32.926689 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:33.027671 kubelet[1444]: E1002 19:43:33.027618 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:33.128218 kubelet[1444]: E1002 19:43:33.128172 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:33.228957 kubelet[1444]: E1002 19:43:33.228704 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:33.255198 kubelet[1444]: E1002 19:43:33.255157 1444 nodelease.go:49] "Failed to get node when trying to 
set owner ref to the node lease" err="nodes \"10.0.0.11\" not found" node="10.0.0.11" Oct 2 19:43:33.329597 kubelet[1444]: E1002 19:43:33.329547 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:33.356436 kubelet[1444]: I1002 19:43:33.356400 1444 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.11" Oct 2 19:43:33.430079 kubelet[1444]: E1002 19:43:33.430033 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:33.445393 kubelet[1444]: I1002 19:43:33.445356 1444 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.11" Oct 2 19:43:33.530185 kubelet[1444]: E1002 19:43:33.530154 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:33.630836 kubelet[1444]: E1002 19:43:33.630803 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:33.642297 kubelet[1444]: E1002 19:43:33.642273 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:33.732128 kubelet[1444]: E1002 19:43:33.732092 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:33.833001 kubelet[1444]: E1002 19:43:33.832731 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:33.882827 sudo[1270]: pam_unix(sudo:session): session closed for user root Oct 2 19:43:33.881000 audit[1270]: USER_END pid=1270 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:33.883587 kernel: kauditd_printk_skb: 474 callbacks suppressed Oct 2 19:43:33.883642 kernel: audit: type=1106 audit(1696275813.881:574): pid=1270 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:33.881000 audit[1270]: CRED_DISP pid=1270 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:33.888031 sshd[1266]: pam_unix(sshd:session): session closed for user core Oct 2 19:43:33.889167 kernel: audit: type=1104 audit(1696275813.881:575): pid=1270 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:43:33.888000 audit[1266]: USER_END pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:33.893750 systemd[1]: sshd@6-10.0.0.11:22-10.0.0.1:47020.service: Deactivated successfully. Oct 2 19:43:33.888000 audit[1266]: CRED_DISP pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:33.894552 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:43:33.895308 systemd-logind[1133]: Session 7 logged out. 
Waiting for processes to exit. Oct 2 19:43:33.896034 systemd-logind[1133]: Removed session 7. Oct 2 19:43:33.896759 kernel: audit: type=1106 audit(1696275813.888:576): pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:33.896825 kernel: audit: type=1104 audit(1696275813.888:577): pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:43:33.896849 kernel: audit: type=1131 audit(1696275813.892:578): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.11:22-10.0.0.1:47020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:33.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.11:22-10.0.0.1:47020 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:43:33.933856 kubelet[1444]: E1002 19:43:33.933792 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:34.034892 kubelet[1444]: E1002 19:43:34.034797 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:34.135831 kubelet[1444]: E1002 19:43:34.135292 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:34.236299 kubelet[1444]: E1002 19:43:34.236229 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:34.336830 kubelet[1444]: E1002 19:43:34.336771 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:34.437633 kubelet[1444]: E1002 19:43:34.437340 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:34.537568 kubelet[1444]: E1002 19:43:34.537499 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:34.638251 kubelet[1444]: E1002 19:43:34.638184 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:34.642610 kubelet[1444]: E1002 19:43:34.642578 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:34.739374 kubelet[1444]: E1002 19:43:34.739085 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:34.839700 kubelet[1444]: E1002 19:43:34.839630 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:34.942989 kubelet[1444]: E1002 19:43:34.942917 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:35.043565 kubelet[1444]: E1002 19:43:35.043488 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:35.144361 kubelet[1444]: E1002 19:43:35.144303 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:35.244956 kubelet[1444]: E1002 19:43:35.244900 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:35.345782 kubelet[1444]: E1002 19:43:35.345497 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:35.446161 kubelet[1444]: 
E1002 19:43:35.446105 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:35.547056 kubelet[1444]: E1002 19:43:35.546990 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:35.642973 kubelet[1444]: E1002 19:43:35.642659 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:35.647842 kubelet[1444]: E1002 19:43:35.647813 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:35.689620 kubelet[1444]: E1002 19:43:35.689567 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:35.748422 kubelet[1444]: E1002 19:43:35.748352 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:35.848912 kubelet[1444]: E1002 19:43:35.848870 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:35.949856 kubelet[1444]: E1002 19:43:35.949528 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:36.050395 kubelet[1444]: E1002 19:43:36.050329 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:36.151216 kubelet[1444]: E1002 19:43:36.151147 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:36.251814 kubelet[1444]: E1002 19:43:36.251779 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:36.352450 kubelet[1444]: E1002 19:43:36.352389 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:36.453082 kubelet[1444]: E1002 19:43:36.453025 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:36.553915 kubelet[1444]: E1002 19:43:36.553624 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:36.643523 kubelet[1444]: E1002 19:43:36.643454 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:36.654797 kubelet[1444]: E1002 19:43:36.654607 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:36.755511 kubelet[1444]: E1002 19:43:36.755439 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:36.856487 kubelet[1444]: E1002 19:43:36.856177 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:36.956826 kubelet[1444]: E1002 19:43:36.956763 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:37.057421 kubelet[1444]: E1002 19:43:37.057354 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:37.158284 kubelet[1444]: E1002 19:43:37.158066 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:37.258713 kubelet[1444]: E1002 19:43:37.258665 1444 kubelet.go:2448] "Error getting node" err="node \"10.0.0.11\" not found" Oct 2 19:43:37.359389 kubelet[1444]: I1002 19:43:37.359346 1444 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:43:37.359821 env[1145]: time="2023-10-02T19:43:37.359772128Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
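Even after registration, the node stays NotReady while the runtime network is down: the "cni plugin not initialized" lines above and the pod-CIDR update waiting for a CNI config are both symptoms of that. The client-go sketch below reads the node's Ready condition to surface the same reason programmatically; the node name comes from the log, while the in-cluster config is an assumption and the snippet is a diagnostic illustration rather than part of this boot.

// Hedged diagnostic sketch (assumed, not from this log): read node 10.0.0.11's
// Ready condition, whose reason remains NetworkPluginNotReady until a CNI
// config is dropped in place as the message above is waiting for.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	cfg, err := rest.InClusterConfig() // assumption: running in-cluster
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "10.0.0.11", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			fmt.Printf("Ready=%s reason=%s message=%q\n", cond.Status, cond.Reason, cond.Message)
		}
	}
}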
Oct 2 19:43:37.360584 kubelet[1444]: I1002 19:43:37.359980 1444 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:43:37.360584 kubelet[1444]: E1002 19:43:37.360266 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:37.642975 kubelet[1444]: I1002 19:43:37.642914 1444 apiserver.go:52] "Watching apiserver" Oct 2 19:43:37.644024 kubelet[1444]: E1002 19:43:37.643992 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:37.646029 kubelet[1444]: I1002 19:43:37.646001 1444 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:43:37.646127 kubelet[1444]: I1002 19:43:37.646111 1444 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:43:37.651335 systemd[1]: Created slice kubepods-besteffort-podef916c82_f364_4f63_acbb_584fcd1ddb39.slice. Oct 2 19:43:37.663159 systemd[1]: Created slice kubepods-burstable-pod50f8cf99_6fc8_4911_8d52_f292c9c5ec4c.slice. Oct 2 19:43:37.842255 kubelet[1444]: I1002 19:43:37.842205 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-hubble-tls\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842255 kubelet[1444]: I1002 19:43:37.842250 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cilium-run\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842405 kubelet[1444]: I1002 19:43:37.842271 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-etc-cni-netd\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842405 kubelet[1444]: I1002 19:43:37.842294 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cilium-config-path\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842405 kubelet[1444]: I1002 19:43:37.842331 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-host-proc-sys-kernel\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842405 kubelet[1444]: I1002 19:43:37.842351 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cilium-cgroup\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842405 kubelet[1444]: I1002 19:43:37.842370 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cni-path\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842405 kubelet[1444]: I1002 19:43:37.842391 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrv5n\" (UniqueName: \"kubernetes.io/projected/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-kube-api-access-mrv5n\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842585 kubelet[1444]: I1002 19:43:37.842413 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ef916c82-f364-4f63-acbb-584fcd1ddb39-xtables-lock\") pod \"kube-proxy-8szh8\" (UID: \"ef916c82-f364-4f63-acbb-584fcd1ddb39\") " pod="kube-system/kube-proxy-8szh8" Oct 2 19:43:37.842585 kubelet[1444]: I1002 19:43:37.842433 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8b4tf\" (UniqueName: \"kubernetes.io/projected/ef916c82-f364-4f63-acbb-584fcd1ddb39-kube-api-access-8b4tf\") pod \"kube-proxy-8szh8\" (UID: \"ef916c82-f364-4f63-acbb-584fcd1ddb39\") " pod="kube-system/kube-proxy-8szh8" Oct 2 19:43:37.842585 kubelet[1444]: I1002 19:43:37.842454 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-bpf-maps\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842585 kubelet[1444]: I1002 19:43:37.842471 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-hostproc\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842585 kubelet[1444]: I1002 19:43:37.842496 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-clustermesh-secrets\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842585 kubelet[1444]: I1002 19:43:37.842530 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ef916c82-f364-4f63-acbb-584fcd1ddb39-kube-proxy\") pod \"kube-proxy-8szh8\" (UID: \"ef916c82-f364-4f63-acbb-584fcd1ddb39\") " pod="kube-system/kube-proxy-8szh8" Oct 2 19:43:37.842709 kubelet[1444]: I1002 19:43:37.842548 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-lib-modules\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842709 kubelet[1444]: I1002 19:43:37.842567 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-xtables-lock\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842709 kubelet[1444]: I1002 19:43:37.842585 
1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-host-proc-sys-net\") pod \"cilium-ndmgh\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " pod="kube-system/cilium-ndmgh" Oct 2 19:43:37.842709 kubelet[1444]: I1002 19:43:37.842610 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ef916c82-f364-4f63-acbb-584fcd1ddb39-lib-modules\") pod \"kube-proxy-8szh8\" (UID: \"ef916c82-f364-4f63-acbb-584fcd1ddb39\") " pod="kube-system/kube-proxy-8szh8" Oct 2 19:43:37.842709 kubelet[1444]: I1002 19:43:37.842618 1444 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:43:37.962613 kubelet[1444]: E1002 19:43:37.962001 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:37.963336 env[1145]: time="2023-10-02T19:43:37.963288048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8szh8,Uid:ef916c82-f364-4f63-acbb-584fcd1ddb39,Namespace:kube-system,Attempt:0,}" Oct 2 19:43:38.274161 kubelet[1444]: E1002 19:43:38.274131 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:38.275060 env[1145]: time="2023-10-02T19:43:38.274979048Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ndmgh,Uid:50f8cf99-6fc8-4911-8d52-f292c9c5ec4c,Namespace:kube-system,Attempt:0,}" Oct 2 19:43:38.644438 kubelet[1444]: E1002 19:43:38.644120 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:38.656362 env[1145]: time="2023-10-02T19:43:38.656319408Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:38.679830 env[1145]: time="2023-10-02T19:43:38.679747128Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:38.681879 env[1145]: time="2023-10-02T19:43:38.681761848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:38.684962 env[1145]: time="2023-10-02T19:43:38.684921688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:38.687197 env[1145]: time="2023-10-02T19:43:38.686260488Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:38.687970 env[1145]: time="2023-10-02T19:43:38.687819848Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:38.715834 env[1145]: time="2023-10-02T19:43:38.715775608Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:38.717897 env[1145]: time="2023-10-02T19:43:38.717733688Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:38.755214 env[1145]: time="2023-10-02T19:43:38.754886688Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:43:38.755214 env[1145]: time="2023-10-02T19:43:38.754932008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:43:38.755214 env[1145]: time="2023-10-02T19:43:38.754941888Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:43:38.755401 env[1145]: time="2023-10-02T19:43:38.755226768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f pid=1551 runtime=io.containerd.runc.v2 Oct 2 19:43:38.758415 env[1145]: time="2023-10-02T19:43:38.758269808Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:43:38.758415 env[1145]: time="2023-10-02T19:43:38.758336128Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:43:38.758544 env[1145]: time="2023-10-02T19:43:38.758353328Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:43:38.759058 env[1145]: time="2023-10-02T19:43:38.758990408Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0100a2e264d4878120ffcb8fa965531cf21ab6cd8140f34afaf55aa76cede5f1 pid=1545 runtime=io.containerd.runc.v2 Oct 2 19:43:38.780547 systemd[1]: Started cri-containerd-0100a2e264d4878120ffcb8fa965531cf21ab6cd8140f34afaf55aa76cede5f1.scope. Oct 2 19:43:38.787555 systemd[1]: Started cri-containerd-244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f.scope. 
Oct 2 19:43:38.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.811702 kernel: audit: type=1400 audit(1696275818.805:579): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.811789 kernel: audit: type=1400 audit(1696275818.805:580): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.811809 kernel: audit: type=1400 audit(1696275818.805:581): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.816367 kernel: audit: type=1400 audit(1696275818.805:582): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.816426 kernel: audit: type=1400 audit(1696275818.805:583): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.805000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.805000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit: BPF prog-id=67 op=LOAD Oct 2 19:43:38.808000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
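The AVC records in this stretch are SELinux denials of capability2 checks made by systemd (pid 1) and runc: capability 38 is CAP_PERFMON and capability 39 is CAP_BPF on kernels 5.8 and later (the denied permission names perfmon and bpf in the records themselves confirm the mapping), and permissive=0 means the denials are enforced. The adjacent "BPF prog-id=NN op=LOAD" events show the programs still load, the usual pattern when the kernel falls back to a CAP_SYS_ADMIN check after CAP_BPF or CAP_PERFMON is refused, so these are most likely benign noise around runc setting up per-container BPF programs rather than real failures. A minimal tallying sketch, assuming the journal has been exported to a plain-text file (the file name is hypothetical):

#!/usr/bin/env python3
"""Tally the SELinux capability2 denials seen in this journal excerpt.

Hypothetical usage:  python3 avc_summary.py journal-export.txt
(the file name is an assumption; any plain-text dump of these lines works).
"""
import re
import sys
from collections import Counter

# Capability numbers that appear in the records above (Linux >= 5.8).
CAP_NAMES = {38: "CAP_PERFMON", 39: "CAP_BPF"}

AVC_RE = re.compile(
    r'AVC avc:\s+denied\s+\{\s*(?P<perm>\w+)\s*\}.*?comm="(?P<comm>[^"]+)"'
    r'.*?capability=(?P<cap>\d+)'
)

def summarize(lines):
    counts = Counter()
    for line in lines:
        m = AVC_RE.search(line)
        if m:
            cap = int(m.group("cap"))
            counts[(m.group("comm"), CAP_NAMES.get(cap, f"capability{cap}"))] += 1
    return counts

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
        for (comm, cap), n in summarize(f).most_common():
            print(f"{n:6d}  {comm:12s} {cap}")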
Oct 2 19:43:38.808000 audit[1568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001bdb38 a2=10 a3=0 items=0 ppid=1545 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:38.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303061326532363464343837383132306666636238666139363535 Oct 2 19:43:38.808000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit[1568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001bd5a0 a2=3c a3=0 items=0 ppid=1545 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:38.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303061326532363464343837383132306666636238666139363535 Oct 2 19:43:38.808000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.808000 audit: BPF prog-id=68 op=LOAD Oct 2 19:43:38.808000 audit[1568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bd8e0 a2=78 a3=0 items=0 ppid=1545 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:38.808000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303061326532363464343837383132306666636238666139363535 Oct 2 19:43:38.810000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.810000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.810000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.810000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.810000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.810000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.810000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.810000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.810000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.810000 audit: BPF prog-id=69 op=LOAD Oct 2 19:43:38.810000 audit[1568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001bd670 a2=78 a3=0 items=0 ppid=1545 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:38.810000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303061326532363464343837383132306666636238666139363535 Oct 2 19:43:38.813000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:43:38.813000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:43:38.813000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.813000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.813000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.813000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.813000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.813000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.813000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.813000 audit[1568]: AVC avc: denied { perfmon } for pid=1568 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.813000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.813000 audit[1568]: AVC avc: denied { bpf } for pid=1568 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.813000 audit: BPF prog-id=70 op=LOAD Oct 2 19:43:38.813000 audit[1568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001bdb40 a2=78 a3=0 items=0 ppid=1545 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:38.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031303061326532363464343837383132306666636238666139363535 Oct 2 19:43:38.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:43:38.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.821000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.821000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.821000 audit: BPF prog-id=71 op=LOAD Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=1551 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:38.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234346637313837333035303431373736343737626532633133653865 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=1551 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:38.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234346637313837333035303431373736343737626532633133653865 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit: BPF prog-id=72 op=LOAD Oct 2 19:43:38.824000 audit[1564]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=1551 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:38.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234346637313837333035303431373736343737626532633133653865 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 
2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit: BPF prog-id=73 op=LOAD Oct 2 19:43:38.824000 audit[1564]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=1551 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:38.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234346637313837333035303431373736343737626532633133653865 Oct 2 19:43:38.824000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:43:38.824000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { perfmon } for pid=1564 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 audit[1564]: AVC avc: denied { bpf } for pid=1564 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:38.824000 
audit: BPF prog-id=74 op=LOAD Oct 2 19:43:38.824000 audit[1564]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=1551 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:38.824000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234346637313837333035303431373736343737626532633133653865 Oct 2 19:43:38.837023 env[1145]: time="2023-10-02T19:43:38.836970088Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-8szh8,Uid:ef916c82-f364-4f63-acbb-584fcd1ddb39,Namespace:kube-system,Attempt:0,} returns sandbox id \"0100a2e264d4878120ffcb8fa965531cf21ab6cd8140f34afaf55aa76cede5f1\"" Oct 2 19:43:38.837678 env[1145]: time="2023-10-02T19:43:38.837649568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-ndmgh,Uid:50f8cf99-6fc8-4911-8d52-f292c9c5ec4c,Namespace:kube-system,Attempt:0,} returns sandbox id \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\"" Oct 2 19:43:38.838260 kubelet[1444]: E1002 19:43:38.838217 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:38.839323 kubelet[1444]: E1002 19:43:38.839080 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:38.840162 env[1145]: time="2023-10-02T19:43:38.840136608Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:43:38.951727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2978189372.mount: Deactivated successfully. Oct 2 19:43:39.645149 kubelet[1444]: E1002 19:43:39.645104 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:40.030277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2661229787.mount: Deactivated successfully. 
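The PROCTITLE records above carry the audited process's command line, hex-encoded with NUL bytes separating the arguments and truncated by auditd after a fixed length, which is why the runc invocations stop mid-path. Decoded, they read roughly "runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<sandbox id, cut off>". A small decoder sketch; the command-line handling is an assumption, paste any proctitle= value from the records above as the single argument:

#!/usr/bin/env python3
"""Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.

Hypothetical usage:
    python3 decode_proctitle.py 72756E63002D2D726F6F74002F72756E...
"""
import sys

def decode_proctitle(hexstr: str) -> str:
    raw = bytes.fromhex(hexstr.strip())
    # Each argv element is terminated by a NUL byte in the raw proctitle buffer.
    return " ".join(p.decode("utf-8", "replace") for p in raw.split(b"\x00") if p)

if __name__ == "__main__":
    print(decode_proctitle(sys.argv[1]))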
Oct 2 19:43:40.370451 env[1145]: time="2023-10-02T19:43:40.370230568Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:40.371962 env[1145]: time="2023-10-02T19:43:40.371917608Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:40.373144 env[1145]: time="2023-10-02T19:43:40.373114008Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:40.375012 env[1145]: time="2023-10-02T19:43:40.374983368Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:40.375426 env[1145]: time="2023-10-02T19:43:40.375395168Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:36ad84e6a838b02d80a9db87b13c83185253f647e2af2f58f91ac1346103ff4e\"" Oct 2 19:43:40.376563 env[1145]: time="2023-10-02T19:43:40.376536928Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:43:40.377458 env[1145]: time="2023-10-02T19:43:40.377339328Z" level=info msg="CreateContainer within sandbox \"0100a2e264d4878120ffcb8fa965531cf21ab6cd8140f34afaf55aa76cede5f1\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:43:40.386833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3295784741.mount: Deactivated successfully. Oct 2 19:43:40.389912 env[1145]: time="2023-10-02T19:43:40.389872848Z" level=info msg="CreateContainer within sandbox \"0100a2e264d4878120ffcb8fa965531cf21ab6cd8140f34afaf55aa76cede5f1\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"5028baf335bcfad95f990d14d0fcf214ad6b9bb5cba931e868b0b20344f2351d\"" Oct 2 19:43:40.390552 env[1145]: time="2023-10-02T19:43:40.390473688Z" level=info msg="StartContainer for \"5028baf335bcfad95f990d14d0fcf214ad6b9bb5cba931e868b0b20344f2351d\"" Oct 2 19:43:40.408920 systemd[1]: Started cri-containerd-5028baf335bcfad95f990d14d0fcf214ad6b9bb5cba931e868b0b20344f2351d.scope. 
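The containerd messages above trace the CRI flow for the kube-proxy pod: RunPodSandbox returned sandbox id 0100a2e2..., PullImage resolved registry.k8s.io/kube-proxy:v1.25.14 to an image reference, CreateContainer within that sandbox returned container id 5028baf3..., and StartContainer handed the container to a new cri-containerd-<id>.scope unit. A sketch for pulling that pod-to-sandbox-to-container mapping out of the text, relying only on the message formats visible here (the input file name is an assumption):

#!/usr/bin/env python3
"""Correlate the containerd CRI events above into a pod -> sandbox -> container map.

Hypothetical usage:  python3 cri_map.py journal-export.txt
Only the message formats visible in this log are assumed.
"""
import re
import sys

SANDBOX_RE = re.compile(
    r'RunPodSandbox for &PodSandboxMetadata\{Name:([^,]+),'
    r'.*?returns sandbox id \\?"([0-9a-f]+)'
)
CONTAINER_RE = re.compile(
    r'CreateContainer within sandbox \\?"([0-9a-f]+)\\?"'
    r'.*?returns container id \\?"([0-9a-f]+)'
)

def correlate(text):
    # sandbox id -> pod name, taken from the "returns sandbox id" messages
    pods = {sid: name for name, sid in SANDBOX_RE.findall(text)}
    return [(pods.get(sid, "?"), sid, cid) for sid, cid in CONTAINER_RE.findall(text)]

if __name__ == "__main__":
    text = open(sys.argv[1], encoding="utf-8", errors="replace").read()
    for pod, sandbox, container in correlate(text):
        print(f"pod={pod} sandbox={sandbox[:12]}... container={container[:12]}...")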
Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.437903 kernel: kauditd_printk_skb: 109 callbacks suppressed Oct 2 19:43:40.437983 kernel: audit: type=1400 audit(1696275820.433:615): avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.438004 kernel: audit: type=1300 audit(1696275820.433:615): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=1545 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.433000 audit[1621]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=1545 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530323862616633333562636661643935663939306431346430666366 Oct 2 19:43:40.444728 kernel: audit: type=1327 audit(1696275820.433:615): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530323862616633333562636661643935663939306431346430666366 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.447521 kernel: audit: type=1400 audit(1696275820.433:616): avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.450911 kernel: audit: type=1400 audit(1696275820.433:616): avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.450982 kernel: audit: type=1400 audit(1696275820.433:616): avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.456092 kernel: audit: type=1400 audit(1696275820.433:616): avc: denied { 
perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.456168 kernel: audit: type=1400 audit(1696275820.433:616): avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.459558 kernel: audit: type=1400 audit(1696275820.433:616): avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.463638 kernel: audit: type=1400 audit(1696275820.433:616): avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit: BPF prog-id=75 op=LOAD Oct 2 19:43:40.433000 audit[1621]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=1545 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530323862616633333562636661643935663939306431346430666366 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { perfmon } 
for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.433000 audit: BPF prog-id=76 op=LOAD Oct 2 19:43:40.433000 audit[1621]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=1545 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.433000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530323862616633333562636661643935663939306431346430666366 Oct 2 19:43:40.436000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:43:40.436000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:43:40.436000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.436000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.436000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.436000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.436000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.436000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.436000 audit[1621]: AVC avc: denied { perfmon } for pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.436000 audit[1621]: AVC avc: denied { perfmon } for 
pid=1621 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.436000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.436000 audit[1621]: AVC avc: denied { bpf } for pid=1621 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:43:40.436000 audit: BPF prog-id=77 op=LOAD Oct 2 19:43:40.436000 audit[1621]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=1545 pid=1621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.436000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3530323862616633333562636661643935663939306431346430666366 Oct 2 19:43:40.465796 env[1145]: time="2023-10-02T19:43:40.465730008Z" level=info msg="StartContainer for \"5028baf335bcfad95f990d14d0fcf214ad6b9bb5cba931e868b0b20344f2351d\" returns successfully" Oct 2 19:43:40.533225 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:43:40.533356 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:43:40.533846 kernel: IPVS: ipvs loaded. Oct 2 19:43:40.542533 kernel: IPVS: [rr] scheduler registered. Oct 2 19:43:40.548534 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:43:40.553532 kernel: IPVS: [sh] scheduler registered. 
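The IPVS lines above are the ip_vs kernel modules loading as kube-proxy starts; it probes for IPVS support (registering the rr, wrr and sh schedulers) even though it goes on to program iptables here, and the NETFILTER_CFG audit records that follow are its initial chain setup. In those records family=2 is IPv4 and family=10 is IPv6, and the hex proctitle values decode, with the decoder sketched earlier, to commands such as iptables -w 5 -W 100000 -N KUBE-PROXY-CANARY -t mangle. A small summarizer sketch over the same hypothetical plain-text export assumed above:

#!/usr/bin/env python3
"""Summarize the NETFILTER_CFG records emitted while kube-proxy programs its chains.

Hypothetical usage:  python3 nft_summary.py journal-export.txt
It matches lines like:
  ... NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain ...
"""
import re
import sys
from collections import Counter

FAMILY = {2: "ipv4", 10: "ipv6"}

NFT_RE = re.compile(
    r'NETFILTER_CFG table=(?P<table>\w+):\d+ family=(?P<family>\d+) '
    r'entries=(?P<entries>\d+) op=(?P<op>\w+)'
)

def summarize(lines):
    counts = Counter()
    for line in lines:
        m = NFT_RE.search(line)
        if m:
            fam = FAMILY.get(int(m.group("family")), m.group("family"))
            counts[(fam, m.group("table"), m.group("op"))] += int(m.group("entries"))
    return counts

if __name__ == "__main__":
    with open(sys.argv[1], encoding="utf-8", errors="replace") as f:
        for (fam, table, op), entries in sorted(summarize(f).items()):
            print(f"{fam:4s} {table:7s} {op:22s} entries={entries}")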
Oct 2 19:43:40.610000 audit[1679]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1679 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.610000 audit[1679]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9178a40 a2=0 a3=ffffa24cd6c0 items=0 ppid=1630 pid=1679 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.610000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:43:40.611000 audit[1680]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1680 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.611000 audit[1680]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe0f2f00 a2=0 a3=ffffb24ae6c0 items=0 ppid=1630 pid=1680 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.611000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:43:40.612000 audit[1681]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1681 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.612000 audit[1681]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe5a436b0 a2=0 a3=ffffa73b46c0 items=0 ppid=1630 pid=1681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.612000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:43:40.613000 audit[1683]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_chain pid=1683 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.613000 audit[1683]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc1d5b940 a2=0 a3=ffffb068b6c0 items=0 ppid=1630 pid=1683 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.613000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:43:40.614000 audit[1682]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=1682 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.614000 audit[1682]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6a5f620 a2=0 a3=ffff884b16c0 items=0 ppid=1630 pid=1682 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.614000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:43:40.616000 audit[1684]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1684 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 
19:43:40.616000 audit[1684]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe8dbc460 a2=0 a3=ffffaeea46c0 items=0 ppid=1630 pid=1684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.616000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:43:40.630981 kubelet[1444]: E1002 19:43:40.630811 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:40.646104 kubelet[1444]: E1002 19:43:40.646058 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:40.690246 kubelet[1444]: E1002 19:43:40.690200 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:40.714000 audit[1685]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1685 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.714000 audit[1685]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffff734b390 a2=0 a3=ffffb3fa76c0 items=0 ppid=1630 pid=1685 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.714000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:43:40.717000 audit[1687]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1687 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.717000 audit[1687]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffea447da0 a2=0 a3=ffffb1f9f6c0 items=0 ppid=1630 pid=1687 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.717000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:43:40.720000 audit[1690]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1690 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.720000 audit[1690]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd6d6c810 a2=0 a3=ffffaed8f6c0 items=0 ppid=1630 pid=1690 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.720000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:43:40.722000 audit[1691]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain 
pid=1691 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.722000 audit[1691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc5037750 a2=0 a3=ffff97a786c0 items=0 ppid=1630 pid=1691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.722000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:43:40.725000 audit[1693]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1693 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.725000 audit[1693]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc7b439f0 a2=0 a3=ffffb534a6c0 items=0 ppid=1630 pid=1693 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.725000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:43:40.726000 audit[1694]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1694 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.726000 audit[1694]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe0189be0 a2=0 a3=ffffac77a6c0 items=0 ppid=1630 pid=1694 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.726000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:43:40.728000 audit[1696]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1696 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.728000 audit[1696]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffa143ee0 a2=0 a3=ffff9aadd6c0 items=0 ppid=1630 pid=1696 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.728000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:43:40.732000 audit[1699]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1699 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.732000 audit[1699]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffffb708df0 a2=0 a3=ffff902976c0 items=0 ppid=1630 pid=1699 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.732000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:43:40.733000 audit[1700]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1700 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.733000 audit[1700]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff017d100 a2=0 a3=ffff8c0fc6c0 items=0 ppid=1630 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.733000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:43:40.736000 audit[1702]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1702 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.736000 audit[1702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe1128750 a2=0 a3=ffffaa6306c0 items=0 ppid=1630 pid=1702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.736000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:43:40.737000 audit[1703]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1703 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.737000 audit[1703]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc275b930 a2=0 a3=ffff81dac6c0 items=0 ppid=1630 pid=1703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.737000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:43:40.739000 audit[1705]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1705 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.739000 audit[1705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd7a67980 a2=0 a3=ffffbec496c0 items=0 ppid=1630 pid=1705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.739000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:43:40.743000 audit[1708]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1708 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.743000 audit[1708]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffce019860 a2=0 a3=ffff93e256c0 items=0 ppid=1630 pid=1708 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.743000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:43:40.746000 audit[1711]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1711 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.746000 audit[1711]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe3272990 a2=0 a3=ffff9bcd56c0 items=0 ppid=1630 pid=1711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.746000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:43:40.747000 audit[1712]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1712 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.747000 audit[1712]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc18125b0 a2=0 a3=ffff8f77b6c0 items=0 ppid=1630 pid=1712 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.747000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:43:40.750000 audit[1714]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1714 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.750000 audit[1714]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffff7426750 a2=0 a3=ffff9d8026c0 items=0 ppid=1630 pid=1714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.750000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:43:40.753000 audit[1717]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1717 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:43:40.753000 audit[1717]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffe56662c0 a2=0 a3=ffffbceb66c0 items=0 ppid=1630 pid=1717 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.753000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 
19:43:40.767000 audit[1721]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:43:40.767000 audit[1721]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffc9b513e0 a2=0 a3=ffffa8e756c0 items=0 ppid=1630 pid=1721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.767000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:43:40.774000 audit[1721]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1721 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:43:40.774000 audit[1721]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffc9b513e0 a2=0 a3=ffffa8e756c0 items=0 ppid=1630 pid=1721 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.774000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:43:40.776000 audit[1725]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1725 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.776000 audit[1725]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc85b3500 a2=0 a3=ffff9f6b06c0 items=0 ppid=1630 pid=1725 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.776000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:43:40.779000 audit[1727]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1727 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.779000 audit[1727]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc354e0f0 a2=0 a3=ffffa74076c0 items=0 ppid=1630 pid=1727 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.779000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:43:40.782000 audit[1730]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1730 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.782000 audit[1730]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc80c9530 a2=0 a3=ffffb733e6c0 items=0 ppid=1630 pid=1730 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.782000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:43:40.783000 audit[1731]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1731 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.783000 audit[1731]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffda052250 a2=0 a3=ffff9f9986c0 items=0 ppid=1630 pid=1731 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.783000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:43:40.785000 audit[1733]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1733 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.785000 audit[1733]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff568adc0 a2=0 a3=ffffa0dd16c0 items=0 ppid=1630 pid=1733 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.785000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:43:40.786000 audit[1734]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1734 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.786000 audit[1734]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd660dac0 a2=0 a3=ffffac3a36c0 items=0 ppid=1630 pid=1734 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.786000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:43:40.789000 audit[1736]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1736 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.789000 audit[1736]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd1d9ef60 a2=0 a3=ffffaf51e6c0 items=0 ppid=1630 pid=1736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.789000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:43:40.793000 audit[1739]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1739 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.793000 audit[1739]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffffb49c9f0 a2=0 a3=ffffaa9bf6c0 
items=0 ppid=1630 pid=1739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.793000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:43:40.795000 audit[1740]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1740 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.795000 audit[1740]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd9309530 a2=0 a3=ffff9591e6c0 items=0 ppid=1630 pid=1740 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.795000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:43:40.798000 audit[1742]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1742 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.798000 audit[1742]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd6986470 a2=0 a3=ffffacbb36c0 items=0 ppid=1630 pid=1742 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.798000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:43:40.799000 audit[1743]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1743 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.799000 audit[1743]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd03edbd0 a2=0 a3=ffff923746c0 items=0 ppid=1630 pid=1743 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.799000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:43:40.802000 audit[1745]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1745 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.802000 audit[1745]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffec76d130 a2=0 a3=ffff9cea56c0 items=0 ppid=1630 pid=1745 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.802000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:43:40.805000 audit[1748]: NETFILTER_CFG 
table=filter:72 family=10 entries=1 op=nft_register_rule pid=1748 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.805000 audit[1748]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe7928fc0 a2=0 a3=ffff81df86c0 items=0 ppid=1630 pid=1748 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.805000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:43:40.809000 audit[1751]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1751 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.809000 audit[1751]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc2b83370 a2=0 a3=ffff808226c0 items=0 ppid=1630 pid=1751 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.809000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:43:40.810000 audit[1752]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1752 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.810000 audit[1752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffdd599d50 a2=0 a3=ffffbf6a86c0 items=0 ppid=1630 pid=1752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.810000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:43:40.812000 audit[1754]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1754 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.812000 audit[1754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=fffff8034d10 a2=0 a3=ffff91de96c0 items=0 ppid=1630 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.812000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:43:40.816000 audit[1757]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1757 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:43:40.816000 audit[1757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd0e17410 a2=0 a3=ffffb135a6c0 items=0 ppid=1630 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.816000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:43:40.821000 audit[1761]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1761 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:43:40.821000 audit[1761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffd908b670 a2=0 a3=ffff926b76c0 items=0 ppid=1630 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.821000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:43:40.821000 audit[1761]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1761 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:43:40.821000 audit[1761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1860 a0=3 a1=ffffd908b670 a2=0 a3=ffff926b76c0 items=0 ppid=1630 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:43:40.821000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:43:40.855552 kubelet[1444]: E1002 19:43:40.855522 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:41.646256 kubelet[1444]: E1002 19:43:41.646194 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:41.856739 kubelet[1444]: E1002 19:43:41.856697 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:42.646361 kubelet[1444]: E1002 19:43:42.646286 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:43.647006 kubelet[1444]: E1002 19:43:43.646954 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:44.079527 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount455873345.mount: Deactivated successfully. 
Oct 2 19:43:44.647299 kubelet[1444]: E1002 19:43:44.647253 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:45.647982 kubelet[1444]: E1002 19:43:45.647939 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:45.690874 kubelet[1444]: E1002 19:43:45.690842 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:46.430939 env[1145]: time="2023-10-02T19:43:46.430884446Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:46.432451 env[1145]: time="2023-10-02T19:43:46.432409943Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:46.433914 env[1145]: time="2023-10-02T19:43:46.433880401Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:43:46.435178 env[1145]: time="2023-10-02T19:43:46.435142822Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:4204f456d3e4a8a7ac29109cf66dfd9b53e82d3f2e8574599e358096d890b8db\"" Oct 2 19:43:46.436759 env[1145]: time="2023-10-02T19:43:46.436727838Z" level=info msg="CreateContainer within sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:43:46.443843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1904637414.mount: Deactivated successfully. Oct 2 19:43:46.455409 env[1145]: time="2023-10-02T19:43:46.455350197Z" level=info msg="CreateContainer within sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d\"" Oct 2 19:43:46.455906 env[1145]: time="2023-10-02T19:43:46.455812470Z" level=info msg="StartContainer for \"d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d\"" Oct 2 19:43:46.472780 systemd[1]: Started cri-containerd-d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d.scope. Oct 2 19:43:46.497382 systemd[1]: cri-containerd-d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d.scope: Deactivated successfully. 
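Annotation: the recurring "Nameserver limits exceeded" warnings above come from kubelet capping the resolv.conf it hands to pods at three nameservers and dropping the rest, which is why the applied line is exactly "1.1.1.1 1.0.0.1 8.8.8.8". A rough sketch of that trimming; the path and the cap of three are assumptions matching the usual resolv.conf/kubelet limit:

    MAX_NAMESERVERS = 3  # conventional resolv.conf limit that kubelet applies per pod

    def split_nameservers(resolv_conf="/etc/resolv.conf"):
        servers = []
        with open(resolv_conf) as fh:
            for line in fh:
                parts = line.split()
                if len(parts) >= 2 and parts[0] == "nameserver":
                    servers.append(parts[1])
        return servers[:MAX_NAMESERVERS], servers[MAX_NAMESERVERS:]

    kept, dropped = split_nameservers()
    print("applied:", " ".join(kept), "| omitted:", " ".join(dropped) or "none")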
Oct 2 19:43:46.650017 kubelet[1444]: E1002 19:43:46.649831 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:46.755578 env[1145]: time="2023-10-02T19:43:46.755529265Z" level=info msg="shim disconnected" id=d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d Oct 2 19:43:46.755578 env[1145]: time="2023-10-02T19:43:46.755578665Z" level=warning msg="cleaning up after shim disconnected" id=d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d namespace=k8s.io Oct 2 19:43:46.755578 env[1145]: time="2023-10-02T19:43:46.755587585Z" level=info msg="cleaning up dead shim" Oct 2 19:43:46.764026 env[1145]: time="2023-10-02T19:43:46.763969578Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1790 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:46.764340 env[1145]: time="2023-10-02T19:43:46.764239454Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:43:46.764547 env[1145]: time="2023-10-02T19:43:46.764483050Z" level=error msg="Failed to pipe stderr of container \"d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d\"" error="reading from a closed fifo" Oct 2 19:43:46.769617 env[1145]: time="2023-10-02T19:43:46.769576853Z" level=error msg="Failed to pipe stdout of container \"d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d\"" error="reading from a closed fifo" Oct 2 19:43:46.803102 env[1145]: time="2023-10-02T19:43:46.803034788Z" level=error msg="StartContainer for \"d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:46.803475 kubelet[1444]: E1002 19:43:46.803297 1444 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d" Oct 2 19:43:46.803475 kubelet[1444]: E1002 19:43:46.803417 1444 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:46.803475 kubelet[1444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:46.803475 kubelet[1444]: rm /hostbin/cilium-mount Oct 2 19:43:46.803642 kubelet[1444]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mrv5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:46.803723 kubelet[1444]: E1002 19:43:46.803451 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:43:46.864910 kubelet[1444]: E1002 19:43:46.864874 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:46.866833 env[1145]: time="2023-10-02T19:43:46.866791586Z" level=info msg="CreateContainer within sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:43:46.878692 env[1145]: time="2023-10-02T19:43:46.878632487Z" level=info msg="CreateContainer within sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca\"" Oct 2 19:43:46.879413 env[1145]: time="2023-10-02T19:43:46.879375356Z" level=info msg="StartContainer for \"8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca\"" Oct 2 19:43:46.894078 systemd[1]: Started cri-containerd-8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca.scope. Oct 2 19:43:46.922921 systemd[1]: cri-containerd-8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca.scope: Deactivated successfully. 
Oct 2 19:43:46.944870 env[1145]: time="2023-10-02T19:43:46.943270031Z" level=info msg="shim disconnected" id=8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca Oct 2 19:43:46.944870 env[1145]: time="2023-10-02T19:43:46.944835368Z" level=warning msg="cleaning up after shim disconnected" id=8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca namespace=k8s.io Oct 2 19:43:46.945800 env[1145]: time="2023-10-02T19:43:46.945637356Z" level=info msg="cleaning up dead shim" Oct 2 19:43:46.955762 env[1145]: time="2023-10-02T19:43:46.954788658Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:43:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1825 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:43:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:43:46.955762 env[1145]: time="2023-10-02T19:43:46.955033494Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:43:46.955762 env[1145]: time="2023-10-02T19:43:46.955552486Z" level=error msg="Failed to pipe stdout of container \"8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca\"" error="reading from a closed fifo" Oct 2 19:43:46.955762 env[1145]: time="2023-10-02T19:43:46.955558006Z" level=error msg="Failed to pipe stderr of container \"8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca\"" error="reading from a closed fifo" Oct 2 19:43:46.957298 env[1145]: time="2023-10-02T19:43:46.957172862Z" level=error msg="StartContainer for \"8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:43:46.957442 kubelet[1444]: E1002 19:43:46.957405 1444 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca" Oct 2 19:43:46.957633 kubelet[1444]: E1002 19:43:46.957534 1444 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:43:46.957633 kubelet[1444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:43:46.957633 kubelet[1444]: rm /hostbin/cilium-mount Oct 2 19:43:46.957633 kubelet[1444]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mrv5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:43:46.957877 kubelet[1444]: E1002 19:43:46.957570 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:43:47.442225 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d-rootfs.mount: Deactivated successfully. 
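Annotation: both start attempts above die at the same step: during container init, runc writes the requested SELinux label to /proc/self/attr/keycreate (the mount-cgroup init container asks for Type:spc_t, Level:s0 in the spec dump above), and the kernel rejects that write, which surfaces as "write /proc/self/attr/keycreate: invalid argument" followed by the dead-shim cleanup. A small diagnostic sketch, not a fix, that repeats just that write on the node and reports the errno; the full context string is an assumption assembled from the SELinuxOptions above, with user/role left at their usual defaults:

    import errno
    import os

    # Assumed context built from SELinuxOptions{Type:spc_t, Level:s0} in the spec above.
    label = b"system_u:system_r:spc_t:s0"

    try:
        fd = os.open("/proc/self/attr/keycreate", os.O_WRONLY)
        try:
            os.write(fd, label)  # the same attribute write that fails during container init
            print("keycreate accepted:", label.decode())
        finally:
            os.close(fd)
    except OSError as exc:
        # EINVAL here typically means SELinux is disabled or the context is not valid
        # for the loaded policy, matching the StartContainer failures above.
        print("keycreate rejected:", errno.errorcode.get(exc.errno, str(exc.errno)))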
Oct 2 19:43:47.651071 kubelet[1444]: E1002 19:43:47.651006 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:47.868612 kubelet[1444]: I1002 19:43:47.868314 1444 scope.go:115] "RemoveContainer" containerID="d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d" Oct 2 19:43:47.868612 kubelet[1444]: I1002 19:43:47.868584 1444 scope.go:115] "RemoveContainer" containerID="d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d" Oct 2 19:43:47.869636 env[1145]: time="2023-10-02T19:43:47.869601827Z" level=info msg="RemoveContainer for \"d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d\"" Oct 2 19:43:47.869898 env[1145]: time="2023-10-02T19:43:47.869625987Z" level=info msg="RemoveContainer for \"d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d\"" Oct 2 19:43:47.869898 env[1145]: time="2023-10-02T19:43:47.869762865Z" level=error msg="RemoveContainer for \"d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d\" failed" error="failed to set removing state for container \"d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d\": container is already in removing state" Oct 2 19:43:47.869957 kubelet[1444]: E1002 19:43:47.869861 1444 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d\": container is already in removing state" containerID="d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d" Oct 2 19:43:47.869957 kubelet[1444]: E1002 19:43:47.869906 1444 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d": container is already in removing state; Skipping pod "cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)" Oct 2 19:43:47.870011 kubelet[1444]: E1002 19:43:47.869968 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:47.870172 kubelet[1444]: E1002 19:43:47.870159 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:43:47.871927 env[1145]: time="2023-10-02T19:43:47.871872435Z" level=info msg="RemoveContainer for \"d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d\" returns successfully" Oct 2 19:43:48.652045 kubelet[1444]: E1002 19:43:48.651994 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:48.870770 kubelet[1444]: E1002 19:43:48.870744 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:43:48.870967 kubelet[1444]: E1002 19:43:48.870949 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup 
pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:43:49.659238 kubelet[1444]: E1002 19:43:49.659176 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:49.861636 kubelet[1444]: W1002 19:43:49.861583 1444 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50f8cf99_6fc8_4911_8d52_f292c9c5ec4c.slice/cri-containerd-d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d.scope WatchSource:0}: container "d477c1192fee90ec0b48cf52ab663d18f5c50628efe913a3d1b0fb11bccb532d" in namespace "k8s.io": not found Oct 2 19:43:50.659610 kubelet[1444]: E1002 19:43:50.659573 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:50.691283 kubelet[1444]: E1002 19:43:50.691261 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:51.660237 kubelet[1444]: E1002 19:43:51.660189 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:52.660526 kubelet[1444]: E1002 19:43:52.660473 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:52.968492 kubelet[1444]: W1002 19:43:52.968376 1444 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50f8cf99_6fc8_4911_8d52_f292c9c5ec4c.slice/cri-containerd-8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca.scope WatchSource:0}: task 8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca not found: not found Oct 2 19:43:53.660978 kubelet[1444]: E1002 19:43:53.660935 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:54.662056 kubelet[1444]: E1002 19:43:54.662005 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:55.662158 kubelet[1444]: E1002 19:43:55.662106 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:55.692850 kubelet[1444]: E1002 19:43:55.692823 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:43:56.662293 kubelet[1444]: E1002 19:43:56.662252 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:57.663202 kubelet[1444]: E1002 19:43:57.663154 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:58.663515 kubelet[1444]: E1002 19:43:58.663448 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:43:59.434851 update_engine[1134]: I1002 19:43:59.434795 1134 update_attempter.cc:505] Updating boot flags... 
Oct 2 19:43:59.663808 kubelet[1444]: E1002 19:43:59.663748 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:00.630456 kubelet[1444]: E1002 19:44:00.630411 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:00.664930 kubelet[1444]: E1002 19:44:00.664888 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:00.693464 kubelet[1444]: E1002 19:44:00.693410 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:00.824980 kubelet[1444]: E1002 19:44:00.824831 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:00.826934 env[1145]: time="2023-10-02T19:44:00.826886321Z" level=info msg="CreateContainer within sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:44:00.835367 env[1145]: time="2023-10-02T19:44:00.835276750Z" level=info msg="CreateContainer within sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2\"" Oct 2 19:44:00.836175 env[1145]: time="2023-10-02T19:44:00.836151544Z" level=info msg="StartContainer for \"e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2\"" Oct 2 19:44:00.853911 systemd[1]: Started cri-containerd-e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2.scope. Oct 2 19:44:00.870658 systemd[1]: cri-containerd-e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2.scope: Deactivated successfully. 
Oct 2 19:44:00.881391 env[1145]: time="2023-10-02T19:44:00.881211669Z" level=info msg="shim disconnected" id=e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2 Oct 2 19:44:00.881391 env[1145]: time="2023-10-02T19:44:00.881329708Z" level=warning msg="cleaning up after shim disconnected" id=e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2 namespace=k8s.io Oct 2 19:44:00.881391 env[1145]: time="2023-10-02T19:44:00.881339468Z" level=info msg="cleaning up dead shim" Oct 2 19:44:00.890702 env[1145]: time="2023-10-02T19:44:00.890620451Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:44:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1879 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:44:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:44:00.890930 env[1145]: time="2023-10-02T19:44:00.890864930Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:44:00.891204 env[1145]: time="2023-10-02T19:44:00.891041209Z" level=error msg="Failed to pipe stdout of container \"e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2\"" error="reading from a closed fifo" Oct 2 19:44:00.891204 env[1145]: time="2023-10-02T19:44:00.891106928Z" level=error msg="Failed to pipe stderr of container \"e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2\"" error="reading from a closed fifo" Oct 2 19:44:00.892688 env[1145]: time="2023-10-02T19:44:00.892641359Z" level=error msg="StartContainer for \"e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:44:00.893125 kubelet[1444]: E1002 19:44:00.892940 1444 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2" Oct 2 19:44:00.893125 kubelet[1444]: E1002 19:44:00.893042 1444 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:44:00.893125 kubelet[1444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:44:00.893125 kubelet[1444]: rm /hostbin/cilium-mount Oct 2 19:44:00.893317 kubelet[1444]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mrv5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:44:00.893392 kubelet[1444]: E1002 19:44:00.893105 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:44:01.665896 kubelet[1444]: E1002 19:44:01.665840 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:01.832990 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2-rootfs.mount: Deactivated successfully. 
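Annotation: the restart delay in the "Error syncing pod" messages is kubelet's CrashLoopBackOff: it starts at 10s (above) and roughly doubles after each failed attempt (20s just below), up to a cap. A tiny sketch of that progression; the 5-minute cap is the usual kubelet default and is an assumption here:

    def crashloop_backoff(initial=10, cap=300, attempts=8):
        # double the delay after every failed start, clamped at the cap
        delay = initial
        for _ in range(attempts):
            yield delay
            delay = min(delay * 2, cap)

    print(list(crashloop_backoff()))  # [10, 20, 40, 80, 160, 300, 300, 300]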
Oct 2 19:44:01.893565 kubelet[1444]: I1002 19:44:01.893541 1444 scope.go:115] "RemoveContainer" containerID="8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca" Oct 2 19:44:01.893964 kubelet[1444]: I1002 19:44:01.893867 1444 scope.go:115] "RemoveContainer" containerID="8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca" Oct 2 19:44:01.894974 env[1145]: time="2023-10-02T19:44:01.894937407Z" level=info msg="RemoveContainer for \"8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca\"" Oct 2 19:44:01.895468 env[1145]: time="2023-10-02T19:44:01.895440164Z" level=info msg="RemoveContainer for \"8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca\"" Oct 2 19:44:01.895668 env[1145]: time="2023-10-02T19:44:01.895633363Z" level=error msg="RemoveContainer for \"8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca\" failed" error="failed to set removing state for container \"8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca\": container is already in removing state" Oct 2 19:44:01.895930 kubelet[1444]: E1002 19:44:01.895874 1444 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca\": container is already in removing state" containerID="8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca" Oct 2 19:44:01.895930 kubelet[1444]: I1002 19:44:01.895908 1444 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca} err="rpc error: code = Unknown desc = failed to set removing state for container \"8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca\": container is already in removing state" Oct 2 19:44:01.897469 env[1145]: time="2023-10-02T19:44:01.897437153Z" level=info msg="RemoveContainer for \"8eba7c40c3910593d4f6a6a3e9385f1e7a96ab19b008e4a42d91846bf28579ca\" returns successfully" Oct 2 19:44:01.898119 kubelet[1444]: E1002 19:44:01.897685 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:01.898119 kubelet[1444]: E1002 19:44:01.897887 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:44:02.666263 kubelet[1444]: E1002 19:44:02.666219 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:03.667402 kubelet[1444]: E1002 19:44:03.667339 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:03.985599 kubelet[1444]: W1002 19:44:03.985466 1444 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50f8cf99_6fc8_4911_8d52_f292c9c5ec4c.slice/cri-containerd-e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2.scope WatchSource:0}: task e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2 not found: not found Oct 2 19:44:04.667966 kubelet[1444]: E1002 19:44:04.667903 1444 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:05.668164 kubelet[1444]: E1002 19:44:05.668124 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:05.694827 kubelet[1444]: E1002 19:44:05.694776 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:06.668867 kubelet[1444]: E1002 19:44:06.668825 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:07.669449 kubelet[1444]: E1002 19:44:07.669412 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:08.670340 kubelet[1444]: E1002 19:44:08.670295 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:09.670974 kubelet[1444]: E1002 19:44:09.670930 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:10.671655 kubelet[1444]: E1002 19:44:10.671599 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:10.696073 kubelet[1444]: E1002 19:44:10.696035 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:11.672628 kubelet[1444]: E1002 19:44:11.672574 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:12.673667 kubelet[1444]: E1002 19:44:12.673601 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:13.673957 kubelet[1444]: E1002 19:44:13.673897 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:14.674254 kubelet[1444]: E1002 19:44:14.674184 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:14.824517 kubelet[1444]: E1002 19:44:14.824368 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:14.824671 kubelet[1444]: E1002 19:44:14.824585 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:44:15.674957 kubelet[1444]: E1002 19:44:15.674917 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:15.696643 kubelet[1444]: E1002 19:44:15.696605 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:16.675965 kubelet[1444]: E1002 19:44:16.675931 1444 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:17.677199 kubelet[1444]: E1002 19:44:17.677155 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:18.677697 kubelet[1444]: E1002 19:44:18.677663 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:19.678610 kubelet[1444]: E1002 19:44:19.678549 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:20.630186 kubelet[1444]: E1002 19:44:20.630127 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:20.679681 kubelet[1444]: E1002 19:44:20.679641 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:20.697012 kubelet[1444]: E1002 19:44:20.696985 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:21.680356 kubelet[1444]: E1002 19:44:21.680319 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:22.681766 kubelet[1444]: E1002 19:44:22.681716 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:23.682494 kubelet[1444]: E1002 19:44:23.682451 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:24.682944 kubelet[1444]: E1002 19:44:24.682905 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:25.684059 kubelet[1444]: E1002 19:44:25.684014 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:25.698017 kubelet[1444]: E1002 19:44:25.697992 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:26.684869 kubelet[1444]: E1002 19:44:26.684829 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:27.686276 kubelet[1444]: E1002 19:44:27.686228 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:27.824756 kubelet[1444]: E1002 19:44:27.824705 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:27.826633 env[1145]: time="2023-10-02T19:44:27.826583263Z" level=info msg="CreateContainer within sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:44:27.836426 env[1145]: time="2023-10-02T19:44:27.836366652Z" level=info msg="CreateContainer within sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id 
\"be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132\"" Oct 2 19:44:27.836866 env[1145]: time="2023-10-02T19:44:27.836833372Z" level=info msg="StartContainer for \"be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132\"" Oct 2 19:44:27.855407 systemd[1]: Started cri-containerd-be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132.scope. Oct 2 19:44:27.874019 systemd[1]: cri-containerd-be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132.scope: Deactivated successfully. Oct 2 19:44:27.877557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132-rootfs.mount: Deactivated successfully. Oct 2 19:44:27.884935 env[1145]: time="2023-10-02T19:44:27.884892400Z" level=info msg="shim disconnected" id=be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132 Oct 2 19:44:27.885136 env[1145]: time="2023-10-02T19:44:27.885117360Z" level=warning msg="cleaning up after shim disconnected" id=be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132 namespace=k8s.io Oct 2 19:44:27.885211 env[1145]: time="2023-10-02T19:44:27.885199400Z" level=info msg="cleaning up dead shim" Oct 2 19:44:27.893716 env[1145]: time="2023-10-02T19:44:27.893677631Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:44:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1922 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:44:27Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:44:27.894092 env[1145]: time="2023-10-02T19:44:27.894038791Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:44:27.894284 env[1145]: time="2023-10-02T19:44:27.894240070Z" level=error msg="Failed to pipe stdout of container \"be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132\"" error="reading from a closed fifo" Oct 2 19:44:27.894387 env[1145]: time="2023-10-02T19:44:27.894346430Z" level=error msg="Failed to pipe stderr of container \"be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132\"" error="reading from a closed fifo" Oct 2 19:44:27.895550 env[1145]: time="2023-10-02T19:44:27.895495989Z" level=error msg="StartContainer for \"be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:44:27.895734 kubelet[1444]: E1002 19:44:27.895714 1444 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132" Oct 2 19:44:27.895829 kubelet[1444]: E1002 19:44:27.895815 1444 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 
2 19:44:27.895829 kubelet[1444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:44:27.895829 kubelet[1444]: rm /hostbin/cilium-mount Oct 2 19:44:27.895829 kubelet[1444]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mrv5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:44:27.895958 kubelet[1444]: E1002 19:44:27.895854 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:44:27.935325 kubelet[1444]: I1002 19:44:27.935008 1444 scope.go:115] "RemoveContainer" containerID="e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2" Oct 2 19:44:27.935557 kubelet[1444]: I1002 19:44:27.935533 1444 scope.go:115] "RemoveContainer" containerID="e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2" Oct 2 19:44:27.936714 env[1145]: time="2023-10-02T19:44:27.936646625Z" level=info msg="RemoveContainer for \"e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2\"" Oct 2 19:44:27.938017 env[1145]: time="2023-10-02T19:44:27.937983823Z" level=info msg="RemoveContainer for \"e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2\"" Oct 2 19:44:27.938263 env[1145]: time="2023-10-02T19:44:27.938189423Z" level=error msg="RemoveContainer for \"e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2\" failed" error="rpc error: code = NotFound desc = get container info: container \"e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2\" in namespace \"k8s.io\": not found" Oct 2 19:44:27.938401 kubelet[1444]: E1002 19:44:27.938382 1444 remote_runtime.go:531] "RemoveContainer from runtime service failed" 
err="rpc error: code = NotFound desc = get container info: container \"e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2\" in namespace \"k8s.io\": not found" containerID="e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2" Oct 2 19:44:27.938459 kubelet[1444]: E1002 19:44:27.938443 1444 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2" in namespace "k8s.io": not found; Skipping pod "cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)" Oct 2 19:44:27.938539 kubelet[1444]: E1002 19:44:27.938520 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:27.938903 kubelet[1444]: E1002 19:44:27.938874 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:44:27.939238 env[1145]: time="2023-10-02T19:44:27.939209382Z" level=info msg="RemoveContainer for \"e9b5d51cec94b7cc54cd4c32f0d662248c80a9ab4edfd33efb2de5932fd90ea2\" returns successfully" Oct 2 19:44:28.686695 kubelet[1444]: E1002 19:44:28.686648 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:29.687261 kubelet[1444]: E1002 19:44:29.687210 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:30.687962 kubelet[1444]: E1002 19:44:30.687912 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:30.699254 kubelet[1444]: E1002 19:44:30.699222 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:30.992634 kubelet[1444]: W1002 19:44:30.992509 1444 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50f8cf99_6fc8_4911_8d52_f292c9c5ec4c.slice/cri-containerd-be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132.scope WatchSource:0}: task be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132 not found: not found Oct 2 19:44:31.688128 kubelet[1444]: E1002 19:44:31.688071 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:32.688651 kubelet[1444]: E1002 19:44:32.688596 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:33.689157 kubelet[1444]: E1002 19:44:33.689108 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:34.690127 kubelet[1444]: E1002 19:44:34.690073 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:35.690384 kubelet[1444]: E1002 19:44:35.690345 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:44:35.699902 kubelet[1444]: E1002 19:44:35.699885 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:36.691095 kubelet[1444]: E1002 19:44:36.691052 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:37.691979 kubelet[1444]: E1002 19:44:37.691916 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:38.692618 kubelet[1444]: E1002 19:44:38.692561 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:39.693186 kubelet[1444]: E1002 19:44:39.693136 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:39.825145 kubelet[1444]: E1002 19:44:39.825117 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:39.825517 kubelet[1444]: E1002 19:44:39.825482 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:44:40.630856 kubelet[1444]: E1002 19:44:40.630827 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:40.694227 kubelet[1444]: E1002 19:44:40.694187 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:40.700612 kubelet[1444]: E1002 19:44:40.700585 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:41.694782 kubelet[1444]: E1002 19:44:41.694749 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:42.695256 kubelet[1444]: E1002 19:44:42.695178 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:43.696731 kubelet[1444]: E1002 19:44:43.696699 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:44.697252 kubelet[1444]: E1002 19:44:44.697206 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:45.697698 kubelet[1444]: E1002 19:44:45.697654 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:45.701435 kubelet[1444]: E1002 19:44:45.701417 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:46.698089 kubelet[1444]: E1002 19:44:46.698048 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:44:47.698545 kubelet[1444]: E1002 19:44:47.698497 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:48.699577 kubelet[1444]: E1002 19:44:48.699541 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:49.700632 kubelet[1444]: E1002 19:44:49.700587 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:50.700685 kubelet[1444]: E1002 19:44:50.700653 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:50.701735 kubelet[1444]: E1002 19:44:50.701717 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:51.701252 kubelet[1444]: E1002 19:44:51.701217 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:52.701620 kubelet[1444]: E1002 19:44:52.701577 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:53.702141 kubelet[1444]: E1002 19:44:53.702101 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:54.703348 kubelet[1444]: E1002 19:44:54.703299 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:54.826286 kubelet[1444]: E1002 19:44:54.826259 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:54.826540 kubelet[1444]: E1002 19:44:54.826523 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:44:55.702750 kubelet[1444]: E1002 19:44:55.702708 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:44:55.703782 kubelet[1444]: E1002 19:44:55.703765 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:56.704356 kubelet[1444]: E1002 19:44:56.704320 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:56.824541 kubelet[1444]: E1002 19:44:56.824494 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:44:57.705692 kubelet[1444]: E1002 19:44:57.705648 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:44:58.706638 kubelet[1444]: E1002 19:44:58.706602 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Oct 2 19:44:59.707220 kubelet[1444]: E1002 19:44:59.707169 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:00.630388 kubelet[1444]: E1002 19:45:00.630342 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:00.703158 kubelet[1444]: E1002 19:45:00.703127 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:00.708285 kubelet[1444]: E1002 19:45:00.708260 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:01.708627 kubelet[1444]: E1002 19:45:01.708577 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:02.709160 kubelet[1444]: E1002 19:45:02.709104 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:03.710119 kubelet[1444]: E1002 19:45:03.710081 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:04.710972 kubelet[1444]: E1002 19:45:04.710934 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:05.704395 kubelet[1444]: E1002 19:45:05.704365 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:05.711556 kubelet[1444]: E1002 19:45:05.711530 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:05.824736 kubelet[1444]: E1002 19:45:05.824703 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:05.824922 kubelet[1444]: E1002 19:45:05.824907 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:45:06.712317 kubelet[1444]: E1002 19:45:06.712274 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:07.713039 kubelet[1444]: E1002 19:45:07.712963 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:08.713579 kubelet[1444]: E1002 19:45:08.713536 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:09.713704 kubelet[1444]: E1002 19:45:09.713657 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:10.705144 kubelet[1444]: E1002 19:45:10.705117 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:10.714341 
kubelet[1444]: E1002 19:45:10.714311 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:11.715383 kubelet[1444]: E1002 19:45:11.715338 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:12.715666 kubelet[1444]: E1002 19:45:12.715621 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:13.716325 kubelet[1444]: E1002 19:45:13.716282 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:14.717413 kubelet[1444]: E1002 19:45:14.717378 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:15.706714 kubelet[1444]: E1002 19:45:15.706639 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:15.718387 kubelet[1444]: E1002 19:45:15.718341 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:16.718826 kubelet[1444]: E1002 19:45:16.718742 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:17.719822 kubelet[1444]: E1002 19:45:17.719781 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:17.824903 kubelet[1444]: E1002 19:45:17.824870 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:17.826727 env[1145]: time="2023-10-02T19:45:17.826688101Z" level=info msg="CreateContainer within sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:45:17.833902 env[1145]: time="2023-10-02T19:45:17.833863095Z" level=info msg="CreateContainer within sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4\"" Oct 2 19:45:17.834412 env[1145]: time="2023-10-02T19:45:17.834381738Z" level=info msg="StartContainer for \"a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4\"" Oct 2 19:45:17.849088 systemd[1]: Started cri-containerd-a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4.scope. Oct 2 19:45:17.898376 systemd[1]: cri-containerd-a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4.scope: Deactivated successfully. Oct 2 19:45:17.901425 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4-rootfs.mount: Deactivated successfully. 
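The scope start/deactivate pair above (the unit for container a9d12387… starts and is deactivated within milliseconds) is the same failure signature already logged for attempt 3 at 19:44:27, and the shim output that follows repeats it: runc cannot start the container process because writing the requested SELinux label to /proc/self/attr/keycreate is rejected with "invalid argument". The spec dumped earlier asks for SELinuxOptions type spc_t, level s0. The fragment below is a simplified illustration of that failing step under stated assumptions, not runc's code, and the full label string is assumed for illustration.

// Simplified illustration of the failing step from the runc error, not
// runc's actual code: before exec'ing the container process the runtime
// writes the requested SELinux label to /proc/self/attr/keycreate, and on
// this host the kernel rejects the write with EINVAL.
package main

import (
	"fmt"
	"os"
)

func setKeyCreateLabel(label string) error {
	// An EINVAL from this write surfaces in the kubelet log as
	// "write /proc/self/attr/keycreate: invalid argument: unknown".
	return os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0)
}

func main() {
	// Label assumed from the SELinuxOptions in the spec dump (Type:spc_t,
	// Level:s0); the user and role parts are illustrative only.
	if err := setKeyCreateLabel("system_u:system_r:spc_t:s0"); err != nil {
		fmt.Println("container init would fail with:", err)
	}
}

Each failed attempt then ends the same way in the log: the exited container is removed and the pod goes back into CrashLoopBackOff.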
Oct 2 19:45:17.906301 env[1145]: time="2023-10-02T19:45:17.906255397Z" level=info msg="shim disconnected" id=a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4 Oct 2 19:45:17.906452 env[1145]: time="2023-10-02T19:45:17.906304477Z" level=warning msg="cleaning up after shim disconnected" id=a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4 namespace=k8s.io Oct 2 19:45:17.906452 env[1145]: time="2023-10-02T19:45:17.906312997Z" level=info msg="cleaning up dead shim" Oct 2 19:45:17.914120 env[1145]: time="2023-10-02T19:45:17.914076673Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:45:17Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1963 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:45:17Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:45:17.914362 env[1145]: time="2023-10-02T19:45:17.914312674Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:45:17.914535 env[1145]: time="2023-10-02T19:45:17.914476155Z" level=error msg="Failed to pipe stdout of container \"a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4\"" error="reading from a closed fifo" Oct 2 19:45:17.914639 env[1145]: time="2023-10-02T19:45:17.914547316Z" level=error msg="Failed to pipe stderr of container \"a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4\"" error="reading from a closed fifo" Oct 2 19:45:17.915849 env[1145]: time="2023-10-02T19:45:17.915813762Z" level=error msg="StartContainer for \"a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:45:17.916028 kubelet[1444]: E1002 19:45:17.916008 1444 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4" Oct 2 19:45:17.916125 kubelet[1444]: E1002 19:45:17.916112 1444 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:45:17.916125 kubelet[1444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:45:17.916125 kubelet[1444]: rm /hostbin/cilium-mount Oct 2 19:45:17.916125 kubelet[1444]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mrv5n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:45:17.916258 kubelet[1444]: E1002 19:45:17.916149 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:45:18.011668 kubelet[1444]: I1002 19:45:18.011631 1444 scope.go:115] "RemoveContainer" containerID="be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132" Oct 2 19:45:18.011929 kubelet[1444]: I1002 19:45:18.011901 1444 scope.go:115] "RemoveContainer" containerID="be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132" Oct 2 19:45:18.013125 env[1145]: time="2023-10-02T19:45:18.013071979Z" level=info msg="RemoveContainer for \"be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132\"" Oct 2 19:45:18.013481 env[1145]: time="2023-10-02T19:45:18.013409981Z" level=info msg="RemoveContainer for \"be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132\"" Oct 2 19:45:18.013673 env[1145]: time="2023-10-02T19:45:18.013640542Z" level=error msg="RemoveContainer for \"be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132\" failed" error="failed to set removing state for container \"be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132\": container is already in removing state" Oct 2 19:45:18.013913 kubelet[1444]: E1002 19:45:18.013892 1444 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132\": container is already in removing state" 
containerID="be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132" Oct 2 19:45:18.013972 kubelet[1444]: I1002 19:45:18.013931 1444 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132} err="rpc error: code = Unknown desc = failed to set removing state for container \"be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132\": container is already in removing state" Oct 2 19:45:18.015417 env[1145]: time="2023-10-02T19:45:18.015378750Z" level=info msg="RemoveContainer for \"be316c6bbaddb213b3da9fb1b66eb2e2404a32062b082525830b92f79e3ab132\" returns successfully" Oct 2 19:45:18.015792 kubelet[1444]: E1002 19:45:18.015698 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:18.016415 kubelet[1444]: E1002 19:45:18.016391 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:45:18.721275 kubelet[1444]: E1002 19:45:18.721226 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:19.722364 kubelet[1444]: E1002 19:45:19.722331 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:20.630635 kubelet[1444]: E1002 19:45:20.630587 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:20.707603 kubelet[1444]: E1002 19:45:20.707559 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:20.723846 kubelet[1444]: E1002 19:45:20.723822 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:21.010919 kubelet[1444]: W1002 19:45:21.010887 1444 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50f8cf99_6fc8_4911_8d52_f292c9c5ec4c.slice/cri-containerd-a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4.scope WatchSource:0}: task a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4 not found: not found Oct 2 19:45:21.724096 kubelet[1444]: E1002 19:45:21.724038 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:22.725054 kubelet[1444]: E1002 19:45:22.724994 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:23.725902 kubelet[1444]: E1002 19:45:23.725850 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:24.726418 kubelet[1444]: E1002 19:45:24.726364 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:25.709584 kubelet[1444]: E1002 19:45:25.709546 1444 kubelet.go:2373] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:25.727080 kubelet[1444]: E1002 19:45:25.727020 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:26.727856 kubelet[1444]: E1002 19:45:26.727789 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:27.728400 kubelet[1444]: E1002 19:45:27.728323 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:28.728869 kubelet[1444]: E1002 19:45:28.728824 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:29.729284 kubelet[1444]: E1002 19:45:29.729235 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:29.825147 kubelet[1444]: E1002 19:45:29.825107 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:29.825371 kubelet[1444]: E1002 19:45:29.825349 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:45:30.710421 kubelet[1444]: E1002 19:45:30.710390 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:30.729618 kubelet[1444]: E1002 19:45:30.729570 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:31.730083 kubelet[1444]: E1002 19:45:31.730032 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:32.730854 kubelet[1444]: E1002 19:45:32.730786 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:33.731087 kubelet[1444]: E1002 19:45:33.731037 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:34.731697 kubelet[1444]: E1002 19:45:34.731644 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:35.711540 kubelet[1444]: E1002 19:45:35.711495 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:35.732796 kubelet[1444]: E1002 19:45:35.732747 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:36.733453 kubelet[1444]: E1002 19:45:36.733406 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:37.734802 kubelet[1444]: E1002 19:45:37.734744 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:38.735523 kubelet[1444]: E1002 19:45:38.735467 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:39.735978 kubelet[1444]: E1002 19:45:39.735937 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:40.630549 kubelet[1444]: E1002 19:45:40.630515 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:40.712062 kubelet[1444]: E1002 19:45:40.712031 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:40.736475 kubelet[1444]: E1002 19:45:40.736423 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:41.737542 kubelet[1444]: E1002 19:45:41.737476 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:42.737905 kubelet[1444]: E1002 19:45:42.737854 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:42.825118 kubelet[1444]: E1002 19:45:42.825091 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:42.825463 kubelet[1444]: E1002 19:45:42.825448 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:45:43.738412 kubelet[1444]: E1002 19:45:43.738357 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:44.739456 kubelet[1444]: E1002 19:45:44.739406 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:45.713021 kubelet[1444]: E1002 19:45:45.712993 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:45.740227 kubelet[1444]: E1002 19:45:45.740195 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:46.740603 kubelet[1444]: E1002 19:45:46.740562 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:47.741465 kubelet[1444]: E1002 19:45:47.741415 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:48.741938 kubelet[1444]: E1002 19:45:48.741905 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:49.742969 kubelet[1444]: E1002 19:45:49.742905 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:50.713755 kubelet[1444]: E1002 
19:45:50.713728 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:50.743912 kubelet[1444]: E1002 19:45:50.743882 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:51.744207 kubelet[1444]: E1002 19:45:51.744157 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:52.744475 kubelet[1444]: E1002 19:45:52.744406 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:53.745094 kubelet[1444]: E1002 19:45:53.745053 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:54.746178 kubelet[1444]: E1002 19:45:54.746138 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:55.714840 kubelet[1444]: E1002 19:45:55.714813 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:45:55.747102 kubelet[1444]: E1002 19:45:55.747072 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:55.825042 kubelet[1444]: E1002 19:45:55.825017 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:45:55.825394 kubelet[1444]: E1002 19:45:55.825378 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:45:56.748081 kubelet[1444]: E1002 19:45:56.748014 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:57.748448 kubelet[1444]: E1002 19:45:57.748410 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:58.749924 kubelet[1444]: E1002 19:45:58.749878 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:45:59.750561 kubelet[1444]: E1002 19:45:59.750494 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:00.630900 kubelet[1444]: E1002 19:46:00.630859 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:00.715769 kubelet[1444]: E1002 19:46:00.715744 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:00.751122 kubelet[1444]: E1002 19:46:00.751085 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:01.751662 kubelet[1444]: E1002 19:46:01.751597 1444 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:02.752002 kubelet[1444]: E1002 19:46:02.751958 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:03.752308 kubelet[1444]: E1002 19:46:03.752259 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:04.752879 kubelet[1444]: E1002 19:46:04.752838 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:05.717088 kubelet[1444]: E1002 19:46:05.717053 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:05.753498 kubelet[1444]: E1002 19:46:05.753460 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:06.753926 kubelet[1444]: E1002 19:46:06.753853 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:07.754584 kubelet[1444]: E1002 19:46:07.754541 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:08.755656 kubelet[1444]: E1002 19:46:08.755625 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:08.824565 kubelet[1444]: E1002 19:46:08.824528 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:08.824758 kubelet[1444]: E1002 19:46:08.824743 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:46:09.756362 kubelet[1444]: E1002 19:46:09.756327 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:10.718288 kubelet[1444]: E1002 19:46:10.718263 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:10.757515 kubelet[1444]: E1002 19:46:10.757467 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:11.757810 kubelet[1444]: E1002 19:46:11.757758 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:12.758738 kubelet[1444]: E1002 19:46:12.758657 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:13.759886 kubelet[1444]: E1002 19:46:13.759853 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:14.760741 kubelet[1444]: E1002 19:46:14.760704 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:46:15.719860 kubelet[1444]: E1002 19:46:15.719836 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:15.761253 kubelet[1444]: E1002 19:46:15.761208 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:16.762240 kubelet[1444]: E1002 19:46:16.762201 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:17.763196 kubelet[1444]: E1002 19:46:17.763155 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:18.764133 kubelet[1444]: E1002 19:46:18.764097 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:19.765394 kubelet[1444]: E1002 19:46:19.765345 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:20.630568 kubelet[1444]: E1002 19:46:20.630520 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:20.720572 kubelet[1444]: E1002 19:46:20.720545 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:20.765837 kubelet[1444]: E1002 19:46:20.765807 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:21.767157 kubelet[1444]: E1002 19:46:21.767122 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:21.825067 kubelet[1444]: E1002 19:46:21.825034 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:21.825247 kubelet[1444]: E1002 19:46:21.825234 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:46:22.768032 kubelet[1444]: E1002 19:46:22.767981 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:23.768955 kubelet[1444]: E1002 19:46:23.768879 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:24.769352 kubelet[1444]: E1002 19:46:24.769289 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:25.721744 kubelet[1444]: E1002 19:46:25.721708 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:25.770223 kubelet[1444]: E1002 19:46:25.770175 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:46:25.825269 kubelet[1444]: E1002 19:46:25.825243 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:26.770385 kubelet[1444]: E1002 19:46:26.770327 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:27.771381 kubelet[1444]: E1002 19:46:27.771322 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:28.771873 kubelet[1444]: E1002 19:46:28.771796 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:29.772899 kubelet[1444]: E1002 19:46:29.772843 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:30.723087 kubelet[1444]: E1002 19:46:30.723043 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:30.773512 kubelet[1444]: E1002 19:46:30.773457 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:31.773940 kubelet[1444]: E1002 19:46:31.773872 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:32.775060 kubelet[1444]: E1002 19:46:32.775007 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:33.775814 kubelet[1444]: E1002 19:46:33.775772 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:34.776403 kubelet[1444]: E1002 19:46:34.776360 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:34.825160 kubelet[1444]: E1002 19:46:34.825130 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:34.825523 kubelet[1444]: E1002 19:46:34.825493 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-ndmgh_kube-system(50f8cf99-6fc8-4911-8d52-f292c9c5ec4c)\"" pod="kube-system/cilium-ndmgh" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c Oct 2 19:46:35.723802 kubelet[1444]: E1002 19:46:35.723757 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:35.777420 kubelet[1444]: E1002 19:46:35.777348 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:36.778373 kubelet[1444]: E1002 19:46:36.778327 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:37.779082 kubelet[1444]: E1002 19:46:37.779042 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Oct 2 19:46:38.780122 kubelet[1444]: E1002 19:46:38.780078 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:39.780572 kubelet[1444]: E1002 19:46:39.780534 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:39.861137 env[1145]: time="2023-10-02T19:46:39.861083044Z" level=info msg="StopPodSandbox for \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\"" Oct 2 19:46:39.861521 env[1145]: time="2023-10-02T19:46:39.861159484Z" level=info msg="Container to stop \"a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:46:39.863520 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f-shm.mount: Deactivated successfully. Oct 2 19:46:39.868910 systemd[1]: cri-containerd-244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f.scope: Deactivated successfully. Oct 2 19:46:39.868000 audit: BPF prog-id=71 op=UNLOAD Oct 2 19:46:39.869867 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:46:39.869927 kernel: audit: type=1334 audit(1696275999.868:665): prog-id=71 op=UNLOAD Oct 2 19:46:39.874000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:46:39.876534 kernel: audit: type=1334 audit(1696275999.874:666): prog-id=74 op=UNLOAD Oct 2 19:46:39.892835 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f-rootfs.mount: Deactivated successfully. Oct 2 19:46:39.899520 env[1145]: time="2023-10-02T19:46:39.899454214Z" level=info msg="shim disconnected" id=244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f Oct 2 19:46:39.899520 env[1145]: time="2023-10-02T19:46:39.899517934Z" level=warning msg="cleaning up after shim disconnected" id=244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f namespace=k8s.io Oct 2 19:46:39.899520 env[1145]: time="2023-10-02T19:46:39.899527934Z" level=info msg="cleaning up dead shim" Oct 2 19:46:39.908820 env[1145]: time="2023-10-02T19:46:39.908776346Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:46:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2002 runtime=io.containerd.runc.v2\n" Oct 2 19:46:39.909111 env[1145]: time="2023-10-02T19:46:39.909083187Z" level=info msg="TearDown network for sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" successfully" Oct 2 19:46:39.909148 env[1145]: time="2023-10-02T19:46:39.909111387Z" level=info msg="StopPodSandbox for \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" returns successfully" Oct 2 19:46:40.090234 kubelet[1444]: I1002 19:46:40.088290 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:40.090234 kubelet[1444]: I1002 19:46:40.088342 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-etc-cni-netd\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.090234 kubelet[1444]: I1002 19:46:40.088364 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-lib-modules\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.090234 kubelet[1444]: I1002 19:46:40.088439 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-host-proc-sys-net\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.090234 kubelet[1444]: I1002 19:46:40.088467 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-clustermesh-secrets\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.090234 kubelet[1444]: I1002 19:46:40.088484 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-xtables-lock\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.090828 kubelet[1444]: I1002 19:46:40.088522 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-bpf-maps\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.090828 kubelet[1444]: I1002 19:46:40.088542 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-hubble-tls\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.090828 kubelet[1444]: I1002 19:46:40.088559 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cilium-run\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.090828 kubelet[1444]: I1002 19:46:40.088577 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-host-proc-sys-kernel\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.090828 kubelet[1444]: I1002 19:46:40.088594 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cni-path\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.090828 kubelet[1444]: I1002 19:46:40.088613 1444 reconciler.go:211] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-mrv5n\" (UniqueName: \"kubernetes.io/projected/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-kube-api-access-mrv5n\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.091043 kubelet[1444]: I1002 19:46:40.088700 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cilium-config-path\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.091043 kubelet[1444]: I1002 19:46:40.088724 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cilium-cgroup\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.091043 kubelet[1444]: I1002 19:46:40.088740 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-hostproc\") pod \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\" (UID: \"50f8cf99-6fc8-4911-8d52-f292c9c5ec4c\") " Oct 2 19:46:40.091043 kubelet[1444]: I1002 19:46:40.088764 1444 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-etc-cni-netd\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.091043 kubelet[1444]: I1002 19:46:40.088790 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-hostproc" (OuterVolumeSpecName: "hostproc") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:40.091043 kubelet[1444]: I1002 19:46:40.088808 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:40.091232 kubelet[1444]: I1002 19:46:40.088823 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:40.091232 kubelet[1444]: I1002 19:46:40.089097 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:40.091232 kubelet[1444]: I1002 19:46:40.089132 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:40.091232 kubelet[1444]: I1002 19:46:40.089150 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:40.091232 kubelet[1444]: I1002 19:46:40.089431 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cni-path" (OuterVolumeSpecName: "cni-path") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:40.093505 kubelet[1444]: I1002 19:46:40.089608 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:40.093505 kubelet[1444]: I1002 19:46:40.089657 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:40.093505 kubelet[1444]: W1002 19:46:40.089684 1444 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:46:40.093505 kubelet[1444]: I1002 19:46:40.093060 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:46:40.092602 systemd[1]: var-lib-kubelet-pods-50f8cf99\x2d6fc8\x2d4911\x2d8d52\x2df292c9c5ec4c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:46:40.093797 systemd[1]: var-lib-kubelet-pods-50f8cf99\x2d6fc8\x2d4911\x2d8d52\x2df292c9c5ec4c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 2 19:46:40.094274 kubelet[1444]: I1002 19:46:40.094244 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:46:40.094472 kubelet[1444]: I1002 19:46:40.094443 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:46:40.098451 kubelet[1444]: I1002 19:46:40.096640 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-kube-api-access-mrv5n" (OuterVolumeSpecName: "kube-api-access-mrv5n") pod "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" (UID: "50f8cf99-6fc8-4911-8d52-f292c9c5ec4c"). InnerVolumeSpecName "kube-api-access-mrv5n". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:46:40.096903 systemd[1]: var-lib-kubelet-pods-50f8cf99\x2d6fc8\x2d4911\x2d8d52\x2df292c9c5ec4c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmrv5n.mount: Deactivated successfully. Oct 2 19:46:40.135974 kubelet[1444]: I1002 19:46:40.135928 1444 scope.go:115] "RemoveContainer" containerID="a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4" Oct 2 19:46:40.136967 env[1145]: time="2023-10-02T19:46:40.136929406Z" level=info msg="RemoveContainer for \"a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4\"" Oct 2 19:46:40.139419 env[1145]: time="2023-10-02T19:46:40.138907489Z" level=info msg="RemoveContainer for \"a9d12387059979fd9b56c6a883d81d29ad385d8c774b5a4a6893279f40dfdbb4\" returns successfully" Oct 2 19:46:40.144576 systemd[1]: Removed slice kubepods-burstable-pod50f8cf99_6fc8_4911_8d52_f292c9c5ec4c.slice. 
Oct 2 19:46:40.174436 kubelet[1444]: I1002 19:46:40.174141 1444 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:46:40.174436 kubelet[1444]: E1002 19:46:40.174194 1444 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" containerName="mount-cgroup" Oct 2 19:46:40.174436 kubelet[1444]: E1002 19:46:40.174205 1444 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" containerName="mount-cgroup" Oct 2 19:46:40.174436 kubelet[1444]: E1002 19:46:40.174211 1444 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" containerName="mount-cgroup" Oct 2 19:46:40.174436 kubelet[1444]: I1002 19:46:40.174227 1444 memory_manager.go:345] "RemoveStaleState removing state" podUID="50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" containerName="mount-cgroup" Oct 2 19:46:40.174436 kubelet[1444]: I1002 19:46:40.174233 1444 memory_manager.go:345] "RemoveStaleState removing state" podUID="50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" containerName="mount-cgroup" Oct 2 19:46:40.174436 kubelet[1444]: I1002 19:46:40.174238 1444 memory_manager.go:345] "RemoveStaleState removing state" podUID="50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" containerName="mount-cgroup" Oct 2 19:46:40.174436 kubelet[1444]: E1002 19:46:40.174276 1444 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" containerName="mount-cgroup" Oct 2 19:46:40.174436 kubelet[1444]: E1002 19:46:40.174283 1444 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" containerName="mount-cgroup" Oct 2 19:46:40.174436 kubelet[1444]: I1002 19:46:40.174295 1444 memory_manager.go:345] "RemoveStaleState removing state" podUID="50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" containerName="mount-cgroup" Oct 2 19:46:40.174436 kubelet[1444]: I1002 19:46:40.174300 1444 memory_manager.go:345] "RemoveStaleState removing state" podUID="50f8cf99-6fc8-4911-8d52-f292c9c5ec4c" containerName="mount-cgroup" Oct 2 19:46:40.179428 systemd[1]: Created slice kubepods-burstable-podc6291c66_0400_4daa_8b7e_fc81f6cd3f2b.slice. 
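The slice removed for the old pod and the slice created for its replacement follow a naming convention that can be read straight off the log: QoS class plus the pod UID with dashes turned into underscores, since "-" acts as the hierarchy separator in systemd unit names. A small sketch of that mapping, inferred from the log itself rather than quoted from kubelet source:

    // podslice.go: sketch of how the kubepods-burstable-pod...slice names
    // above are derived from a pod UID (systemd cgroup driver, Burstable QoS).
    package main

    import (
        "fmt"
        "strings"
    )

    func burstablePodSlice(podUID string) string {
        // Dashes in the UID are replaced with underscores so they are not
        // interpreted as slice-hierarchy separators.
        return "kubepods-burstable-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
        fmt.Println(burstablePodSlice("c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"))
        // kubepods-burstable-podc6291c66_0400_4daa_8b7e_fc81f6cd3f2b.slice
    }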
Oct 2 19:46:40.190233 kubelet[1444]: I1002 19:46:40.190185 1444 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-clustermesh-secrets\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.190233 kubelet[1444]: I1002 19:46:40.190227 1444 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-xtables-lock\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.190233 kubelet[1444]: I1002 19:46:40.190239 1444 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-bpf-maps\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.190413 kubelet[1444]: I1002 19:46:40.190249 1444 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-hubble-tls\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.190413 kubelet[1444]: I1002 19:46:40.190258 1444 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cilium-run\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.190413 kubelet[1444]: I1002 19:46:40.190267 1444 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-host-proc-sys-kernel\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.190413 kubelet[1444]: I1002 19:46:40.190278 1444 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cni-path\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.190413 kubelet[1444]: I1002 19:46:40.190288 1444 reconciler.go:399] "Volume detached for volume \"kube-api-access-mrv5n\" (UniqueName: \"kubernetes.io/projected/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-kube-api-access-mrv5n\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.190413 kubelet[1444]: I1002 19:46:40.190296 1444 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cilium-config-path\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.190413 kubelet[1444]: I1002 19:46:40.190305 1444 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-cilium-cgroup\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.190413 kubelet[1444]: I1002 19:46:40.190314 1444 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-hostproc\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.190413 kubelet[1444]: I1002 19:46:40.190325 1444 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-lib-modules\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.190634 kubelet[1444]: I1002 19:46:40.190334 1444 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c-host-proc-sys-net\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:40.290635 kubelet[1444]: I1002 19:46:40.290569 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-xtables-lock\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290635 kubelet[1444]: I1002 19:46:40.290622 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-bpf-maps\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290635 kubelet[1444]: I1002 19:46:40.290647 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-config-path\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290843 kubelet[1444]: I1002 19:46:40.290667 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-hubble-tls\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290843 kubelet[1444]: I1002 19:46:40.290684 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-run\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290843 kubelet[1444]: I1002 19:46:40.290703 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-etc-cni-netd\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290843 kubelet[1444]: I1002 19:46:40.290722 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-lib-modules\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290843 kubelet[1444]: I1002 19:46:40.290741 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-clustermesh-secrets\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290843 kubelet[1444]: I1002 19:46:40.290760 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-host-proc-sys-net\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290981 kubelet[1444]: I1002 19:46:40.290777 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-hostproc\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290981 kubelet[1444]: I1002 19:46:40.290797 1444 
reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-cgroup\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290981 kubelet[1444]: I1002 19:46:40.290816 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cni-path\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290981 kubelet[1444]: I1002 19:46:40.290840 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-host-proc-sys-kernel\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.290981 kubelet[1444]: I1002 19:46:40.290859 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b8h6q\" (UniqueName: \"kubernetes.io/projected/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-kube-api-access-b8h6q\") pod \"cilium-2mss9\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " pod="kube-system/cilium-2mss9" Oct 2 19:46:40.491560 kubelet[1444]: E1002 19:46:40.490532 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:40.491689 env[1145]: time="2023-10-02T19:46:40.491049750Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-2mss9,Uid:c6291c66-0400-4daa-8b7e-fc81f6cd3f2b,Namespace:kube-system,Attempt:0,}" Oct 2 19:46:40.508781 env[1145]: time="2023-10-02T19:46:40.508704133Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:46:40.508781 env[1145]: time="2023-10-02T19:46:40.508746173Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:46:40.508781 env[1145]: time="2023-10-02T19:46:40.508756813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:46:40.509012 env[1145]: time="2023-10-02T19:46:40.508885493Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3 pid=2030 runtime=io.containerd.runc.v2 Oct 2 19:46:40.521350 systemd[1]: Started cri-containerd-2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3.scope. 
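The audit records that follow show systemd and runc tripping SELinux capability2 checks while the new sandbox starts: capability=38 is CAP_PERFMON and capability=39 is CAP_BPF, and the accompanying "BPF prog-id=... op=LOAD" records are most likely the device-controller eBPF filters runc attaches for the container. The SYSCALL records report success=yes, so these denials appear to be noise here rather than the cause of the failures below. A trivial lookup for the two numbers, purely as a reading aid:

    // capname.go: translates the capability numbers in the AVC records below
    // (values per include/uapi/linux/capability.h); not part of any tool above.
    package main

    import "fmt"

    var capNames = map[int]string{
        38: "CAP_PERFMON",
        39: "CAP_BPF",
    }

    func main() {
        for _, c := range []int{38, 39} {
            fmt.Printf("capability=%d -> %s\n", c, capNames[c])
        }
    }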
Oct 2 19:46:40.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.551220 kernel: audit: type=1400 audit(1696276000.545:667): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.551301 kernel: audit: type=1400 audit(1696276000.545:668): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.551318 kernel: audit: type=1400 audit(1696276000.545:669): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.553520 kernel: audit: type=1400 audit(1696276000.545:670): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.556467 kernel: audit: type=1400 audit(1696276000.545:671): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.559547 kernel: audit: type=1400 audit(1696276000.545:672): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.564575 kernel: audit: type=1400 audit(1696276000.545:673): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.564636 kernel: audit: type=1400 audit(1696276000.545:674): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.545000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.545000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit: BPF prog-id=78 op=LOAD Oct 2 19:46:40.548000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit[2040]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000115b38 a2=10 a3=0 items=0 ppid=2030 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:40.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333463303531393236316639336631316261316262373132316563 Oct 2 19:46:40.548000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit[2040]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001155a0 a2=3c a3=0 items=0 ppid=2030 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:40.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333463303531393236316639336631316261316262373132316563 Oct 2 19:46:40.548000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit[2040]: AVC avc: 
denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.548000 audit: BPF prog-id=79 op=LOAD Oct 2 19:46:40.548000 audit[2040]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001158e0 a2=78 a3=0 items=0 ppid=2030 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:40.548000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333463303531393236316639336631316261316262373132316563 Oct 2 19:46:40.550000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.550000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.550000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.550000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.550000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.550000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.550000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.550000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.550000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.550000 audit: BPF prog-id=80 op=LOAD Oct 2 19:46:40.550000 audit[2040]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000115670 a2=78 
a3=0 items=0 ppid=2030 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:40.550000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333463303531393236316639336631316261316262373132316563 Oct 2 19:46:40.553000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:46:40.553000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:46:40.553000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.553000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.553000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.553000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.553000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.553000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.553000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.553000 audit[2040]: AVC avc: denied { perfmon } for pid=2040 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.553000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.553000 audit[2040]: AVC avc: denied { bpf } for pid=2040 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:40.553000 audit: BPF prog-id=81 op=LOAD Oct 2 19:46:40.553000 audit[2040]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000115b40 a2=78 a3=0 items=0 ppid=2030 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:40.553000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3237333463303531393236316639336631316261316262373132316563 Oct 2 19:46:40.576886 env[1145]: time="2023-10-02T19:46:40.576840662Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:cilium-2mss9,Uid:c6291c66-0400-4daa-8b7e-fc81f6cd3f2b,Namespace:kube-system,Attempt:0,} returns sandbox id \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\"" Oct 2 19:46:40.577423 kubelet[1444]: E1002 19:46:40.577396 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:40.579776 env[1145]: time="2023-10-02T19:46:40.579726506Z" level=info msg="CreateContainer within sandbox \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:46:40.589076 env[1145]: time="2023-10-02T19:46:40.589019718Z" level=info msg="CreateContainer within sandbox \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a\"" Oct 2 19:46:40.589739 env[1145]: time="2023-10-02T19:46:40.589645279Z" level=info msg="StartContainer for \"84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a\"" Oct 2 19:46:40.604705 systemd[1]: Started cri-containerd-84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a.scope. Oct 2 19:46:40.624858 systemd[1]: cri-containerd-84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a.scope: Deactivated successfully. Oct 2 19:46:40.630236 kubelet[1444]: E1002 19:46:40.630196 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:40.638556 env[1145]: time="2023-10-02T19:46:40.638483303Z" level=info msg="shim disconnected" id=84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a Oct 2 19:46:40.638556 env[1145]: time="2023-10-02T19:46:40.638550263Z" level=warning msg="cleaning up after shim disconnected" id=84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a namespace=k8s.io Oct 2 19:46:40.638556 env[1145]: time="2023-10-02T19:46:40.638559503Z" level=info msg="cleaning up dead shim" Oct 2 19:46:40.646781 env[1145]: time="2023-10-02T19:46:40.646741034Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:46:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2091 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:46:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:46:40.647028 env[1145]: time="2023-10-02T19:46:40.646979714Z" level=error msg="copy shim log" error="read /proc/self/fd/28: file already closed" Oct 2 19:46:40.647624 env[1145]: time="2023-10-02T19:46:40.647585835Z" level=error msg="Failed to pipe stdout of container \"84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a\"" error="reading from a closed fifo" Oct 2 19:46:40.647723 env[1145]: time="2023-10-02T19:46:40.647602355Z" level=error msg="Failed to pipe stderr of container \"84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a\"" error="reading from a closed fifo" Oct 2 19:46:40.649095 env[1145]: time="2023-10-02T19:46:40.649046437Z" level=error msg="StartContainer for \"84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:46:40.649295 kubelet[1444]: E1002 19:46:40.649257 1444 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a" Oct 2 19:46:40.649626 kubelet[1444]: E1002 19:46:40.649602 1444 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:46:40.649626 kubelet[1444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:46:40.649626 kubelet[1444]: rm /hostbin/cilium-mount Oct 2 19:46:40.649626 kubelet[1444]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-b8h6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-2mss9_kube-system(c6291c66-0400-4daa-8b7e-fc81f6cd3f2b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:46:40.649788 kubelet[1444]: E1002 19:46:40.649645 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-2mss9" podUID=c6291c66-0400-4daa-8b7e-fc81f6cd3f2b Oct 2 19:46:40.724247 kubelet[1444]: E1002 19:46:40.724206 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:40.781107 
kubelet[1444]: E1002 19:46:40.781031 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:40.827808 kubelet[1444]: I1002 19:46:40.827780 1444 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=50f8cf99-6fc8-4911-8d52-f292c9c5ec4c path="/var/lib/kubelet/pods/50f8cf99-6fc8-4911-8d52-f292c9c5ec4c/volumes" Oct 2 19:46:41.140553 kubelet[1444]: E1002 19:46:41.140162 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:41.142324 env[1145]: time="2023-10-02T19:46:41.142291082Z" level=info msg="CreateContainer within sandbox \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:46:41.156956 env[1145]: time="2023-10-02T19:46:41.156881941Z" level=info msg="CreateContainer within sandbox \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665\"" Oct 2 19:46:41.157905 env[1145]: time="2023-10-02T19:46:41.157875782Z" level=info msg="StartContainer for \"ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665\"" Oct 2 19:46:41.177737 systemd[1]: Started cri-containerd-ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665.scope. Oct 2 19:46:41.196205 systemd[1]: cri-containerd-ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665.scope: Deactivated successfully. Oct 2 19:46:41.204088 env[1145]: time="2023-10-02T19:46:41.203776282Z" level=info msg="shim disconnected" id=ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665 Oct 2 19:46:41.204088 env[1145]: time="2023-10-02T19:46:41.203835802Z" level=warning msg="cleaning up after shim disconnected" id=ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665 namespace=k8s.io Oct 2 19:46:41.204088 env[1145]: time="2023-10-02T19:46:41.203844482Z" level=info msg="cleaning up dead shim" Oct 2 19:46:41.213180 env[1145]: time="2023-10-02T19:46:41.213109534Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:46:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2127 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:46:41Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:46:41.213517 env[1145]: time="2023-10-02T19:46:41.213432134Z" level=error msg="copy shim log" error="read /proc/self/fd/28: file already closed" Oct 2 19:46:41.216873 env[1145]: time="2023-10-02T19:46:41.216788739Z" level=error msg="Failed to pipe stderr of container \"ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665\"" error="reading from a closed fifo" Oct 2 19:46:41.217695 env[1145]: time="2023-10-02T19:46:41.217651020Z" level=error msg="Failed to pipe stdout of container \"ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665\"" error="reading from a closed fifo" Oct 2 19:46:41.219874 env[1145]: time="2023-10-02T19:46:41.219812863Z" level=error msg="StartContainer for \"ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665\" failed" error="failed to create containerd task: failed to create shim task: OCI 
runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:46:41.220093 kubelet[1444]: E1002 19:46:41.220042 1444 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665" Oct 2 19:46:41.220181 kubelet[1444]: E1002 19:46:41.220168 1444 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:46:41.220181 kubelet[1444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:46:41.220181 kubelet[1444]: rm /hostbin/cilium-mount Oct 2 19:46:41.220181 kubelet[1444]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-b8h6q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-2mss9_kube-system(c6291c66-0400-4daa-8b7e-fc81f6cd3f2b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:46:41.220302 kubelet[1444]: E1002 19:46:41.220203 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-2mss9" podUID=c6291c66-0400-4daa-8b7e-fc81f6cd3f2b Oct 2 19:46:41.299281 kubelet[1444]: E1002 19:46:41.299216 1444 configmap.go:197] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found Oct 2 19:46:41.299449 kubelet[1444]: E1002 19:46:41.299295 1444 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-config-path podName:c6291c66-0400-4daa-8b7e-fc81f6cd3f2b nodeName:}" failed. No retries permitted until 2023-10-02 19:46:41.799262406 +0000 UTC m=+201.931629559 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-config-path") pod "cilium-2mss9" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b") : configmap "cilium-config" not found Oct 2 19:46:41.781864 kubelet[1444]: E1002 19:46:41.781812 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:41.801221 kubelet[1444]: E1002 19:46:41.801033 1444 configmap.go:197] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found Oct 2 19:46:41.801221 kubelet[1444]: E1002 19:46:41.801099 1444 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-config-path podName:c6291c66-0400-4daa-8b7e-fc81f6cd3f2b nodeName:}" failed. No retries permitted until 2023-10-02 19:46:42.801084499 +0000 UTC m=+202.933451652 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-config-path") pod "cilium-2mss9" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b") : configmap "cilium-config" not found Oct 2 19:46:41.862479 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665-rootfs.mount: Deactivated successfully. Oct 2 19:46:42.142609 kubelet[1444]: I1002 19:46:42.142451 1444 scope.go:115] "RemoveContainer" containerID="84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a" Oct 2 19:46:42.142609 kubelet[1444]: I1002 19:46:42.142566 1444 scope.go:115] "RemoveContainer" containerID="84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a" Oct 2 19:46:42.143672 env[1145]: time="2023-10-02T19:46:42.143639744Z" level=info msg="RemoveContainer for \"84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a\"" Oct 2 19:46:42.144031 env[1145]: time="2023-10-02T19:46:42.144006705Z" level=info msg="RemoveContainer for \"84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a\"" Oct 2 19:46:42.144098 env[1145]: time="2023-10-02T19:46:42.144077865Z" level=error msg="RemoveContainer for \"84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a\" failed" error="failed to set removing state for container \"84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a\": container is already in removing state" Oct 2 19:46:42.144199 kubelet[1444]: E1002 19:46:42.144185 1444 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a\": container is already in removing state" containerID="84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a" Oct 2 19:46:42.144296 kubelet[1444]: E1002 19:46:42.144212 1444 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a": container is already in removing state; Skipping pod 
"cilium-2mss9_kube-system(c6291c66-0400-4daa-8b7e-fc81f6cd3f2b)" Oct 2 19:46:42.144296 kubelet[1444]: E1002 19:46:42.144269 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:42.144486 kubelet[1444]: E1002 19:46:42.144475 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-2mss9_kube-system(c6291c66-0400-4daa-8b7e-fc81f6cd3f2b)\"" pod="kube-system/cilium-2mss9" podUID=c6291c66-0400-4daa-8b7e-fc81f6cd3f2b Oct 2 19:46:42.145831 env[1145]: time="2023-10-02T19:46:42.145803787Z" level=info msg="RemoveContainer for \"84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a\" returns successfully" Oct 2 19:46:42.781973 kubelet[1444]: E1002 19:46:42.781938 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:42.806251 kubelet[1444]: E1002 19:46:42.806229 1444 configmap.go:197] Couldn't get configMap kube-system/cilium-config: configmap "cilium-config" not found Oct 2 19:46:42.806463 kubelet[1444]: E1002 19:46:42.806443 1444 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-config-path podName:c6291c66-0400-4daa-8b7e-fc81f6cd3f2b nodeName:}" failed. No retries permitted until 2023-10-02 19:46:44.806425442 +0000 UTC m=+204.938792595 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "cilium-config-path" (UniqueName: "kubernetes.io/configmap/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-config-path") pod "cilium-2mss9" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b") : configmap "cilium-config" not found Oct 2 19:46:43.145432 env[1145]: time="2023-10-02T19:46:43.145320999Z" level=info msg="StopPodSandbox for \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\"" Oct 2 19:46:43.145432 env[1145]: time="2023-10-02T19:46:43.145377559Z" level=info msg="Container to stop \"ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:46:43.146693 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3-shm.mount: Deactivated successfully. Oct 2 19:46:43.153044 systemd[1]: cri-containerd-2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3.scope: Deactivated successfully. Oct 2 19:46:43.152000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:46:43.159000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:46:43.173963 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3-rootfs.mount: Deactivated successfully. 
Oct 2 19:46:43.179237 env[1145]: time="2023-10-02T19:46:43.179178682Z" level=info msg="shim disconnected" id=2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3 Oct 2 19:46:43.179237 env[1145]: time="2023-10-02T19:46:43.179232522Z" level=warning msg="cleaning up after shim disconnected" id=2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3 namespace=k8s.io Oct 2 19:46:43.179237 env[1145]: time="2023-10-02T19:46:43.179243562Z" level=info msg="cleaning up dead shim" Oct 2 19:46:43.187127 env[1145]: time="2023-10-02T19:46:43.187084413Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:46:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2159 runtime=io.containerd.runc.v2\n" Oct 2 19:46:43.187426 env[1145]: time="2023-10-02T19:46:43.187401973Z" level=info msg="TearDown network for sandbox \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\" successfully" Oct 2 19:46:43.187467 env[1145]: time="2023-10-02T19:46:43.187426813Z" level=info msg="StopPodSandbox for \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\" returns successfully" Oct 2 19:46:43.308674 kubelet[1444]: I1002 19:46:43.308632 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-xtables-lock\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.308838 kubelet[1444]: I1002 19:46:43.308692 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-hubble-tls\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.308838 kubelet[1444]: I1002 19:46:43.308714 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-cgroup\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.308838 kubelet[1444]: I1002 19:46:43.308730 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cni-path\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.308838 kubelet[1444]: I1002 19:46:43.308748 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-host-proc-sys-kernel\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.308838 kubelet[1444]: I1002 19:46:43.308766 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-bpf-maps\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.308838 kubelet[1444]: I1002 19:46:43.308794 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-config-path\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.308991 kubelet[1444]: I1002 
19:46:43.308816 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b8h6q\" (UniqueName: \"kubernetes.io/projected/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-kube-api-access-b8h6q\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.308991 kubelet[1444]: I1002 19:46:43.308835 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-host-proc-sys-net\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.308991 kubelet[1444]: I1002 19:46:43.308852 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-run\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.308991 kubelet[1444]: I1002 19:46:43.308870 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-etc-cni-netd\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.308991 kubelet[1444]: I1002 19:46:43.308887 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-lib-modules\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.308991 kubelet[1444]: I1002 19:46:43.308908 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-clustermesh-secrets\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.309128 kubelet[1444]: I1002 19:46:43.308923 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-hostproc\") pod \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\" (UID: \"c6291c66-0400-4daa-8b7e-fc81f6cd3f2b\") " Oct 2 19:46:43.309128 kubelet[1444]: I1002 19:46:43.308965 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-hostproc" (OuterVolumeSpecName: "hostproc") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:43.309128 kubelet[1444]: I1002 19:46:43.308631 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:43.309415 kubelet[1444]: I1002 19:46:43.309230 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:43.309415 kubelet[1444]: I1002 19:46:43.309261 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:43.309415 kubelet[1444]: I1002 19:46:43.309278 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:43.309415 kubelet[1444]: I1002 19:46:43.309290 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cni-path" (OuterVolumeSpecName: "cni-path") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:43.309415 kubelet[1444]: I1002 19:46:43.309316 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:43.309603 kubelet[1444]: I1002 19:46:43.309331 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:43.309603 kubelet[1444]: I1002 19:46:43.309380 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:43.309603 kubelet[1444]: I1002 19:46:43.309400 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:46:43.309603 kubelet[1444]: W1002 19:46:43.309415 1444 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:46:43.311003 kubelet[1444]: I1002 19:46:43.310976 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:46:43.312441 systemd[1]: var-lib-kubelet-pods-c6291c66\x2d0400\x2d4daa\x2d8b7e\x2dfc81f6cd3f2b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:46:43.313462 kubelet[1444]: I1002 19:46:43.313428 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:46:43.313891 kubelet[1444]: I1002 19:46:43.313866 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-kube-api-access-b8h6q" (OuterVolumeSpecName: "kube-api-access-b8h6q") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "kube-api-access-b8h6q". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:46:43.314091 systemd[1]: var-lib-kubelet-pods-c6291c66\x2d0400\x2d4daa\x2d8b7e\x2dfc81f6cd3f2b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2db8h6q.mount: Deactivated successfully. Oct 2 19:46:43.315757 kubelet[1444]: I1002 19:46:43.315728 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" (UID: "c6291c66-0400-4daa-8b7e-fc81f6cd3f2b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:46:43.316066 systemd[1]: var-lib-kubelet-pods-c6291c66\x2d0400\x2d4daa\x2d8b7e\x2dfc81f6cd3f2b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 19:46:43.409793 kubelet[1444]: I1002 19:46:43.409025 1444 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-run\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.409793 kubelet[1444]: I1002 19:46:43.409076 1444 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-etc-cni-netd\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.409793 kubelet[1444]: I1002 19:46:43.409095 1444 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-lib-modules\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.409793 kubelet[1444]: I1002 19:46:43.409115 1444 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-clustermesh-secrets\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.409793 kubelet[1444]: I1002 19:46:43.409133 1444 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-hostproc\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.409793 kubelet[1444]: I1002 19:46:43.409149 1444 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-xtables-lock\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.409793 kubelet[1444]: I1002 19:46:43.409165 1444 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-hubble-tls\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.409793 kubelet[1444]: I1002 19:46:43.409182 1444 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-cgroup\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.409793 kubelet[1444]: I1002 19:46:43.409198 1444 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cni-path\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.410434 kubelet[1444]: I1002 19:46:43.409216 1444 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-host-proc-sys-kernel\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.410434 kubelet[1444]: I1002 19:46:43.409232 1444 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-bpf-maps\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.410434 kubelet[1444]: I1002 19:46:43.409249 1444 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-cilium-config-path\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.410434 kubelet[1444]: I1002 19:46:43.409264 1444 reconciler.go:399] "Volume detached for volume \"kube-api-access-b8h6q\" (UniqueName: \"kubernetes.io/projected/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-kube-api-access-b8h6q\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.410434 kubelet[1444]: I1002 19:46:43.409275 1444 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: 
\"kubernetes.io/host-path/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b-host-proc-sys-net\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:46:43.743266 kubelet[1444]: W1002 19:46:43.743162 1444 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6291c66_0400_4daa_8b7e_fc81f6cd3f2b.slice/cri-containerd-84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a.scope WatchSource:0}: container "84a6467f739504fbecb8faf42ed6ff9df1ddc8d8b2af2281a8252a44c1ff7b8a" in namespace "k8s.io": not found Oct 2 19:46:43.782459 kubelet[1444]: E1002 19:46:43.782433 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:44.147622 kubelet[1444]: I1002 19:46:44.147593 1444 scope.go:115] "RemoveContainer" containerID="ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665" Oct 2 19:46:44.152089 systemd[1]: Removed slice kubepods-burstable-podc6291c66_0400_4daa_8b7e_fc81f6cd3f2b.slice. Oct 2 19:46:44.154045 env[1145]: time="2023-10-02T19:46:44.154006975Z" level=info msg="RemoveContainer for \"ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665\"" Oct 2 19:46:44.154840 kubelet[1444]: I1002 19:46:44.154811 1444 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:46:44.154987 kubelet[1444]: E1002 19:46:44.154969 1444 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" containerName="mount-cgroup" Oct 2 19:46:44.155062 kubelet[1444]: I1002 19:46:44.155053 1444 memory_manager.go:345] "RemoveStaleState removing state" podUID="c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" containerName="mount-cgroup" Oct 2 19:46:44.156660 env[1145]: time="2023-10-02T19:46:44.156628058Z" level=info msg="RemoveContainer for \"ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665\" returns successfully" Oct 2 19:46:44.159150 systemd[1]: Created slice kubepods-besteffort-pod72d550b3_3cce_4453_be65_50b5b87d174b.slice. Oct 2 19:46:44.313515 kubelet[1444]: I1002 19:46:44.313440 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72d550b3-3cce-4453-be65-50b5b87d174b-cilium-config-path\") pod \"cilium-operator-69b677f97c-69gtg\" (UID: \"72d550b3-3cce-4453-be65-50b5b87d174b\") " pod="kube-system/cilium-operator-69b677f97c-69gtg" Oct 2 19:46:44.313515 kubelet[1444]: I1002 19:46:44.313486 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p4dm6\" (UniqueName: \"kubernetes.io/projected/72d550b3-3cce-4453-be65-50b5b87d174b-kube-api-access-p4dm6\") pod \"cilium-operator-69b677f97c-69gtg\" (UID: \"72d550b3-3cce-4453-be65-50b5b87d174b\") " pod="kube-system/cilium-operator-69b677f97c-69gtg" Oct 2 19:46:44.461492 kubelet[1444]: E1002 19:46:44.461389 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:44.462134 env[1145]: time="2023-10-02T19:46:44.462093129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-69gtg,Uid:72d550b3-3cce-4453-be65-50b5b87d174b,Namespace:kube-system,Attempt:0,}" Oct 2 19:46:44.478634 env[1145]: time="2023-10-02T19:46:44.478571390Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:46:44.478810 env[1145]: time="2023-10-02T19:46:44.478643110Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:46:44.478810 env[1145]: time="2023-10-02T19:46:44.478668470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:46:44.478879 env[1145]: time="2023-10-02T19:46:44.478818510Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0 pid=2183 runtime=io.containerd.runc.v2 Oct 2 19:46:44.496584 systemd[1]: run-containerd-runc-k8s.io-daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0-runc.KfKqHQ.mount: Deactivated successfully. Oct 2 19:46:44.498853 systemd[1]: Started cri-containerd-daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0.scope. Oct 2 19:46:44.513000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.513000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.513000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit: BPF prog-id=82 op=LOAD Oct 2 19:46:44.514000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[2193]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=2183 pid=2193 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:44.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461646564343838326433383332333561633430653563393730323436 Oct 2 19:46:44.514000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[2193]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=2183 pid=2193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:44.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461646564343838326433383332333561633430653563393730323436 Oct 2 19:46:44.514000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.514000 audit: BPF prog-id=83 op=LOAD Oct 2 19:46:44.514000 audit[2193]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=2183 pid=2193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:44.514000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461646564343838326433383332333561633430653563393730323436 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit: BPF prog-id=84 op=LOAD Oct 2 19:46:44.515000 audit[2193]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=2183 pid=2193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:44.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461646564343838326433383332333561633430653563393730323436 Oct 2 19:46:44.515000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:46:44.515000 audit: BPF prog-id=83 op=UNLOAD Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { bpf } for 
pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { perfmon } for pid=2193 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit[2193]: AVC avc: denied { bpf } for pid=2193 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:44.515000 audit: BPF prog-id=85 op=LOAD Oct 2 19:46:44.515000 audit[2193]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=2183 pid=2193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:44.515000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6461646564343838326433383332333561633430653563393730323436 Oct 2 19:46:44.533979 env[1145]: time="2023-10-02T19:46:44.533937380Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-69gtg,Uid:72d550b3-3cce-4453-be65-50b5b87d174b,Namespace:kube-system,Attempt:0,} returns sandbox id \"daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0\"" Oct 2 19:46:44.534930 kubelet[1444]: E1002 19:46:44.534905 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:44.535738 env[1145]: time="2023-10-02T19:46:44.535702023Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:46:44.783432 kubelet[1444]: E1002 19:46:44.783333 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:44.825601 env[1145]: 
time="2023-10-02T19:46:44.825561393Z" level=info msg="StopPodSandbox for \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\"" Oct 2 19:46:44.825837 env[1145]: time="2023-10-02T19:46:44.825793754Z" level=info msg="TearDown network for sandbox \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\" successfully" Oct 2 19:46:44.825901 env[1145]: time="2023-10-02T19:46:44.825886874Z" level=info msg="StopPodSandbox for \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\" returns successfully" Oct 2 19:46:44.826917 kubelet[1444]: I1002 19:46:44.826883 1444 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=c6291c66-0400-4daa-8b7e-fc81f6cd3f2b path="/var/lib/kubelet/pods/c6291c66-0400-4daa-8b7e-fc81f6cd3f2b/volumes" Oct 2 19:46:45.270728 kubelet[1444]: I1002 19:46:45.269920 1444 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:46:45.270728 kubelet[1444]: E1002 19:46:45.269970 1444 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" containerName="mount-cgroup" Oct 2 19:46:45.270728 kubelet[1444]: I1002 19:46:45.269999 1444 memory_manager.go:345] "RemoveStaleState removing state" podUID="c6291c66-0400-4daa-8b7e-fc81f6cd3f2b" containerName="mount-cgroup" Oct 2 19:46:45.284121 systemd[1]: Created slice kubepods-burstable-pod0c322d12_c8ea_4043_9066_d1a85a7c83ad.slice. Oct 2 19:46:45.419824 kubelet[1444]: I1002 19:46:45.419758 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-etc-cni-netd\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.419824 kubelet[1444]: I1002 19:46:45.419810 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c322d12-c8ea-4043-9066-d1a85a7c83ad-clustermesh-secrets\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.419824 kubelet[1444]: I1002 19:46:45.419832 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8gff\" (UniqueName: \"kubernetes.io/projected/0c322d12-c8ea-4043-9066-d1a85a7c83ad-kube-api-access-r8gff\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.420034 kubelet[1444]: I1002 19:46:45.419855 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-run\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.420034 kubelet[1444]: I1002 19:46:45.419878 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-bpf-maps\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.420034 kubelet[1444]: I1002 19:46:45.419898 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cni-path\") pod \"cilium-7cxv8\" (UID: 
\"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.420034 kubelet[1444]: I1002 19:46:45.419918 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-hostproc\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.420034 kubelet[1444]: I1002 19:46:45.419938 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-xtables-lock\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.420034 kubelet[1444]: I1002 19:46:45.419956 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-ipsec-secrets\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.420173 kubelet[1444]: I1002 19:46:45.419990 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-host-proc-sys-net\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.420173 kubelet[1444]: I1002 19:46:45.420009 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-host-proc-sys-kernel\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.420173 kubelet[1444]: I1002 19:46:45.420027 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-cgroup\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.420173 kubelet[1444]: I1002 19:46:45.420049 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c322d12-c8ea-4043-9066-d1a85a7c83ad-hubble-tls\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.420173 kubelet[1444]: I1002 19:46:45.420068 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-lib-modules\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.420173 kubelet[1444]: I1002 19:46:45.420086 1444 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-config-path\") pod \"cilium-7cxv8\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " pod="kube-system/cilium-7cxv8" Oct 2 19:46:45.600819 kubelet[1444]: E1002 19:46:45.600496 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been 
omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:45.601540 env[1145]: time="2023-10-02T19:46:45.601476461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7cxv8,Uid:0c322d12-c8ea-4043-9066-d1a85a7c83ad,Namespace:kube-system,Attempt:0,}" Oct 2 19:46:45.613770 env[1145]: time="2023-10-02T19:46:45.613718957Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:46:45.613891 env[1145]: time="2023-10-02T19:46:45.613758597Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:46:45.613891 env[1145]: time="2023-10-02T19:46:45.613768957Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:46:45.613989 env[1145]: time="2023-10-02T19:46:45.613920877Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf pid=2226 runtime=io.containerd.runc.v2 Oct 2 19:46:45.625015 systemd[1]: Started cri-containerd-4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf.scope. Oct 2 19:46:45.645000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.647037 kernel: kauditd_printk_skb: 108 callbacks suppressed Oct 2 19:46:45.647088 kernel: audit: type=1400 audit(1696276005.645:705): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.645000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.651920 kernel: audit: type=1400 audit(1696276005.645:706): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.651960 kernel: audit: type=1400 audit(1696276005.645:707): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.645000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.645000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.656420 kernel: audit: type=1400 audit(1696276005.645:708): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.656472 kernel: audit: type=1400 audit(1696276005.645:709): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.645000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.645000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.661084 kernel: audit: type=1400 audit(1696276005.645:710): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.661123 kernel: audit: type=1400 audit(1696276005.645:711): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.645000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.663395 kernel: audit: type=1400 audit(1696276005.645:712): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.645000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.645000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.668429 kernel: audit: type=1400 audit(1696276005.645:713): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.668486 kernel: audit: type=1400 audit(1696276005.646:714): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.646000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.646000 audit: BPF prog-id=86 op=LOAD Oct 2 19:46:45.658000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.658000 audit[2236]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000149b38 a2=10 a3=0 items=0 ppid=2226 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:45.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462623434393331323664313634363234316237643338303964313433 Oct 2 19:46:45.658000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.658000 audit[2236]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=16 a0=0 a1=40001495a0 a2=3c a3=0 items=0 ppid=2226 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:45.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462623434393331323664313634363234316237643338303964313433 Oct 2 19:46:45.658000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.658000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.658000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.658000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.658000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.658000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.658000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.658000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.658000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.658000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.658000 audit: BPF prog-id=87 op=LOAD Oct 2 19:46:45.658000 audit[2236]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001498e0 a2=78 a3=0 items=0 ppid=2226 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:45.658000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462623434393331323664313634363234316237643338303964313433 Oct 2 19:46:45.661000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.661000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.661000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.661000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.661000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.661000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.661000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.661000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.661000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.661000 audit: BPF prog-id=88 op=LOAD Oct 2 19:46:45.661000 audit[2236]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000149670 a2=78 a3=0 items=0 ppid=2226 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:45.661000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462623434393331323664313634363234316237643338303964313433 Oct 2 19:46:45.663000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:46:45.663000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:46:45.663000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.663000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.663000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.663000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.663000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.663000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.663000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.663000 audit[2236]: AVC avc: denied { perfmon } for pid=2236 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.663000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.663000 audit[2236]: AVC avc: denied { bpf } for pid=2236 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:45.663000 audit: BPF prog-id=89 op=LOAD Oct 2 19:46:45.663000 audit[2236]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000149b40 a2=78 a3=0 items=0 ppid=2226 pid=2236 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:45.663000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3462623434393331323664313634363234316237643338303964313433 Oct 2 19:46:45.681351 env[1145]: time="2023-10-02T19:46:45.681306122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7cxv8,Uid:0c322d12-c8ea-4043-9066-d1a85a7c83ad,Namespace:kube-system,Attempt:0,} returns sandbox id \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\"" Oct 2 19:46:45.682091 kubelet[1444]: E1002 19:46:45.682073 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:45.683565 env[1145]: time="2023-10-02T19:46:45.683535085Z" level=info msg="CreateContainer within sandbox \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:46:45.692961 env[1145]: time="2023-10-02T19:46:45.692914337Z" level=info msg="CreateContainer within sandbox \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb\"" Oct 2 19:46:45.693431 env[1145]: time="2023-10-02T19:46:45.693397658Z" level=info msg="StartContainer for \"5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb\"" Oct 2 19:46:45.707203 systemd[1]: Started cri-containerd-5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb.scope. Oct 2 19:46:45.725703 systemd[1]: cri-containerd-5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb.scope: Deactivated successfully. 
Oct 2 19:46:45.725926 kubelet[1444]: E1002 19:46:45.725896 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:45.783967 kubelet[1444]: E1002 19:46:45.783914 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:45.806809 env[1145]: time="2023-10-02T19:46:45.806752402Z" level=info msg="shim disconnected" id=5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb Oct 2 19:46:45.806809 env[1145]: time="2023-10-02T19:46:45.806804362Z" level=warning msg="cleaning up after shim disconnected" id=5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb namespace=k8s.io Oct 2 19:46:45.806809 env[1145]: time="2023-10-02T19:46:45.806813722Z" level=info msg="cleaning up dead shim" Oct 2 19:46:45.814440 env[1145]: time="2023-10-02T19:46:45.814392772Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:46:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2284 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:46:45Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:46:45.814727 env[1145]: time="2023-10-02T19:46:45.814666932Z" level=error msg="copy shim log" error="read /proc/self/fd/50: file already closed" Oct 2 19:46:45.814886 env[1145]: time="2023-10-02T19:46:45.814841092Z" level=error msg="Failed to pipe stdout of container \"5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb\"" error="reading from a closed fifo" Oct 2 19:46:45.814932 env[1145]: time="2023-10-02T19:46:45.814872932Z" level=error msg="Failed to pipe stderr of container \"5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb\"" error="reading from a closed fifo" Oct 2 19:46:45.816561 env[1145]: time="2023-10-02T19:46:45.816519574Z" level=error msg="StartContainer for \"5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:46:45.816733 kubelet[1444]: E1002 19:46:45.816710 1444 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb" Oct 2 19:46:45.816890 kubelet[1444]: E1002 19:46:45.816866 1444 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:46:45.816890 kubelet[1444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:46:45.816890 kubelet[1444]: rm /hostbin/cilium-mount Oct 2 19:46:45.816890 kubelet[1444]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r8gff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:46:45.817050 kubelet[1444]: E1002 19:46:45.816919 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7cxv8" podUID=0c322d12-c8ea-4043-9066-d1a85a7c83ad Oct 2 19:46:45.944921 env[1145]: time="2023-10-02T19:46:45.943329656Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:46:45.946167 env[1145]: time="2023-10-02T19:46:45.946133339Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:46:45.950279 env[1145]: time="2023-10-02T19:46:45.950243464Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:46:45.951385 env[1145]: time="2023-10-02T19:46:45.951338146Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:e0bfc5d64e2c86e8497f9da5fbf169dc17a08c923bc75187d41ff880cb71c12f\"" Oct 2 19:46:45.953221 env[1145]: time="2023-10-02T19:46:45.953189348Z" level=info msg="CreateContainer within sandbox \"daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0\" for container 
&ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:46:45.962604 env[1145]: time="2023-10-02T19:46:45.962569000Z" level=info msg="CreateContainer within sandbox \"daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\"" Oct 2 19:46:45.963079 env[1145]: time="2023-10-02T19:46:45.963030961Z" level=info msg="StartContainer for \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\"" Oct 2 19:46:45.978423 systemd[1]: Started cri-containerd-531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258.scope. Oct 2 19:46:46.000000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.000000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.000000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.000000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.000000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.000000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.000000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.000000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.000000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit: BPF prog-id=90 op=LOAD Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001c5b38 a2=10 a3=0 items=0 ppid=2183 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:46.001000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533316233616564373265666563636330303638356665613036666462 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001c55a0 a2=3c a3=0 items=0 ppid=2183 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:46.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533316233616564373265666563636330303638356665613036666462 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit: BPF prog-id=91 op=LOAD Oct 2 19:46:46.001000 audit[2304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c58e0 a2=78 a3=0 items=0 ppid=2183 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:46.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533316233616564373265666563636330303638356665613036666462 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit: BPF prog-id=92 op=LOAD Oct 2 19:46:46.001000 audit[2304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001c5670 a2=78 a3=0 items=0 ppid=2183 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:46.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533316233616564373265666563636330303638356665613036666462 Oct 2 19:46:46.001000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:46:46.001000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: 
denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { perfmon } for pid=2304 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit[2304]: AVC avc: denied { bpf } for pid=2304 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:46:46.001000 audit: BPF prog-id=93 op=LOAD Oct 2 19:46:46.001000 audit[2304]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c5b40 a2=78 a3=0 items=0 ppid=2183 pid=2304 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:46:46.001000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3533316233616564373265666563636330303638356665613036666462 Oct 2 19:46:46.012842 env[1145]: time="2023-10-02T19:46:46.012802864Z" level=info msg="StartContainer for \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\" returns successfully" Oct 2 19:46:46.078000 audit[2314]: AVC avc: denied { map_create } for pid=2314 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c317,c681 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c317,c681 tclass=bpf permissive=0 Oct 2 19:46:46.078000 audit[2314]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=400068f768 a2=48 a3=0 items=0 ppid=2183 pid=2314 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c317,c681 key=(null) Oct 2 19:46:46.078000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:46:46.153659 kubelet[1444]: E1002 19:46:46.153633 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 
1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:46.154820 kubelet[1444]: E1002 19:46:46.154797 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:46.156653 env[1145]: time="2023-10-02T19:46:46.156618046Z" level=info msg="CreateContainer within sandbox \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:46:46.166675 env[1145]: time="2023-10-02T19:46:46.166633458Z" level=info msg="CreateContainer within sandbox \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83\"" Oct 2 19:46:46.167443 env[1145]: time="2023-10-02T19:46:46.167390059Z" level=info msg="StartContainer for \"a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83\"" Oct 2 19:46:46.184877 systemd[1]: Started cri-containerd-a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83.scope. Oct 2 19:46:46.203903 systemd[1]: cri-containerd-a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83.scope: Deactivated successfully. Oct 2 19:46:46.262813 env[1145]: time="2023-10-02T19:46:46.262759500Z" level=info msg="shim disconnected" id=a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83 Oct 2 19:46:46.262813 env[1145]: time="2023-10-02T19:46:46.262811660Z" level=warning msg="cleaning up after shim disconnected" id=a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83 namespace=k8s.io Oct 2 19:46:46.262813 env[1145]: time="2023-10-02T19:46:46.262822060Z" level=info msg="cleaning up dead shim" Oct 2 19:46:46.270469 env[1145]: time="2023-10-02T19:46:46.270416349Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:46:46Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2359 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:46:46Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:46:46.270737 env[1145]: time="2023-10-02T19:46:46.270678070Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:46:46.270908 env[1145]: time="2023-10-02T19:46:46.270865270Z" level=error msg="Failed to pipe stderr of container \"a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83\"" error="reading from a closed fifo" Oct 2 19:46:46.270979 env[1145]: time="2023-10-02T19:46:46.270948550Z" level=error msg="Failed to pipe stdout of container \"a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83\"" error="reading from a closed fifo" Oct 2 19:46:46.272429 env[1145]: time="2023-10-02T19:46:46.272384952Z" level=error msg="StartContainer for \"a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:46:46.272651 kubelet[1444]: E1002 19:46:46.272625 1444 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed 
to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83" Oct 2 19:46:46.273106 kubelet[1444]: E1002 19:46:46.272910 1444 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:46:46.273106 kubelet[1444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:46:46.273106 kubelet[1444]: rm /hostbin/cilium-mount Oct 2 19:46:46.273106 kubelet[1444]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r8gff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:46:46.273264 kubelet[1444]: E1002 19:46:46.272964 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7cxv8" podUID=0c322d12-c8ea-4043-9066-d1a85a7c83ad Oct 2 19:46:46.784450 kubelet[1444]: E1002 19:46:46.784410 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:46.855390 kubelet[1444]: W1002 19:46:46.855304 1444 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc6291c66_0400_4daa_8b7e_fc81f6cd3f2b.slice/cri-containerd-ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665.scope WatchSource:0}: container "ce660fc8f893da6a29e84c5e2cdf03fb334f77c01a0fbb3ae93119d2dd0e9665" in namespace "k8s.io": not 
found Oct 2 19:46:47.158247 kubelet[1444]: I1002 19:46:47.157743 1444 scope.go:115] "RemoveContainer" containerID="5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb" Oct 2 19:46:47.158247 kubelet[1444]: I1002 19:46:47.157975 1444 scope.go:115] "RemoveContainer" containerID="5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb" Oct 2 19:46:47.158408 kubelet[1444]: E1002 19:46:47.158396 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:47.159229 env[1145]: time="2023-10-02T19:46:47.159184072Z" level=info msg="RemoveContainer for \"5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb\"" Oct 2 19:46:47.160027 env[1145]: time="2023-10-02T19:46:47.159854233Z" level=info msg="RemoveContainer for \"5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb\"" Oct 2 19:46:47.160111 env[1145]: time="2023-10-02T19:46:47.160080153Z" level=error msg="RemoveContainer for \"5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb\" failed" error="failed to set removing state for container \"5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb\": container is already in removing state" Oct 2 19:46:47.160282 kubelet[1444]: E1002 19:46:47.160264 1444 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb\": container is already in removing state" containerID="5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb" Oct 2 19:46:47.160329 kubelet[1444]: E1002 19:46:47.160299 1444 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb": container is already in removing state; Skipping pod "cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad)" Oct 2 19:46:47.160371 kubelet[1444]: E1002 19:46:47.160349 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:47.160566 kubelet[1444]: E1002 19:46:47.160547 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad)\"" pod="kube-system/cilium-7cxv8" podUID=0c322d12-c8ea-4043-9066-d1a85a7c83ad Oct 2 19:46:47.161827 env[1145]: time="2023-10-02T19:46:47.161796475Z" level=info msg="RemoveContainer for \"5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb\" returns successfully" Oct 2 19:46:47.785059 kubelet[1444]: E1002 19:46:47.785022 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:48.160808 kubelet[1444]: E1002 19:46:48.160598 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:46:48.160808 kubelet[1444]: E1002 19:46:48.160793 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s 
restarting failed container=mount-cgroup pod=cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad)\"" pod="kube-system/cilium-7cxv8" podUID=0c322d12-c8ea-4043-9066-d1a85a7c83ad Oct 2 19:46:48.785833 kubelet[1444]: E1002 19:46:48.785789 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:49.786545 kubelet[1444]: E1002 19:46:49.786485 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:49.965513 kubelet[1444]: W1002 19:46:49.965473 1444 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c322d12_c8ea_4043_9066_d1a85a7c83ad.slice/cri-containerd-5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb.scope WatchSource:0}: container "5d09974fdb5e6f703ff2b4665962ec3952d5c3f12c9d11dfcd3db9c53bafa3eb" in namespace "k8s.io": not found Oct 2 19:46:50.726728 kubelet[1444]: E1002 19:46:50.726692 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:50.787220 kubelet[1444]: E1002 19:46:50.787184 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:51.788587 kubelet[1444]: E1002 19:46:51.788545 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:52.788761 kubelet[1444]: E1002 19:46:52.788675 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:53.074345 kubelet[1444]: W1002 19:46:53.074110 1444 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c322d12_c8ea_4043_9066_d1a85a7c83ad.slice/cri-containerd-a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83.scope WatchSource:0}: task a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83 not found: not found Oct 2 19:46:53.789057 kubelet[1444]: E1002 19:46:53.789000 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:54.790046 kubelet[1444]: E1002 19:46:54.790001 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:55.727270 kubelet[1444]: E1002 19:46:55.727231 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:46:55.791097 kubelet[1444]: E1002 19:46:55.791048 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:56.791593 kubelet[1444]: E1002 19:46:56.791536 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:57.792002 kubelet[1444]: E1002 19:46:57.791939 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:46:58.793238 kubelet[1444]: E1002 19:46:58.793167 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:46:59.793761 kubelet[1444]: E1002 19:46:59.793706 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:00.630075 kubelet[1444]: E1002 19:47:00.630014 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:00.728558 kubelet[1444]: E1002 19:47:00.728525 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:00.794770 kubelet[1444]: E1002 19:47:00.794726 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:01.795610 kubelet[1444]: E1002 19:47:01.795561 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:01.824677 kubelet[1444]: E1002 19:47:01.824636 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:01.826824 env[1145]: time="2023-10-02T19:47:01.826787024Z" level=info msg="CreateContainer within sandbox \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:47:01.835408 env[1145]: time="2023-10-02T19:47:01.835368887Z" level=info msg="CreateContainer within sandbox \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066\"" Oct 2 19:47:01.836031 env[1145]: time="2023-10-02T19:47:01.836005052Z" level=info msg="StartContainer for \"0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066\"" Oct 2 19:47:01.852678 systemd[1]: Started cri-containerd-0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066.scope. Oct 2 19:47:01.870341 systemd[1]: cri-containerd-0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066.scope: Deactivated successfully. Oct 2 19:47:01.873373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066-rootfs.mount: Deactivated successfully. 
Oct 2 19:47:01.879959 env[1145]: time="2023-10-02T19:47:01.879912894Z" level=info msg="shim disconnected" id=0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066 Oct 2 19:47:01.880162 env[1145]: time="2023-10-02T19:47:01.880143696Z" level=warning msg="cleaning up after shim disconnected" id=0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066 namespace=k8s.io Oct 2 19:47:01.880228 env[1145]: time="2023-10-02T19:47:01.880216336Z" level=info msg="cleaning up dead shim" Oct 2 19:47:01.888933 env[1145]: time="2023-10-02T19:47:01.888896320Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2398 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:47:01Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:47:01.889303 env[1145]: time="2023-10-02T19:47:01.889246843Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:47:01.889517 env[1145]: time="2023-10-02T19:47:01.889455644Z" level=error msg="Failed to pipe stdout of container \"0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066\"" error="reading from a closed fifo" Oct 2 19:47:01.889646 env[1145]: time="2023-10-02T19:47:01.889613605Z" level=error msg="Failed to pipe stderr of container \"0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066\"" error="reading from a closed fifo" Oct 2 19:47:01.891181 env[1145]: time="2023-10-02T19:47:01.891139297Z" level=error msg="StartContainer for \"0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:47:01.891452 kubelet[1444]: E1002 19:47:01.891382 1444 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066" Oct 2 19:47:01.891602 kubelet[1444]: E1002 19:47:01.891583 1444 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:47:01.891602 kubelet[1444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:47:01.891602 kubelet[1444]: rm /hostbin/cilium-mount Oct 2 19:47:01.891602 kubelet[1444]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r8gff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:47:01.891754 kubelet[1444]: E1002 19:47:01.891628 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7cxv8" podUID=0c322d12-c8ea-4043-9066-d1a85a7c83ad Oct 2 19:47:02.180105 kubelet[1444]: I1002 19:47:02.179412 1444 scope.go:115] "RemoveContainer" containerID="a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83" Oct 2 19:47:02.180454 kubelet[1444]: I1002 19:47:02.180425 1444 scope.go:115] "RemoveContainer" containerID="a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83" Oct 2 19:47:02.181664 env[1145]: time="2023-10-02T19:47:02.181629814Z" level=info msg="RemoveContainer for \"a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83\"" Oct 2 19:47:02.182717 env[1145]: time="2023-10-02T19:47:02.181850976Z" level=info msg="RemoveContainer for \"a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83\"" Oct 2 19:47:02.182960 env[1145]: time="2023-10-02T19:47:02.182922103Z" level=error msg="RemoveContainer for \"a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83\" failed" error="failed to set removing state for container \"a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83\": container is already in removing state" Oct 2 19:47:02.183285 kubelet[1444]: E1002 19:47:02.183259 1444 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83\": container is already in removing state" 
containerID="a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83" Oct 2 19:47:02.183285 kubelet[1444]: E1002 19:47:02.183289 1444 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83": container is already in removing state; Skipping pod "cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad)" Oct 2 19:47:02.183408 kubelet[1444]: E1002 19:47:02.183360 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:02.183633 kubelet[1444]: E1002 19:47:02.183574 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad)\"" pod="kube-system/cilium-7cxv8" podUID=0c322d12-c8ea-4043-9066-d1a85a7c83ad Oct 2 19:47:02.185122 env[1145]: time="2023-10-02T19:47:02.185091319Z" level=info msg="RemoveContainer for \"a3a4574a3d35f6b0fe0fda129a146d716438349df657bce334d6528a0436ed83\" returns successfully" Oct 2 19:47:02.796139 kubelet[1444]: E1002 19:47:02.796099 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:03.796951 kubelet[1444]: E1002 19:47:03.796907 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:04.797538 kubelet[1444]: E1002 19:47:04.797483 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:04.984900 kubelet[1444]: W1002 19:47:04.984858 1444 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c322d12_c8ea_4043_9066_d1a85a7c83ad.slice/cri-containerd-0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066.scope WatchSource:0}: task 0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066 not found: not found Oct 2 19:47:05.729466 kubelet[1444]: E1002 19:47:05.729430 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:05.797883 kubelet[1444]: E1002 19:47:05.797851 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:06.798574 kubelet[1444]: E1002 19:47:06.798534 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:07.799250 kubelet[1444]: E1002 19:47:07.799199 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:08.429011 update_engine[1134]: I1002 19:47:08.428937 1134 prefs.cc:51] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 2 19:47:08.429011 update_engine[1134]: I1002 19:47:08.428993 1134 prefs.cc:51] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 2 19:47:08.429593 update_engine[1134]: I1002 19:47:08.429514 1134 prefs.cc:51] aleph-version not present in /var/lib/update_engine/prefs Oct 2 
19:47:08.429823 update_engine[1134]: I1002 19:47:08.429805 1134 omaha_request_params.cc:62] Current group set to lts Oct 2 19:47:08.430049 update_engine[1134]: I1002 19:47:08.429934 1134 update_attempter.cc:495] Already updated boot flags. Skipping. Oct 2 19:47:08.430049 update_engine[1134]: I1002 19:47:08.429940 1134 update_attempter.cc:638] Scheduling an action processor start. Oct 2 19:47:08.430049 update_engine[1134]: I1002 19:47:08.429955 1134 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 2 19:47:08.430049 update_engine[1134]: I1002 19:47:08.429975 1134 prefs.cc:51] previous-version not present in /var/lib/update_engine/prefs Oct 2 19:47:08.430343 update_engine[1134]: I1002 19:47:08.430327 1134 omaha_request_action.cc:268] Posting an Omaha request to https://public.update.flatcar-linux.net/v1/update/ Oct 2 19:47:08.430343 update_engine[1134]: I1002 19:47:08.430341 1134 omaha_request_action.cc:269] Request: Oct 2 19:47:08.430343 update_engine[1134]: Oct 2 19:47:08.430343 update_engine[1134]: Oct 2 19:47:08.430343 update_engine[1134]: Oct 2 19:47:08.430343 update_engine[1134]: Oct 2 19:47:08.430343 update_engine[1134]: Oct 2 19:47:08.430343 update_engine[1134]: Oct 2 19:47:08.430343 update_engine[1134]: Oct 2 19:47:08.430343 update_engine[1134]: Oct 2 19:47:08.430579 update_engine[1134]: I1002 19:47:08.430346 1134 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 2 19:47:08.430604 locksmithd[1171]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 2 19:47:08.431342 update_engine[1134]: I1002 19:47:08.431314 1134 libcurl_http_fetcher.cc:174] Setting up curl options for HTTPS Oct 2 19:47:08.431492 update_engine[1134]: I1002 19:47:08.431474 1134 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 2 19:47:08.800093 kubelet[1444]: E1002 19:47:08.800047 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:09.590119 update_engine[1134]: I1002 19:47:09.590056 1134 prefs.cc:51] update-server-cert-0-2 not present in /var/lib/update_engine/prefs Oct 2 19:47:09.590440 update_engine[1134]: I1002 19:47:09.590306 1134 prefs.cc:51] update-server-cert-0-1 not present in /var/lib/update_engine/prefs Oct 2 19:47:09.590528 update_engine[1134]: I1002 19:47:09.590477 1134 prefs.cc:51] update-server-cert-0-0 not present in /var/lib/update_engine/prefs Oct 2 19:47:09.800716 kubelet[1444]: E1002 19:47:09.800653 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:09.901307 update_engine[1134]: I1002 19:47:09.900949 1134 libcurl_http_fetcher.cc:263] HTTP response code: 200 Oct 2 19:47:09.902587 update_engine[1134]: I1002 19:47:09.902555 1134 libcurl_http_fetcher.cc:320] Transfer completed (200), 314 bytes downloaded Oct 2 19:47:09.902587 update_engine[1134]: I1002 19:47:09.902580 1134 omaha_request_action.cc:619] Omaha request response: Oct 2 19:47:09.902587 update_engine[1134]: Oct 2 19:47:09.910740 update_engine[1134]: I1002 19:47:09.910702 1134 omaha_request_action.cc:409] No update. Oct 2 19:47:09.910740 update_engine[1134]: I1002 19:47:09.910732 1134 action_processor.cc:82] ActionProcessor::ActionComplete: finished OmahaRequestAction, starting OmahaResponseHandlerAction Oct 2 19:47:09.910740 update_engine[1134]: I1002 19:47:09.910737 1134 omaha_response_handler_action.cc:36] There are no updates. Aborting. 
Oct 2 19:47:09.910740 update_engine[1134]: I1002 19:47:09.910739 1134 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaResponseHandlerAction action failed. Aborting processing. Oct 2 19:47:09.910740 update_engine[1134]: I1002 19:47:09.910742 1134 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaResponseHandlerAction Oct 2 19:47:09.910740 update_engine[1134]: I1002 19:47:09.910745 1134 update_attempter.cc:302] Processing Done. Oct 2 19:47:09.911069 update_engine[1134]: I1002 19:47:09.910758 1134 update_attempter.cc:338] No update. Oct 2 19:47:09.911069 update_engine[1134]: I1002 19:47:09.910768 1134 update_check_scheduler.cc:74] Next update check in 42m45s Oct 2 19:47:09.911114 locksmithd[1171]: LastCheckedTime=1696276029 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 2 19:47:10.730849 kubelet[1444]: E1002 19:47:10.730790 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:10.801664 kubelet[1444]: E1002 19:47:10.801606 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:11.802721 kubelet[1444]: E1002 19:47:11.802641 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:12.803531 kubelet[1444]: E1002 19:47:12.803471 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:13.804492 kubelet[1444]: E1002 19:47:13.804437 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:14.805232 kubelet[1444]: E1002 19:47:14.805170 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:15.732378 kubelet[1444]: E1002 19:47:15.732337 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:15.806006 kubelet[1444]: E1002 19:47:15.805962 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:15.824527 kubelet[1444]: E1002 19:47:15.824458 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:15.824813 kubelet[1444]: E1002 19:47:15.824783 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad)\"" pod="kube-system/cilium-7cxv8" podUID=0c322d12-c8ea-4043-9066-d1a85a7c83ad Oct 2 19:47:16.806152 kubelet[1444]: E1002 19:47:16.806090 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:17.806613 kubelet[1444]: E1002 19:47:17.806558 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:18.807360 kubelet[1444]: E1002 19:47:18.807289 1444 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:19.807759 kubelet[1444]: E1002 19:47:19.807683 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:20.630652 kubelet[1444]: E1002 19:47:20.630607 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:20.647046 env[1145]: time="2023-10-02T19:47:20.646993661Z" level=info msg="StopPodSandbox for \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\"" Oct 2 19:47:20.647307 env[1145]: time="2023-10-02T19:47:20.647099702Z" level=info msg="TearDown network for sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" successfully" Oct 2 19:47:20.647307 env[1145]: time="2023-10-02T19:47:20.647134182Z" level=info msg="StopPodSandbox for \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" returns successfully" Oct 2 19:47:20.647453 env[1145]: time="2023-10-02T19:47:20.647422224Z" level=info msg="RemovePodSandbox for \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\"" Oct 2 19:47:20.647519 env[1145]: time="2023-10-02T19:47:20.647447944Z" level=info msg="Forcibly stopping sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\"" Oct 2 19:47:20.647555 env[1145]: time="2023-10-02T19:47:20.647523265Z" level=info msg="TearDown network for sandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" successfully" Oct 2 19:47:20.649803 env[1145]: time="2023-10-02T19:47:20.649761118Z" level=info msg="RemovePodSandbox \"244f7187305041776477be2c13e8e0c3556c4d1bf250ac5e9d804d7f247d7f5f\" returns successfully" Oct 2 19:47:20.651560 env[1145]: time="2023-10-02T19:47:20.651527808Z" level=info msg="StopPodSandbox for \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\"" Oct 2 19:47:20.651638 env[1145]: time="2023-10-02T19:47:20.651601569Z" level=info msg="TearDown network for sandbox \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\" successfully" Oct 2 19:47:20.651638 env[1145]: time="2023-10-02T19:47:20.651632849Z" level=info msg="StopPodSandbox for \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\" returns successfully" Oct 2 19:47:20.651975 env[1145]: time="2023-10-02T19:47:20.651942771Z" level=info msg="RemovePodSandbox for \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\"" Oct 2 19:47:20.652010 env[1145]: time="2023-10-02T19:47:20.651977571Z" level=info msg="Forcibly stopping sandbox \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\"" Oct 2 19:47:20.652053 env[1145]: time="2023-10-02T19:47:20.652038531Z" level=info msg="TearDown network for sandbox \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\" successfully" Oct 2 19:47:20.654077 env[1145]: time="2023-10-02T19:47:20.654041703Z" level=info msg="RemovePodSandbox \"2734c0519261f93f11ba1bb7121ecfa74343c87395a92a93c4eff7e0be96f3c3\" returns successfully" Oct 2 19:47:20.732977 kubelet[1444]: E1002 19:47:20.732950 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:20.808570 kubelet[1444]: E1002 19:47:20.808535 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:21.808772 kubelet[1444]: E1002 19:47:21.808712 1444 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:22.809473 kubelet[1444]: E1002 19:47:22.809419 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:23.810233 kubelet[1444]: E1002 19:47:23.810166 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:24.810794 kubelet[1444]: E1002 19:47:24.810745 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:25.734622 kubelet[1444]: E1002 19:47:25.734590 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:25.811231 kubelet[1444]: E1002 19:47:25.811198 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:26.812192 kubelet[1444]: E1002 19:47:26.812125 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:27.812917 kubelet[1444]: E1002 19:47:27.812870 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:28.813730 kubelet[1444]: E1002 19:47:28.813677 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:29.814176 kubelet[1444]: E1002 19:47:29.814120 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:29.825679 kubelet[1444]: E1002 19:47:29.825080 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:29.827406 env[1145]: time="2023-10-02T19:47:29.827355199Z" level=info msg="CreateContainer within sandbox \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:47:29.857606 env[1145]: time="2023-10-02T19:47:29.857547360Z" level=info msg="CreateContainer within sandbox \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf\"" Oct 2 19:47:29.859238 env[1145]: time="2023-10-02T19:47:29.858490046Z" level=info msg="StartContainer for \"7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf\"" Oct 2 19:47:29.887746 systemd[1]: Started cri-containerd-7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf.scope. Oct 2 19:47:29.910726 systemd[1]: cri-containerd-7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf.scope: Deactivated successfully. 
Oct 2 19:47:29.919251 env[1145]: time="2023-10-02T19:47:29.919205450Z" level=info msg="shim disconnected" id=7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf Oct 2 19:47:29.919251 env[1145]: time="2023-10-02T19:47:29.919250571Z" level=warning msg="cleaning up after shim disconnected" id=7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf namespace=k8s.io Oct 2 19:47:29.919442 env[1145]: time="2023-10-02T19:47:29.919260811Z" level=info msg="cleaning up dead shim" Oct 2 19:47:29.928139 env[1145]: time="2023-10-02T19:47:29.928090298Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2440 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:47:29Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:47:29.928359 env[1145]: time="2023-10-02T19:47:29.928316819Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:47:29.930788 env[1145]: time="2023-10-02T19:47:29.930739472Z" level=error msg="Failed to pipe stderr of container \"7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf\"" error="reading from a closed fifo" Oct 2 19:47:29.930887 env[1145]: time="2023-10-02T19:47:29.930756912Z" level=error msg="Failed to pipe stdout of container \"7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf\"" error="reading from a closed fifo" Oct 2 19:47:29.932193 env[1145]: time="2023-10-02T19:47:29.932143640Z" level=error msg="StartContainer for \"7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:47:29.932967 kubelet[1444]: E1002 19:47:29.932370 1444 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf" Oct 2 19:47:29.932967 kubelet[1444]: E1002 19:47:29.932473 1444 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:47:29.932967 kubelet[1444]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:47:29.932967 kubelet[1444]: rm /hostbin/cilium-mount Oct 2 19:47:29.933279 kubelet[1444]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-r8gff,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:47:29.933345 kubelet[1444]: E1002 19:47:29.932524 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-7cxv8" podUID=0c322d12-c8ea-4043-9066-d1a85a7c83ad Oct 2 19:47:30.227725 kubelet[1444]: I1002 19:47:30.227632 1444 scope.go:115] "RemoveContainer" containerID="0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066" Oct 2 19:47:30.227991 kubelet[1444]: I1002 19:47:30.227969 1444 scope.go:115] "RemoveContainer" containerID="0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066" Oct 2 19:47:30.229553 env[1145]: time="2023-10-02T19:47:30.229368459Z" level=info msg="RemoveContainer for \"0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066\"" Oct 2 19:47:30.229822 env[1145]: time="2023-10-02T19:47:30.229627300Z" level=info msg="RemoveContainer for \"0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066\"" Oct 2 19:47:30.229907 env[1145]: time="2023-10-02T19:47:30.229878461Z" level=error msg="RemoveContainer for \"0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066\" failed" error="failed to set removing state for container \"0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066\": container is already in removing state" Oct 2 19:47:30.230174 kubelet[1444]: E1002 19:47:30.230082 1444 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066\": container is already in removing state" 
containerID="0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066" Oct 2 19:47:30.230174 kubelet[1444]: E1002 19:47:30.230114 1444 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066": container is already in removing state; Skipping pod "cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad)" Oct 2 19:47:30.230535 kubelet[1444]: E1002 19:47:30.230187 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:30.230535 kubelet[1444]: E1002 19:47:30.230410 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad)\"" pod="kube-system/cilium-7cxv8" podUID=0c322d12-c8ea-4043-9066-d1a85a7c83ad Oct 2 19:47:30.234337 env[1145]: time="2023-10-02T19:47:30.234297285Z" level=info msg="RemoveContainer for \"0ef907e90e90b613e66c6f11b9eebb9ef2d28cb9637b0d086566a87a63358066\" returns successfully" Oct 2 19:47:30.735917 kubelet[1444]: E1002 19:47:30.735875 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:30.814572 kubelet[1444]: E1002 19:47:30.814528 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:30.836376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf-rootfs.mount: Deactivated successfully. 
Oct 2 19:47:31.815309 kubelet[1444]: E1002 19:47:31.815250 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:32.815980 kubelet[1444]: E1002 19:47:32.815914 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:33.023805 kubelet[1444]: W1002 19:47:33.023743 1444 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c322d12_c8ea_4043_9066_d1a85a7c83ad.slice/cri-containerd-7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf.scope WatchSource:0}: task 7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf not found: not found Oct 2 19:47:33.816937 kubelet[1444]: E1002 19:47:33.816886 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:34.817774 kubelet[1444]: E1002 19:47:34.817731 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:35.737278 kubelet[1444]: E1002 19:47:35.737241 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:35.818619 kubelet[1444]: E1002 19:47:35.818564 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:36.819225 kubelet[1444]: E1002 19:47:36.819160 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:37.819417 kubelet[1444]: E1002 19:47:37.819357 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:38.819552 kubelet[1444]: E1002 19:47:38.819476 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:39.819710 kubelet[1444]: E1002 19:47:39.819661 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:40.630297 kubelet[1444]: E1002 19:47:40.630258 1444 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:40.738644 kubelet[1444]: E1002 19:47:40.738580 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:40.820374 kubelet[1444]: E1002 19:47:40.820327 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:40.824438 kubelet[1444]: E1002 19:47:40.824406 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:47:41.821295 kubelet[1444]: E1002 19:47:41.821241 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:41.824802 kubelet[1444]: E1002 19:47:41.824775 1444 dns.go:157] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 
19:47:41.825007 kubelet[1444]: E1002 19:47:41.824979 1444 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-7cxv8_kube-system(0c322d12-c8ea-4043-9066-d1a85a7c83ad)\"" pod="kube-system/cilium-7cxv8" podUID=0c322d12-c8ea-4043-9066-d1a85a7c83ad Oct 2 19:47:42.822288 kubelet[1444]: E1002 19:47:42.822196 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:43.822428 kubelet[1444]: E1002 19:47:43.822363 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:44.822741 kubelet[1444]: E1002 19:47:44.822689 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:45.373982 env[1145]: time="2023-10-02T19:47:45.373935214Z" level=info msg="StopPodSandbox for \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\"" Oct 2 19:47:45.374531 env[1145]: time="2023-10-02T19:47:45.374485537Z" level=info msg="Container to stop \"7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:47:45.375882 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf-shm.mount: Deactivated successfully. Oct 2 19:47:45.378450 env[1145]: time="2023-10-02T19:47:45.378406555Z" level=info msg="StopContainer for \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\" with timeout 30 (s)" Oct 2 19:47:45.380579 env[1145]: time="2023-10-02T19:47:45.380545644Z" level=info msg="Stop container \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\" with signal terminated" Oct 2 19:47:45.380806 systemd[1]: cri-containerd-4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf.scope: Deactivated successfully. Oct 2 19:47:45.380000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:47:45.383424 kernel: kauditd_printk_skb: 107 callbacks suppressed Oct 2 19:47:45.383574 kernel: audit: type=1334 audit(1696276065.380:742): prog-id=86 op=UNLOAD Oct 2 19:47:45.389000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:47:45.391533 kernel: audit: type=1334 audit(1696276065.389:743): prog-id=89 op=UNLOAD Oct 2 19:47:45.399967 systemd[1]: cri-containerd-531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258.scope: Deactivated successfully. Oct 2 19:47:45.399000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:47:45.401519 kernel: audit: type=1334 audit(1696276065.399:744): prog-id=90 op=UNLOAD Oct 2 19:47:45.403079 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf-rootfs.mount: Deactivated successfully. 
Oct 2 19:47:45.405000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:47:45.407529 kernel: audit: type=1334 audit(1696276065.405:745): prog-id=93 op=UNLOAD Oct 2 19:47:45.417125 env[1145]: time="2023-10-02T19:47:45.417076331Z" level=info msg="shim disconnected" id=4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf Oct 2 19:47:45.417125 env[1145]: time="2023-10-02T19:47:45.417123371Z" level=warning msg="cleaning up after shim disconnected" id=4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf namespace=k8s.io Oct 2 19:47:45.417394 env[1145]: time="2023-10-02T19:47:45.417134611Z" level=info msg="cleaning up dead shim" Oct 2 19:47:45.420947 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258-rootfs.mount: Deactivated successfully. Oct 2 19:47:45.425287 env[1145]: time="2023-10-02T19:47:45.425241288Z" level=info msg="shim disconnected" id=531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258 Oct 2 19:47:45.425287 env[1145]: time="2023-10-02T19:47:45.425287208Z" level=warning msg="cleaning up after shim disconnected" id=531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258 namespace=k8s.io Oct 2 19:47:45.425481 env[1145]: time="2023-10-02T19:47:45.425296968Z" level=info msg="cleaning up dead shim" Oct 2 19:47:45.428345 env[1145]: time="2023-10-02T19:47:45.428311342Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2491 runtime=io.containerd.runc.v2\n" Oct 2 19:47:45.428674 env[1145]: time="2023-10-02T19:47:45.428648823Z" level=info msg="TearDown network for sandbox \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\" successfully" Oct 2 19:47:45.428718 env[1145]: time="2023-10-02T19:47:45.428674744Z" level=info msg="StopPodSandbox for \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\" returns successfully" Oct 2 19:47:45.436281 env[1145]: time="2023-10-02T19:47:45.436246858Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2504 runtime=io.containerd.runc.v2\n" Oct 2 19:47:45.437948 env[1145]: time="2023-10-02T19:47:45.437917026Z" level=info msg="StopContainer for \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\" returns successfully" Oct 2 19:47:45.438422 env[1145]: time="2023-10-02T19:47:45.438391188Z" level=info msg="StopPodSandbox for \"daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0\"" Oct 2 19:47:45.438479 env[1145]: time="2023-10-02T19:47:45.438452508Z" level=info msg="Container to stop \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:47:45.439551 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0-shm.mount: Deactivated successfully. Oct 2 19:47:45.447098 systemd[1]: cri-containerd-daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0.scope: Deactivated successfully. 
Oct 2 19:47:45.446000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:47:45.448605 kernel: audit: type=1334 audit(1696276065.446:746): prog-id=82 op=UNLOAD Oct 2 19:47:45.450000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:47:45.451587 kernel: audit: type=1334 audit(1696276065.450:747): prog-id=85 op=UNLOAD Oct 2 19:47:45.464757 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0-rootfs.mount: Deactivated successfully. Oct 2 19:47:45.469820 env[1145]: time="2023-10-02T19:47:45.469767051Z" level=info msg="shim disconnected" id=daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0 Oct 2 19:47:45.469820 env[1145]: time="2023-10-02T19:47:45.469817171Z" level=warning msg="cleaning up after shim disconnected" id=daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0 namespace=k8s.io Oct 2 19:47:45.469989 env[1145]: time="2023-10-02T19:47:45.469830171Z" level=info msg="cleaning up dead shim" Oct 2 19:47:45.477998 env[1145]: time="2023-10-02T19:47:45.477950248Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:47:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2535 runtime=io.containerd.runc.v2\n" Oct 2 19:47:45.478271 env[1145]: time="2023-10-02T19:47:45.478235889Z" level=info msg="TearDown network for sandbox \"daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0\" successfully" Oct 2 19:47:45.478271 env[1145]: time="2023-10-02T19:47:45.478263769Z" level=info msg="StopPodSandbox for \"daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0\" returns successfully" Oct 2 19:47:45.560216 kubelet[1444]: I1002 19:47:45.560153 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-config-path\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.560216 kubelet[1444]: I1002 19:47:45.560202 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r8gff\" (UniqueName: \"kubernetes.io/projected/0c322d12-c8ea-4043-9066-d1a85a7c83ad-kube-api-access-r8gff\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.560216 kubelet[1444]: I1002 19:47:45.560223 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-host-proc-sys-net\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.560460 kubelet[1444]: I1002 19:47:45.560243 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-lib-modules\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.560460 kubelet[1444]: I1002 19:47:45.560262 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-etc-cni-netd\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.560460 kubelet[1444]: I1002 19:47:45.560279 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-run\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.560460 kubelet[1444]: I1002 19:47:45.560296 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cni-path\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.560460 kubelet[1444]: I1002 19:47:45.560312 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-xtables-lock\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.560460 kubelet[1444]: I1002 19:47:45.560330 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c322d12-c8ea-4043-9066-d1a85a7c83ad-hubble-tls\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.561383 kubelet[1444]: I1002 19:47:45.560345 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-bpf-maps\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.561383 kubelet[1444]: I1002 19:47:45.560362 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-hostproc\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.561383 kubelet[1444]: I1002 19:47:45.560386 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-ipsec-secrets\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.561383 kubelet[1444]: I1002 19:47:45.560403 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-cgroup\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.561383 kubelet[1444]: I1002 19:47:45.560434 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c322d12-c8ea-4043-9066-d1a85a7c83ad-clustermesh-secrets\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.561383 kubelet[1444]: W1002 19:47:45.560409 1444 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/0c322d12-c8ea-4043-9066-d1a85a7c83ad/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:47:45.561569 kubelet[1444]: I1002 19:47:45.560484 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:45.561569 kubelet[1444]: I1002 19:47:45.560561 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:45.561569 kubelet[1444]: I1002 19:47:45.560594 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:45.561569 kubelet[1444]: I1002 19:47:45.560610 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:45.561569 kubelet[1444]: I1002 19:47:45.560627 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-hostproc" (OuterVolumeSpecName: "hostproc") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:45.561688 kubelet[1444]: I1002 19:47:45.560808 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:45.561688 kubelet[1444]: I1002 19:47:45.560830 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:45.561688 kubelet[1444]: I1002 19:47:45.560845 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:45.561688 kubelet[1444]: I1002 19:47:45.561026 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:45.561688 kubelet[1444]: I1002 19:47:45.561050 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cni-path" (OuterVolumeSpecName: "cni-path") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:47:45.561803 kubelet[1444]: I1002 19:47:45.560454 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-host-proc-sys-kernel\") pod \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\" (UID: \"0c322d12-c8ea-4043-9066-d1a85a7c83ad\") " Oct 2 19:47:45.561803 kubelet[1444]: I1002 19:47:45.561115 1444 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-cgroup\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.561803 kubelet[1444]: I1002 19:47:45.561127 1444 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-bpf-maps\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.561803 kubelet[1444]: I1002 19:47:45.561137 1444 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-hostproc\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.561803 kubelet[1444]: I1002 19:47:45.561147 1444 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-host-proc-sys-kernel\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.561803 kubelet[1444]: I1002 19:47:45.561158 1444 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-host-proc-sys-net\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.561803 kubelet[1444]: I1002 19:47:45.561168 1444 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-lib-modules\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.561803 kubelet[1444]: I1002 19:47:45.561176 1444 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cni-path\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.562075 kubelet[1444]: I1002 19:47:45.561186 1444 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-xtables-lock\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.562075 kubelet[1444]: I1002 19:47:45.561194 1444 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-etc-cni-netd\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.562075 kubelet[1444]: I1002 19:47:45.561203 1444 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-run\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.563542 kubelet[1444]: I1002 19:47:45.562639 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:47:45.563753 kubelet[1444]: I1002 19:47:45.563720 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:47:45.563753 kubelet[1444]: I1002 19:47:45.563740 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c322d12-c8ea-4043-9066-d1a85a7c83ad-kube-api-access-r8gff" (OuterVolumeSpecName: "kube-api-access-r8gff") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "kube-api-access-r8gff". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:47:45.564761 kubelet[1444]: I1002 19:47:45.564715 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c322d12-c8ea-4043-9066-d1a85a7c83ad-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:47:45.565667 kubelet[1444]: I1002 19:47:45.565637 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c322d12-c8ea-4043-9066-d1a85a7c83ad-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0c322d12-c8ea-4043-9066-d1a85a7c83ad" (UID: "0c322d12-c8ea-4043-9066-d1a85a7c83ad"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:47:45.662242 kubelet[1444]: I1002 19:47:45.662092 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72d550b3-3cce-4453-be65-50b5b87d174b-cilium-config-path\") pod \"72d550b3-3cce-4453-be65-50b5b87d174b\" (UID: \"72d550b3-3cce-4453-be65-50b5b87d174b\") " Oct 2 19:47:45.662242 kubelet[1444]: I1002 19:47:45.662136 1444 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4dm6\" (UniqueName: \"kubernetes.io/projected/72d550b3-3cce-4453-be65-50b5b87d174b-kube-api-access-p4dm6\") pod \"72d550b3-3cce-4453-be65-50b5b87d174b\" (UID: \"72d550b3-3cce-4453-be65-50b5b87d174b\") " Oct 2 19:47:45.662242 kubelet[1444]: I1002 19:47:45.662159 1444 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c322d12-c8ea-4043-9066-d1a85a7c83ad-clustermesh-secrets\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.662242 kubelet[1444]: I1002 19:47:45.662172 1444 reconciler.go:399] "Volume detached for volume \"kube-api-access-r8gff\" (UniqueName: \"kubernetes.io/projected/0c322d12-c8ea-4043-9066-d1a85a7c83ad-kube-api-access-r8gff\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.662242 kubelet[1444]: I1002 19:47:45.662184 1444 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-config-path\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.662242 kubelet[1444]: I1002 19:47:45.662193 1444 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c322d12-c8ea-4043-9066-d1a85a7c83ad-hubble-tls\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.662242 kubelet[1444]: I1002 19:47:45.662203 1444 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0c322d12-c8ea-4043-9066-d1a85a7c83ad-cilium-ipsec-secrets\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.662538 kubelet[1444]: W1002 19:47:45.662396 1444 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/72d550b3-3cce-4453-be65-50b5b87d174b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:47:45.665131 kubelet[1444]: I1002 19:47:45.665092 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72d550b3-3cce-4453-be65-50b5b87d174b-kube-api-access-p4dm6" (OuterVolumeSpecName: "kube-api-access-p4dm6") pod "72d550b3-3cce-4453-be65-50b5b87d174b" (UID: "72d550b3-3cce-4453-be65-50b5b87d174b"). InnerVolumeSpecName "kube-api-access-p4dm6". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:47:45.666525 kubelet[1444]: I1002 19:47:45.666475 1444 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/72d550b3-3cce-4453-be65-50b5b87d174b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "72d550b3-3cce-4453-be65-50b5b87d174b" (UID: "72d550b3-3cce-4453-be65-50b5b87d174b"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:47:45.739846 kubelet[1444]: E1002 19:47:45.739823 1444 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:47:45.763061 kubelet[1444]: I1002 19:47:45.763036 1444 reconciler.go:399] "Volume detached for volume \"kube-api-access-p4dm6\" (UniqueName: \"kubernetes.io/projected/72d550b3-3cce-4453-be65-50b5b87d174b-kube-api-access-p4dm6\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.763061 kubelet[1444]: I1002 19:47:45.763063 1444 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/72d550b3-3cce-4453-be65-50b5b87d174b-cilium-config-path\") on node \"10.0.0.11\" DevicePath \"\"" Oct 2 19:47:45.823668 kubelet[1444]: E1002 19:47:45.823644 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:46.254960 kubelet[1444]: I1002 19:47:46.254928 1444 scope.go:115] "RemoveContainer" containerID="7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf" Oct 2 19:47:46.256035 env[1145]: time="2023-10-02T19:47:46.255992379Z" level=info msg="RemoveContainer for \"7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf\"" Oct 2 19:47:46.258817 env[1145]: time="2023-10-02T19:47:46.258783232Z" level=info msg="RemoveContainer for \"7bf971303cedaf0fcf11f22b4689c2347a4db260a5533381348af0774ae39ddf\" returns successfully" Oct 2 19:47:46.258940 systemd[1]: Removed slice kubepods-burstable-pod0c322d12_c8ea_4043_9066_d1a85a7c83ad.slice. Oct 2 19:47:46.259133 kubelet[1444]: I1002 19:47:46.259115 1444 scope.go:115] "RemoveContainer" containerID="531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258" Oct 2 19:47:46.260391 env[1145]: time="2023-10-02T19:47:46.260353879Z" level=info msg="RemoveContainer for \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\"" Oct 2 19:47:46.262332 systemd[1]: Removed slice kubepods-besteffort-pod72d550b3_3cce_4453_be65_50b5b87d174b.slice. 
Oct 2 19:47:46.262775 env[1145]: time="2023-10-02T19:47:46.262730770Z" level=info msg="RemoveContainer for \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\" returns successfully" Oct 2 19:47:46.263223 kubelet[1444]: I1002 19:47:46.263204 1444 scope.go:115] "RemoveContainer" containerID="531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258" Oct 2 19:47:46.263467 env[1145]: time="2023-10-02T19:47:46.263402653Z" level=error msg="ContainerStatus for \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\": not found" Oct 2 19:47:46.263630 kubelet[1444]: E1002 19:47:46.263616 1444 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\": not found" containerID="531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258" Oct 2 19:47:46.263680 kubelet[1444]: I1002 19:47:46.263644 1444 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258} err="failed to get container status \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\": rpc error: code = NotFound desc = an error occurred when try to find container \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\": not found" Oct 2 19:47:46.375852 systemd[1]: var-lib-kubelet-pods-0c322d12\x2dc8ea\x2d4043\x2d9066\x2dd1a85a7c83ad-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dr8gff.mount: Deactivated successfully. Oct 2 19:47:46.375954 systemd[1]: var-lib-kubelet-pods-0c322d12\x2dc8ea\x2d4043\x2d9066\x2dd1a85a7c83ad-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:47:46.376009 systemd[1]: var-lib-kubelet-pods-0c322d12\x2dc8ea\x2d4043\x2d9066\x2dd1a85a7c83ad-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:47:46.376060 systemd[1]: var-lib-kubelet-pods-0c322d12\x2dc8ea\x2d4043\x2d9066\x2dd1a85a7c83ad-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:47:46.376105 systemd[1]: var-lib-kubelet-pods-72d550b3\x2d3cce\x2d4453\x2dbe65\x2d50b5b87d174b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dp4dm6.mount: Deactivated successfully. 
Oct 2 19:47:46.824292 kubelet[1444]: E1002 19:47:46.824249 1444 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:47:46.826209 env[1145]: time="2023-10-02T19:47:46.826171151Z" level=info msg="StopPodSandbox for \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\"" Oct 2 19:47:46.826428 env[1145]: time="2023-10-02T19:47:46.826272031Z" level=info msg="TearDown network for sandbox \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\" successfully" Oct 2 19:47:46.826428 env[1145]: time="2023-10-02T19:47:46.826315191Z" level=info msg="StopPodSandbox for \"4bb4493126d1646241b7d3809d143601f7d464fe4b67fce61c1e87a51df01dbf\" returns successfully" Oct 2 19:47:46.826567 env[1145]: time="2023-10-02T19:47:46.826541192Z" level=info msg="StopContainer for \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\" with timeout 1 (s)" Oct 2 19:47:46.826621 env[1145]: time="2023-10-02T19:47:46.826576192Z" level=error msg="StopContainer for \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\": not found" Oct 2 19:47:46.826725 kubelet[1444]: E1002 19:47:46.826711 1444 remote_runtime.go:505] "StopContainer from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258\": not found" containerID="531b3aed72efeccc00685fea06fdbca222f6cd6e90a11ab4f6da8915bbbc4258" Oct 2 19:47:46.827147 kubelet[1444]: I1002 19:47:46.827128 1444 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0c322d12-c8ea-4043-9066-d1a85a7c83ad path="/var/lib/kubelet/pods/0c322d12-c8ea-4043-9066-d1a85a7c83ad/volumes" Oct 2 19:47:46.827245 env[1145]: time="2023-10-02T19:47:46.827211955Z" level=info msg="StopPodSandbox for \"daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0\"" Oct 2 19:47:46.827311 env[1145]: time="2023-10-02T19:47:46.827278676Z" level=info msg="TearDown network for sandbox \"daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0\" successfully" Oct 2 19:47:46.827350 env[1145]: time="2023-10-02T19:47:46.827308396Z" level=info msg="StopPodSandbox for \"daded4882d383235ac40e5c9702465a056ef4afebd444919232e55c8e3c221d0\" returns successfully" Oct 2 19:47:46.827779 kubelet[1444]: I1002 19:47:46.827762 1444 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=72d550b3-3cce-4453-be65-50b5b87d174b path="/var/lib/kubelet/pods/72d550b3-3cce-4453-be65-50b5b87d174b/volumes"