Oct 2 19:30:40.787013 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Oct 2 19:30:40.787033 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP PREEMPT Mon Oct 2 17:55:37 -00 2023
Oct 2 19:30:40.787041 kernel: efi: EFI v2.70 by EDK II
Oct 2 19:30:40.787047 kernel: efi: SMBIOS 3.0=0xd9260000 ACPI 2.0=0xd9240000 MEMATTR=0xda32b018 RNG=0xd9220018 MEMRESERVE=0xd9521c18
Oct 2 19:30:40.787052 kernel: random: crng init done
Oct 2 19:30:40.787058 kernel: ACPI: Early table checksum verification disabled
Oct 2 19:30:40.787064 kernel: ACPI: RSDP 0x00000000D9240000 000024 (v02 BOCHS )
Oct 2 19:30:40.787071 kernel: ACPI: XSDT 0x00000000D9230000 000064 (v01 BOCHS BXPC 00000001 01000013)
Oct 2 19:30:40.787077 kernel: ACPI: FACP 0x00000000D91E0000 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:30:40.787082 kernel: ACPI: DSDT 0x00000000D91F0000 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:30:40.787088 kernel: ACPI: APIC 0x00000000D91D0000 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:30:40.787094 kernel: ACPI: PPTT 0x00000000D91C0000 00009C (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:30:40.787099 kernel: ACPI: GTDT 0x00000000D91B0000 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:30:40.787105 kernel: ACPI: MCFG 0x00000000D91A0000 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:30:40.787113 kernel: ACPI: SPCR 0x00000000D9190000 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:30:40.787119 kernel: ACPI: DBG2 0x00000000D9180000 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:30:40.787125 kernel: ACPI: IORT 0x00000000D9170000 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Oct 2 19:30:40.787131 kernel: ACPI: SPCR: console: pl011,mmio,0x9000000,9600
Oct 2 19:30:40.787137 kernel: NUMA: Failed to initialise from firmware
Oct 2 19:30:40.787143 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000000dcffffff]
Oct 2 19:30:40.787149 kernel: NUMA: NODE_DATA [mem 0xdcb0a900-0xdcb0ffff]
Oct 2 19:30:40.787197 kernel: Zone ranges:
Oct 2 19:30:40.787206 kernel: DMA [mem 0x0000000040000000-0x00000000dcffffff]
Oct 2 19:30:40.787214 kernel: DMA32 empty
Oct 2 19:30:40.787230 kernel: Normal empty
Oct 2 19:30:40.787236 kernel: Movable zone start for each node
Oct 2 19:30:40.787242 kernel: Early memory node ranges
Oct 2 19:30:40.787247 kernel: node 0: [mem 0x0000000040000000-0x00000000d924ffff]
Oct 2 19:30:40.787253 kernel: node 0: [mem 0x00000000d9250000-0x00000000d951ffff]
Oct 2 19:30:40.787259 kernel: node 0: [mem 0x00000000d9520000-0x00000000dc7fffff]
Oct 2 19:30:40.787265 kernel: node 0: [mem 0x00000000dc800000-0x00000000dc88ffff]
Oct 2 19:30:40.787271 kernel: node 0: [mem 0x00000000dc890000-0x00000000dc89ffff]
Oct 2 19:30:40.787277 kernel: node 0: [mem 0x00000000dc8a0000-0x00000000dc9bffff]
Oct 2 19:30:40.787282 kernel: node 0: [mem 0x00000000dc9c0000-0x00000000dcffffff]
Oct 2 19:30:40.787288 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000000dcffffff]
Oct 2 19:30:40.787295 kernel: On node 0, zone DMA: 12288 pages in unavailable ranges
Oct 2 19:30:40.787301 kernel: psci: probing for conduit method from ACPI.
Oct 2 19:30:40.787307 kernel: psci: PSCIv1.1 detected in firmware.
Oct 2 19:30:40.787312 kernel: psci: Using standard PSCI v0.2 function IDs
Oct 2 19:30:40.787318 kernel: psci: Trusted OS migration not required
Oct 2 19:30:40.787327 kernel: psci: SMC Calling Convention v1.1
Oct 2 19:30:40.787333 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Oct 2 19:30:40.787341 kernel: ACPI: SRAT not present
Oct 2 19:30:40.787347 kernel: percpu: Embedded 29 pages/cpu s79960 r8192 d30632 u118784
Oct 2 19:30:40.787353 kernel: pcpu-alloc: s79960 r8192 d30632 u118784 alloc=29*4096
Oct 2 19:30:40.787360 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Oct 2 19:30:40.787366 kernel: Detected PIPT I-cache on CPU0
Oct 2 19:30:40.787372 kernel: CPU features: detected: GIC system register CPU interface
Oct 2 19:30:40.787378 kernel: CPU features: detected: Hardware dirty bit management
Oct 2 19:30:40.787385 kernel: CPU features: detected: Spectre-v4
Oct 2 19:30:40.787391 kernel: CPU features: detected: Spectre-BHB
Oct 2 19:30:40.787399 kernel: CPU features: kernel page table isolation forced ON by KASLR
Oct 2 19:30:40.787405 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Oct 2 19:30:40.787411 kernel: CPU features: detected: ARM erratum 1418040
Oct 2 19:30:40.787418 kernel: Built 1 zonelists, mobility grouping on. Total pages: 633024
Oct 2 19:30:40.787424 kernel: Policy zone: DMA
Oct 2 19:30:40.787431 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca
Oct 2 19:30:40.787438 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 2 19:30:40.787444 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 2 19:30:40.787450 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Oct 2 19:30:40.787457 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 2 19:30:40.787463 kernel: Memory: 2459276K/2572288K available (9792K kernel code, 2092K rwdata, 7548K rodata, 34560K init, 779K bss, 113012K reserved, 0K cma-reserved)
Oct 2 19:30:40.787471 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Oct 2 19:30:40.787477 kernel: trace event string verifier disabled
Oct 2 19:30:40.787484 kernel: rcu: Preemptible hierarchical RCU implementation.
Oct 2 19:30:40.787490 kernel: rcu: RCU event tracing is enabled.
Oct 2 19:30:40.787497 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Oct 2 19:30:40.787503 kernel: Trampoline variant of Tasks RCU enabled.
Oct 2 19:30:40.787509 kernel: Tracing variant of Tasks RCU enabled.
Oct 2 19:30:40.787516 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 2 19:30:40.787522 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Oct 2 19:30:40.787528 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Oct 2 19:30:40.787534 kernel: GICv3: 256 SPIs implemented
Oct 2 19:30:40.787542 kernel: GICv3: 0 Extended SPIs implemented
Oct 2 19:30:40.787548 kernel: GICv3: Distributor has no Range Selector support
Oct 2 19:30:40.787554 kernel: Root IRQ handler: gic_handle_irq
Oct 2 19:30:40.787561 kernel: GICv3: 16 PPIs implemented
Oct 2 19:30:40.787567 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Oct 2 19:30:40.787573 kernel: ACPI: SRAT not present
Oct 2 19:30:40.787579 kernel: ITS [mem 0x08080000-0x0809ffff]
Oct 2 19:30:40.787586 kernel: ITS@0x0000000008080000: allocated 8192 Devices @400b0000 (indirect, esz 8, psz 64K, shr 1)
Oct 2 19:30:40.787592 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @400c0000 (flat, esz 8, psz 64K, shr 1)
Oct 2 19:30:40.787598 kernel: GICv3: using LPI property table @0x00000000400d0000
Oct 2 19:30:40.787605 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000000400e0000
Oct 2 19:30:40.787611 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 2 19:30:40.787618 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Oct 2 19:30:40.787627 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Oct 2 19:30:40.787634 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Oct 2 19:30:40.787640 kernel: arm-pv: using stolen time PV
Oct 2 19:30:40.787647 kernel: Console: colour dummy device 80x25
Oct 2 19:30:40.791202 kernel: ACPI: Core revision 20210730
Oct 2 19:30:40.791246 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Oct 2 19:30:40.791254 kernel: pid_max: default: 32768 minimum: 301
Oct 2 19:30:40.791261 kernel: LSM: Security Framework initializing
Oct 2 19:30:40.791268 kernel: SELinux: Initializing.
Oct 2 19:30:40.791280 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 2 19:30:40.791287 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Oct 2 19:30:40.791294 kernel: rcu: Hierarchical SRCU implementation.
Oct 2 19:30:40.791301 kernel: Platform MSI: ITS@0x8080000 domain created
Oct 2 19:30:40.791307 kernel: PCI/MSI: ITS@0x8080000 domain created
Oct 2 19:30:40.791314 kernel: Remapping and enabling EFI services.
Oct 2 19:30:40.791321 kernel: smp: Bringing up secondary CPUs ...
Oct 2 19:30:40.791327 kernel: Detected PIPT I-cache on CPU1
Oct 2 19:30:40.791334 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Oct 2 19:30:40.791342 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000000400f0000
Oct 2 19:30:40.791349 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 2 19:30:40.791356 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Oct 2 19:30:40.791363 kernel: Detected PIPT I-cache on CPU2
Oct 2 19:30:40.791370 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Oct 2 19:30:40.791377 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000040100000
Oct 2 19:30:40.791383 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 2 19:30:40.791390 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Oct 2 19:30:40.791396 kernel: Detected PIPT I-cache on CPU3
Oct 2 19:30:40.791403 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Oct 2 19:30:40.791411 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000040110000
Oct 2 19:30:40.791418 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Oct 2 19:30:40.791424 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Oct 2 19:30:40.791431 kernel: smp: Brought up 1 node, 4 CPUs
Oct 2 19:30:40.791443 kernel: SMP: Total of 4 processors activated.
Oct 2 19:30:40.791452 kernel: CPU features: detected: 32-bit EL0 Support
Oct 2 19:30:40.791459 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Oct 2 19:30:40.791466 kernel: CPU features: detected: Common not Private translations
Oct 2 19:30:40.791473 kernel: CPU features: detected: CRC32 instructions
Oct 2 19:30:40.791480 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Oct 2 19:30:40.791487 kernel: CPU features: detected: LSE atomic instructions
Oct 2 19:30:40.791494 kernel: CPU features: detected: Privileged Access Never
Oct 2 19:30:40.791502 kernel: CPU features: detected: RAS Extension Support
Oct 2 19:30:40.791509 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Oct 2 19:30:40.791516 kernel: CPU: All CPU(s) started at EL1
Oct 2 19:30:40.791523 kernel: alternatives: patching kernel code
Oct 2 19:30:40.791531 kernel: devtmpfs: initialized
Oct 2 19:30:40.791538 kernel: KASLR enabled
Oct 2 19:30:40.791545 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 2 19:30:40.791552 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Oct 2 19:30:40.791559 kernel: pinctrl core: initialized pinctrl subsystem
Oct 2 19:30:40.791566 kernel: SMBIOS 3.0.0 present.
Oct 2 19:30:40.791573 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Oct 2 19:30:40.791580 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 2 19:30:40.791587 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Oct 2 19:30:40.791595 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Oct 2 19:30:40.791603 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Oct 2 19:30:40.791610 kernel: audit: initializing netlink subsys (disabled)
Oct 2 19:30:40.791617 kernel: audit: type=2000 audit(0.044:1): state=initialized audit_enabled=0 res=1
Oct 2 19:30:40.791624 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 2 19:30:40.791631 kernel: cpuidle: using governor menu
Oct 2 19:30:40.791637 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Oct 2 19:30:40.791645 kernel: ASID allocator initialised with 32768 entries
Oct 2 19:30:40.791652 kernel: ACPI: bus type PCI registered
Oct 2 19:30:40.791659 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 2 19:30:40.791668 kernel: Serial: AMBA PL011 UART driver
Oct 2 19:30:40.791674 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 2 19:30:40.791681 kernel: HugeTLB registered 32.0 MiB page size, pre-allocated 0 pages
Oct 2 19:30:40.791695 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 2 19:30:40.791703 kernel: HugeTLB registered 64.0 KiB page size, pre-allocated 0 pages
Oct 2 19:30:40.791710 kernel: cryptd: max_cpu_qlen set to 1000
Oct 2 19:30:40.791717 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Oct 2 19:30:40.791724 kernel: ACPI: Added _OSI(Module Device)
Oct 2 19:30:40.791731 kernel: ACPI: Added _OSI(Processor Device)
Oct 2 19:30:40.791740 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 2 19:30:40.791747 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 2 19:30:40.791754 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 2 19:30:40.791761 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 2 19:30:40.791768 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 2 19:30:40.791775 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Oct 2 19:30:40.791782 kernel: ACPI: Interpreter enabled
Oct 2 19:30:40.791789 kernel: ACPI: Using GIC for interrupt routing
Oct 2 19:30:40.791796 kernel: ACPI: MCFG table detected, 1 entries
Oct 2 19:30:40.791804 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Oct 2 19:30:40.791811 kernel: printk: console [ttyAMA0] enabled
Oct 2 19:30:40.791818 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 2 19:30:40.793028 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Oct 2 19:30:40.793109 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Oct 2 19:30:40.793173 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Oct 2 19:30:40.793257 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Oct 2 19:30:40.793330 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Oct 2 19:30:40.793339 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Oct 2 19:30:40.793346 kernel: PCI host bridge to bus 0000:00
Oct 2 19:30:40.793419 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Oct 2 19:30:40.793477 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Oct 2 19:30:40.793533 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Oct 2 19:30:40.793589 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 2 19:30:40.793670 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000
Oct 2 19:30:40.793774 kernel: pci 0000:00:01.0: [1af4:1005] type 00 class 0x00ff00
Oct 2 19:30:40.793842 kernel: pci 0000:00:01.0: reg 0x10: [io 0x0000-0x001f]
Oct 2 19:30:40.793907 kernel: pci 0000:00:01.0: reg 0x14: [mem 0x10000000-0x10000fff]
Oct 2 19:30:40.794010 kernel: pci 0000:00:01.0: reg 0x20: [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 2 19:30:40.794105 kernel: pci 0000:00:01.0: BAR 4: assigned [mem 0x8000000000-0x8000003fff 64bit pref]
Oct 2 19:30:40.794172 kernel: pci 0000:00:01.0: BAR 1: assigned [mem 0x10000000-0x10000fff]
Oct 2 19:30:40.794250 kernel: pci 0000:00:01.0: BAR 0: assigned [io 0x1000-0x101f]
Oct 2 19:30:40.794309 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window]
Oct 2 19:30:40.794361 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Oct 2 19:30:40.794429 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window]
Oct 2 19:30:40.794438 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Oct 2 19:30:40.794445 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Oct 2 19:30:40.794452 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Oct 2 19:30:40.794462 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Oct 2 19:30:40.794468 kernel: iommu: Default domain type: Translated
Oct 2 19:30:40.794475 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Oct 2 19:30:40.794482 kernel: vgaarb: loaded
Oct 2 19:30:40.794489 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 2 19:30:40.794497 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 2 19:30:40.794504 kernel: PTP clock support registered
Oct 2 19:30:40.794511 kernel: Registered efivars operations
Oct 2 19:30:40.794517 kernel: clocksource: Switched to clocksource arch_sys_counter
Oct 2 19:30:40.794524 kernel: VFS: Disk quotas dquot_6.6.0
Oct 2 19:30:40.794533 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 2 19:30:40.794540 kernel: pnp: PnP ACPI init
Oct 2 19:30:40.794607 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved
Oct 2 19:30:40.794618 kernel: pnp: PnP ACPI: found 1 devices
Oct 2 19:30:40.794625 kernel: NET: Registered PF_INET protocol family
Oct 2 19:30:40.794632 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 2 19:30:40.794640 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Oct 2 19:30:40.794647 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 2 19:30:40.794656 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Oct 2 19:30:40.794664 kernel: TCP bind hash table entries: 32768 (order: 7, 524288 bytes, linear)
Oct 2 19:30:40.794670 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Oct 2 19:30:40.794677 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 2 19:30:40.794685 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Oct 2 19:30:40.794697 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 2 19:30:40.794704 kernel: PCI: CLS 0 bytes, default 64
Oct 2 19:30:40.794711 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 counters available
Oct 2 19:30:40.794720 kernel: kvm [1]: HYP mode not available
Oct 2 19:30:40.794727 kernel: Initialise system trusted keyrings
Oct 2 19:30:40.794734 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Oct 2 19:30:40.794741 kernel: Key type asymmetric registered
Oct 2 19:30:40.794748 kernel: Asymmetric key parser 'x509' registered
Oct 2 19:30:40.794755 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 2 19:30:40.794762 kernel: io scheduler mq-deadline registered
Oct 2 19:30:40.794769 kernel: io scheduler kyber registered
Oct 2 19:30:40.794776 kernel: io scheduler bfq registered
Oct 2 19:30:40.794783 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Oct 2 19:30:40.794792 kernel: ACPI: button: Power Button [PWRB]
Oct 2 19:30:40.794799 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Oct 2 19:30:40.794865 kernel: virtio-pci 0000:00:01.0: enabling device (0005 -> 0007)
Oct 2 19:30:40.794875 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 2 19:30:40.794882 kernel: thunder_xcv, ver 1.0
Oct 2 19:30:40.794889 kernel: thunder_bgx, ver 1.0
Oct 2 19:30:40.794896 kernel: nicpf, ver 1.0
Oct 2 19:30:40.794902 kernel: nicvf, ver 1.0
Oct 2 19:30:40.794974 kernel: rtc-efi rtc-efi.0: registered as rtc0
Oct 2 19:30:40.795036 kernel: rtc-efi rtc-efi.0: setting system clock to 2023-10-02T19:30:40 UTC (1696275040)
Oct 2 19:30:40.795045 kernel: hid: raw HID events driver (C) Jiri Kosina
Oct 2 19:30:40.795052 kernel: NET: Registered PF_INET6 protocol family
Oct 2 19:30:40.795059 kernel: Segment Routing with IPv6
Oct 2 19:30:40.795065 kernel: In-situ OAM (IOAM) with IPv6
Oct 2 19:30:40.795072 kernel: NET: Registered PF_PACKET protocol family
Oct 2 19:30:40.795079 kernel: Key type dns_resolver registered
Oct 2 19:30:40.795086 kernel: registered taskstats version 1
Oct 2 19:30:40.795094 kernel: Loading compiled-in X.509 certificates
Oct 2 19:30:40.795101 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 3a2a38edc68cb70dc60ec0223a6460557b3bb28d'
Oct 2 19:30:40.795108 kernel: Key type .fscrypt registered
Oct 2 19:30:40.795115 kernel: Key type fscrypt-provisioning registered
Oct 2 19:30:40.795123 kernel: ima: No TPM chip found, activating TPM-bypass!
Oct 2 19:30:40.795130 kernel: ima: Allocated hash algorithm: sha1
Oct 2 19:30:40.795136 kernel: ima: No architecture policies found
Oct 2 19:30:40.795143 kernel: Freeing unused kernel memory: 34560K
Oct 2 19:30:40.795150 kernel: Run /init as init process
Oct 2 19:30:40.795158 kernel: with arguments:
Oct 2 19:30:40.795166 kernel: /init
Oct 2 19:30:40.795172 kernel: with environment:
Oct 2 19:30:40.795179 kernel: HOME=/
Oct 2 19:30:40.795185 kernel: TERM=linux
Oct 2 19:30:40.795192 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 2 19:30:40.795200 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 2 19:30:40.795210 systemd[1]: Detected virtualization kvm.
Oct 2 19:30:40.795230 systemd[1]: Detected architecture arm64.
Oct 2 19:30:40.795238 systemd[1]: Running in initrd.
Oct 2 19:30:40.795245 systemd[1]: No hostname configured, using default hostname.
Oct 2 19:30:40.795252 systemd[1]: Hostname set to .
Oct 2 19:30:40.795259 systemd[1]: Initializing machine ID from VM UUID.
Oct 2 19:30:40.795266 systemd[1]: Queued start job for default target initrd.target.
Oct 2 19:30:40.795273 systemd[1]: Started systemd-ask-password-console.path.
Oct 2 19:30:40.795280 systemd[1]: Reached target cryptsetup.target.
Oct 2 19:30:40.795290 systemd[1]: Reached target paths.target.
Oct 2 19:30:40.795297 systemd[1]: Reached target slices.target.
Oct 2 19:30:40.795304 systemd[1]: Reached target swap.target.
Oct 2 19:30:40.795311 systemd[1]: Reached target timers.target.
Oct 2 19:30:40.795318 systemd[1]: Listening on iscsid.socket.
Oct 2 19:30:40.795326 systemd[1]: Listening on iscsiuio.socket.
Oct 2 19:30:40.795334 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 2 19:30:40.795343 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 2 19:30:40.795359 systemd[1]: Listening on systemd-journald.socket.
Oct 2 19:30:40.795370 systemd[1]: Listening on systemd-networkd.socket.
Oct 2 19:30:40.795378 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 2 19:30:40.795386 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 2 19:30:40.795393 systemd[1]: Reached target sockets.target.
Oct 2 19:30:40.795401 systemd[1]: Starting kmod-static-nodes.service...
Oct 2 19:30:40.795409 systemd[1]: Finished network-cleanup.service.
Oct 2 19:30:40.795417 systemd[1]: Starting systemd-fsck-usr.service...
Oct 2 19:30:40.795426 systemd[1]: Starting systemd-journald.service...
Oct 2 19:30:40.795434 systemd[1]: Starting systemd-modules-load.service...
Oct 2 19:30:40.795441 systemd[1]: Starting systemd-resolved.service...
Oct 2 19:30:40.795449 systemd[1]: Starting systemd-vconsole-setup.service...
Oct 2 19:30:40.795456 systemd[1]: Finished kmod-static-nodes.service.
Oct 2 19:30:40.795464 systemd[1]: Finished systemd-fsck-usr.service.
Oct 2 19:30:40.795471 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 2 19:30:40.795478 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 2 19:30:40.795486 systemd[1]: Finished systemd-vconsole-setup.service.
Oct 2 19:30:40.795495 systemd[1]: Starting dracut-cmdline-ask.service...
Oct 2 19:30:40.795502 kernel: audit: type=1130 audit(1696275040.785:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.795510 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 2 19:30:40.795522 systemd-journald[291]: Journal started
Oct 2 19:30:40.795571 systemd-journald[291]: Runtime Journal (/run/log/journal/7add46a514244493a63f167e029c98f6) is 6.0M, max 48.7M, 42.6M free.
Oct 2 19:30:40.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.763196 systemd-modules-load[292]: Inserted module 'overlay'
Oct 2 19:30:40.797132 systemd[1]: Started systemd-journald.service.
Oct 2 19:30:40.797363 systemd-resolved[293]: Positive Trust Anchors:
Oct 2 19:30:40.801188 kernel: Bridge firewalling registered
Oct 2 19:30:40.801207 kernel: audit: type=1130 audit(1696275040.798:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.797371 systemd-resolved[293]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 2 19:30:40.797399 systemd-resolved[293]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 2 19:30:40.797852 systemd-modules-load[292]: Inserted module 'br_netfilter'
Oct 2 19:30:40.809274 kernel: audit: type=1130 audit(1696275040.806:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.804589 systemd-resolved[293]: Defaulting to hostname 'linux'.
Oct 2 19:30:40.805571 systemd[1]: Started systemd-resolved.service.
Oct 2 19:30:40.806921 systemd[1]: Reached target nss-lookup.target.
Oct 2 19:30:40.813258 kernel: SCSI subsystem initialized
Oct 2 19:30:40.814099 systemd[1]: Finished dracut-cmdline-ask.service.
Oct 2 19:30:40.815630 systemd[1]: Starting dracut-cmdline.service...
Oct 2 19:30:40.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.819251 kernel: audit: type=1130 audit(1696275040.814:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.819279 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 2 19:30:40.820344 kernel: device-mapper: uevent: version 1.0.3
Oct 2 19:30:40.821249 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Oct 2 19:30:40.823421 systemd-modules-load[292]: Inserted module 'dm_multipath'
Oct 2 19:30:40.824249 systemd[1]: Finished systemd-modules-load.service.
Oct 2 19:30:40.824000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.827136 systemd[1]: Starting systemd-sysctl.service...
Oct 2 19:30:40.828132 kernel: audit: type=1130 audit(1696275040.824:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.830675 dracut-cmdline[308]: dracut-dracut-053
Oct 2 19:30:40.833605 dracut-cmdline[308]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200 flatcar.first_boot=detected acpi=force verity.usrhash=684fe6a2259d7fb96810743ab87aaaa03d9f185b113bd6990a64d1079e5672ca
Oct 2 19:30:40.836572 systemd[1]: Finished systemd-sysctl.service.
Oct 2 19:30:40.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.840243 kernel: audit: type=1130 audit(1696275040.836:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.909242 kernel: Loading iSCSI transport class v2.0-870.
Oct 2 19:30:40.919238 kernel: iscsi: registered transport (tcp)
Oct 2 19:30:40.934240 kernel: iscsi: registered transport (qla4xxx)
Oct 2 19:30:40.934258 kernel: QLogic iSCSI HBA Driver
Oct 2 19:30:40.981565 systemd[1]: Finished dracut-cmdline.service.
Oct 2 19:30:40.981000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:40.983095 systemd[1]: Starting dracut-pre-udev.service...
Oct 2 19:30:40.985462 kernel: audit: type=1130 audit(1696275040.981:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:41.037252 kernel: raid6: neonx8 gen() 12823 MB/s
Oct 2 19:30:41.054269 kernel: raid6: neonx8 xor() 10742 MB/s
Oct 2 19:30:41.071332 kernel: raid6: neonx4 gen() 13072 MB/s
Oct 2 19:30:41.088263 kernel: raid6: neonx4 xor() 10746 MB/s
Oct 2 19:30:41.105265 kernel: raid6: neonx2 gen() 11808 MB/s
Oct 2 19:30:41.123735 kernel: raid6: neonx2 xor() 10287 MB/s
Oct 2 19:30:41.139264 kernel: raid6: neonx1 gen() 10385 MB/s
Oct 2 19:30:41.157734 kernel: raid6: neonx1 xor() 8737 MB/s
Oct 2 19:30:41.173263 kernel: raid6: int64x8 gen() 6253 MB/s
Oct 2 19:30:41.190257 kernel: raid6: int64x8 xor() 3533 MB/s
Oct 2 19:30:41.207266 kernel: raid6: int64x4 gen() 6770 MB/s
Oct 2 19:30:41.224248 kernel: raid6: int64x4 xor() 3765 MB/s
Oct 2 19:30:41.241270 kernel: raid6: int64x2 gen() 6011 MB/s
Oct 2 19:30:41.258262 kernel: raid6: int64x2 xor() 3141 MB/s
Oct 2 19:30:41.275259 kernel: raid6: int64x1 gen() 4769 MB/s
Oct 2 19:30:41.292454 kernel: raid6: int64x1 xor() 2590 MB/s
Oct 2 19:30:41.292502 kernel: raid6: using algorithm neonx4 gen() 13072 MB/s
Oct 2 19:30:41.292512 kernel: raid6: .... xor() 10746 MB/s, rmw enabled
Oct 2 19:30:41.292520 kernel: raid6: using neon recovery algorithm
Oct 2 19:30:41.305369 kernel: xor: measuring software checksum speed
Oct 2 19:30:41.305405 kernel: 8regs : 17300 MB/sec
Oct 2 19:30:41.306353 kernel: 32regs : 20755 MB/sec
Oct 2 19:30:41.307489 kernel: arm64_neon : 27760 MB/sec
Oct 2 19:30:41.307504 kernel: xor: using function: arm64_neon (27760 MB/sec)
Oct 2 19:30:41.371264 kernel: Btrfs loaded, crc32c=crc32c-generic, zoned=no, fsverity=no
Oct 2 19:30:41.390092 systemd[1]: Finished dracut-pre-udev.service.
Oct 2 19:30:41.393646 kernel: audit: type=1130 audit(1696275041.390:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:41.393669 kernel: audit: type=1334 audit(1696275041.392:10): prog-id=7 op=LOAD
Oct 2 19:30:41.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:41.392000 audit: BPF prog-id=7 op=LOAD
Oct 2 19:30:41.393000 audit: BPF prog-id=8 op=LOAD
Oct 2 19:30:41.394175 systemd[1]: Starting systemd-udevd.service...
Oct 2 19:30:41.410575 systemd-udevd[490]: Using default interface naming scheme 'v252'.
Oct 2 19:30:41.413969 systemd[1]: Started systemd-udevd.service.
Oct 2 19:30:41.414000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:41.417465 systemd[1]: Starting dracut-pre-trigger.service...
Oct 2 19:30:41.433168 dracut-pre-trigger[504]: rd.md=0: removing MD RAID activation
Oct 2 19:30:41.485741 systemd[1]: Finished dracut-pre-trigger.service.
Oct 2 19:30:41.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:41.487867 systemd[1]: Starting systemd-udev-trigger.service...
Oct 2 19:30:41.527421 systemd[1]: Finished systemd-udev-trigger.service.
Oct 2 19:30:41.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:41.562361 kernel: virtio_blk virtio1: [vda] 9289728 512-byte logical blocks (4.76 GB/4.43 GiB)
Oct 2 19:30:41.577247 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:30:41.601349 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/vda6 scanned by (udev-worker) (538)
Oct 2 19:30:41.608647 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Oct 2 19:30:41.612527 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Oct 2 19:30:41.614020 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Oct 2 19:30:41.617829 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Oct 2 19:30:41.621743 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 2 19:30:41.623605 systemd[1]: Starting disk-uuid.service...
Oct 2 19:30:41.644246 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:30:41.648258 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:30:42.651245 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Oct 2 19:30:42.651643 disk-uuid[564]: The operation has completed successfully.
Oct 2 19:30:42.685828 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 2 19:30:42.685928 systemd[1]: Finished disk-uuid.service.
Oct 2 19:30:42.687353 systemd[1]: Starting verity-setup.service...
Oct 2 19:30:42.686000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:42.686000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:42.706259 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Oct 2 19:30:42.740507 systemd[1]: Found device dev-mapper-usr.device.
Oct 2 19:30:42.742693 systemd[1]: Mounting sysusr-usr.mount...
Oct 2 19:30:42.745600 systemd[1]: Finished verity-setup.service.
Oct 2 19:30:42.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:42.811244 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Oct 2 19:30:42.813302 systemd[1]: Mounted sysusr-usr.mount.
Oct 2 19:30:42.814546 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Oct 2 19:30:42.815487 systemd[1]: Starting ignition-setup.service...
Oct 2 19:30:42.816753 systemd[1]: Starting parse-ip-for-networkd.service...
Oct 2 19:30:42.826423 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 2 19:30:42.826478 kernel: BTRFS info (device vda6): using free space tree
Oct 2 19:30:42.827227 kernel: BTRFS info (device vda6): has skinny extents
Oct 2 19:30:42.838022 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 2 19:30:42.847263 systemd[1]: Finished ignition-setup.service.
Oct 2 19:30:42.848805 systemd[1]: Starting ignition-fetch-offline.service...
Oct 2 19:30:42.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:42.938484 systemd[1]: Finished parse-ip-for-networkd.service.
Oct 2 19:30:42.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:42.940000 audit: BPF prog-id=9 op=LOAD
Oct 2 19:30:42.941808 systemd[1]: Starting systemd-networkd.service...
Oct 2 19:30:42.952826 ignition[646]: Ignition 2.14.0
Oct 2 19:30:42.952836 ignition[646]: Stage: fetch-offline
Oct 2 19:30:42.952878 ignition[646]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:30:42.952887 ignition[646]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:30:42.953066 ignition[646]: parsed url from cmdline: ""
Oct 2 19:30:42.953069 ignition[646]: no config URL provided
Oct 2 19:30:42.953074 ignition[646]: reading system config file "/usr/lib/ignition/user.ign"
Oct 2 19:30:42.953081 ignition[646]: no config at "/usr/lib/ignition/user.ign"
Oct 2 19:30:42.953099 ignition[646]: op(1): [started] loading QEMU firmware config module
Oct 2 19:30:42.953105 ignition[646]: op(1): executing: "modprobe" "qemu_fw_cfg"
Oct 2 19:30:42.968421 systemd-networkd[739]: lo: Link UP
Oct 2 19:30:42.968434 systemd-networkd[739]: lo: Gained carrier
Oct 2 19:30:42.968836 systemd-networkd[739]: Enumeration completed
Oct 2 19:30:42.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:42.969887 ignition[646]: op(1): [finished] loading QEMU firmware config module
Oct 2 19:30:42.969018 systemd-networkd[739]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
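The fetch-offline entries above show Ignition walking a first-match fallback chain: a config URL parsed from the kernel command line, then the baked-in system config file, and finally a platform-specific source (here the QEMU firmware config module). A rough sketch of that lookup order, using the paths from the log; the helper itself is illustrative, not Ignition's code:

```python
import os

def find_config(cmdline_url, search_paths):
    """Return the first available config source, or None to fall
    through to a platform source such as qemu_fw_cfg."""
    if cmdline_url:
        return ("url", cmdline_url)
    for path in search_paths:
        if os.path.exists(path):
            return ("file", path)
    return None

# On this boot: no cmdline URL, no user.ign, so the platform source is tried.
source = find_config("", ["/usr/lib/ignition/user.ign"])
print(source)
```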
Oct 2 19:30:42.969338 systemd[1]: Started systemd-networkd.service.
Oct 2 19:30:42.970892 systemd[1]: Reached target network.target.
Oct 2 19:30:42.974643 systemd-networkd[739]: eth0: Link UP
Oct 2 19:30:42.974647 systemd-networkd[739]: eth0: Gained carrier
Oct 2 19:30:42.975956 systemd[1]: Starting iscsiuio.service...
Oct 2 19:30:42.987167 systemd[1]: Started iscsiuio.service.
Oct 2 19:30:42.987000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:42.989395 systemd[1]: Starting iscsid.service...
Oct 2 19:30:42.993811 iscsid[745]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:30:42.993811 iscsid[745]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Oct 2 19:30:42.993811 iscsid[745]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Oct 2 19:30:42.993811 iscsid[745]: If using hardware iscsi like qla4xxx this message can be ignored.
Oct 2 19:30:42.993811 iscsid[745]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:30:42.993811 iscsid[745]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Oct 2 19:30:42.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.001036 ignition[646]: parsing config with SHA512: deef185ab5b579fc9e52dcc5ea3092ae4a5321c5d8f5b8ea4aad9f0c1cfcafa759f7bc07467229b396c71f1d4a5ca7e982a852d0b0b86a30bac29057fe5f6b9a
Oct 2 19:30:42.998322 systemd[1]: Started iscsid.service.
Oct 2 19:30:43.000294 systemd-networkd[739]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1
Oct 2 19:30:43.002858 systemd[1]: Starting dracut-initqueue.service...
Oct 2 19:30:43.017689 systemd[1]: Finished dracut-initqueue.service.
Oct 2 19:30:43.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.019048 systemd[1]: Reached target remote-fs-pre.target.
Oct 2 19:30:43.023589 systemd[1]: Reached target remote-cryptsetup.target.
Oct 2 19:30:43.027126 systemd[1]: Reached target remote-fs.target.
Oct 2 19:30:43.030424 systemd[1]: Starting dracut-pre-mount.service...
Oct 2 19:30:43.034856 unknown[646]: fetched base config from "system"
Oct 2 19:30:43.034868 unknown[646]: fetched user config from "qemu"
Oct 2 19:30:43.035620 ignition[646]: fetch-offline: fetch-offline passed
Oct 2 19:30:43.037169 systemd[1]: Finished ignition-fetch-offline.service.
Oct 2 19:30:43.035710 ignition[646]: Ignition finished successfully
Oct 2 19:30:43.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.040672 systemd[1]: Finished dracut-pre-mount.service.
Oct 2 19:30:43.040000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.041352 systemd[1]: ignition-fetch.service was skipped because of an unmet condition check (ConditionPathExists=!/run/ignition.json).
Oct 2 19:30:43.042172 systemd[1]: Starting ignition-kargs.service...
Oct 2 19:30:43.052756 ignition[760]: Ignition 2.14.0
Oct 2 19:30:43.052766 ignition[760]: Stage: kargs
Oct 2 19:30:43.052885 ignition[760]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:30:43.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.055363 systemd[1]: Finished ignition-kargs.service.
Oct 2 19:30:43.052895 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:30:43.056842 systemd[1]: Starting ignition-disks.service...
Oct 2 19:30:43.053717 ignition[760]: kargs: kargs passed
Oct 2 19:30:43.053763 ignition[760]: Ignition finished successfully
Oct 2 19:30:43.065845 ignition[767]: Ignition 2.14.0
Oct 2 19:30:43.065856 ignition[767]: Stage: disks
Oct 2 19:30:43.065962 ignition[767]: no configs at "/usr/lib/ignition/base.d"
Oct 2 19:30:43.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.068943 systemd[1]: Finished ignition-disks.service.
Oct 2 19:30:43.065972 ignition[767]: no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:30:43.069690 systemd[1]: Reached target initrd-root-device.target.
Oct 2 19:30:43.067208 ignition[767]: disks: disks passed
Oct 2 19:30:43.070335 systemd[1]: Reached target local-fs-pre.target.
Oct 2 19:30:43.067287 ignition[767]: Ignition finished successfully
Oct 2 19:30:43.070901 systemd[1]: Reached target local-fs.target.
Oct 2 19:30:43.071424 systemd[1]: Reached target sysinit.target.
Oct 2 19:30:43.072499 systemd[1]: Reached target basic.target.
Oct 2 19:30:43.074587 systemd[1]: Starting systemd-fsck-root.service...
Oct 2 19:30:43.086351 systemd-resolved[293]: Detected conflict on linux IN A 10.0.0.12
Oct 2 19:30:43.086367 systemd-resolved[293]: Hostname conflict, changing published hostname from 'linux' to 'linux10'.
Oct 2 19:30:43.091558 systemd-fsck[775]: ROOT: clean, 603/553520 files, 56011/553472 blocks
Oct 2 19:30:43.120787 systemd[1]: Finished systemd-fsck-root.service.
Oct 2 19:30:43.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.123308 systemd[1]: Mounting sysroot.mount...
Oct 2 19:30:43.153238 kernel: EXT4-fs (vda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Oct 2 19:30:43.153262 systemd[1]: Mounted sysroot.mount.
Oct 2 19:30:43.153878 systemd[1]: Reached target initrd-root-fs.target.
Oct 2 19:30:43.155985 systemd[1]: Mounting sysroot-usr.mount...
Oct 2 19:30:43.157100 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Oct 2 19:30:43.157299 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 2 19:30:43.157332 systemd[1]: Reached target ignition-diskful.target.
Oct 2 19:30:43.162621 systemd[1]: Mounted sysroot-usr.mount.
Oct 2 19:30:43.165301 systemd[1]: Starting initrd-setup-root.service...
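The iscsid warnings earlier in the log spell out their own fix: create `/etc/iscsi/initiatorname.iscsi` containing a single `InitiatorName=` line in `iqn.yyyy-mm.<reversed domain name>[:identifier]` form. A minimal example of that file's contents (the IQN value here is illustrative, not a value from this host):

```
InitiatorName=iqn.2004-10.com.example:node1
```

On this boot the warning is harmless, since no software-iSCSI targets are logged into before the initrd tears iscsid back down.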
Oct 2 19:30:43.171745 initrd-setup-root[785]: cut: /sysroot/etc/passwd: No such file or directory
Oct 2 19:30:43.176868 initrd-setup-root[793]: cut: /sysroot/etc/group: No such file or directory
Oct 2 19:30:43.181859 initrd-setup-root[801]: cut: /sysroot/etc/shadow: No such file or directory
Oct 2 19:30:43.186164 initrd-setup-root[809]: cut: /sysroot/etc/gshadow: No such file or directory
Oct 2 19:30:43.224935 systemd[1]: Finished initrd-setup-root.service.
Oct 2 19:30:43.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.226375 systemd[1]: Starting ignition-mount.service...
Oct 2 19:30:43.227553 systemd[1]: Starting sysroot-boot.service...
Oct 2 19:30:43.233334 bash[826]: umount: /sysroot/usr/share/oem: not mounted.
Oct 2 19:30:43.244294 ignition[828]: INFO : Ignition 2.14.0
Oct 2 19:30:43.244294 ignition[828]: INFO : Stage: mount
Oct 2 19:30:43.246447 ignition[828]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:30:43.246447 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:30:43.246447 ignition[828]: INFO : mount: mount passed
Oct 2 19:30:43.246447 ignition[828]: INFO : Ignition finished successfully
Oct 2 19:30:43.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:43.246506 systemd[1]: Finished sysroot-boot.service.
Oct 2 19:30:43.247847 systemd[1]: Finished ignition-mount.service.
Oct 2 19:30:43.765618 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Oct 2 19:30:43.776245 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/vda6 scanned by mount (836)
Oct 2 19:30:43.777692 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Oct 2 19:30:43.777724 kernel: BTRFS info (device vda6): using free space tree
Oct 2 19:30:43.777734 kernel: BTRFS info (device vda6): has skinny extents
Oct 2 19:30:43.781182 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Oct 2 19:30:43.782562 systemd[1]: Starting ignition-files.service...
Oct 2 19:30:43.799460 ignition[856]: INFO : Ignition 2.14.0
Oct 2 19:30:43.799460 ignition[856]: INFO : Stage: files
Oct 2 19:30:43.800666 ignition[856]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:30:43.800666 ignition[856]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:30:43.800666 ignition[856]: DEBUG : files: compiled without relabeling support, skipping
Oct 2 19:30:43.805817 ignition[856]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Oct 2 19:30:43.805817 ignition[856]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Oct 2 19:30:43.808733 ignition[856]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Oct 2 19:30:43.809704 ignition[856]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Oct 2 19:30:43.810732 ignition[856]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Oct 2 19:30:43.810214 unknown[856]: wrote ssh authorized keys file for user: core
Oct 2 19:30:43.812934 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Oct 2 19:30:43.812934 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-arm64-v1.1.1.tgz: attempt #1
Oct 2 19:30:43.986464 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Oct 2 19:30:44.232832 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 6b5df61a53601926e4b5a9174828123d555f592165439f541bc117c68781f41c8bd30dccd52367e406d104df849bcbcfb72d9c4bafda4b045c59ce95d0ca0742
Oct 2 19:30:44.234980 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-arm64-v1.1.1.tgz"
Oct 2 19:30:44.234980 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Oct 2 19:30:44.234980 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-arm64.tar.gz: attempt #1
Oct 2 19:30:44.389243 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Oct 2 19:30:44.508630 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4c7e4541123cbd6f1d6fec1f827395cd58d65716c0998de790f965485738b6d6257c0dc46fd7f66403166c299f6d5bf9ff30b6e1ff9afbb071f17005e834518c
Oct 2 19:30:44.511212 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-arm64.tar.gz"
Oct 2 19:30:44.511212 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Oct 2 19:30:44.511212 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/arm64/kubeadm: attempt #1
Oct 2 19:30:44.567976 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Oct 2 19:30:44.729431 systemd-networkd[739]: eth0: Gained IPv6LL
Oct 2 19:30:45.003127 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 46c9f489062bdb84574703f7339d140d7e42c9c71b367cd860071108a3c1d38fabda2ef69f9c0ff88f7c80e88d38f96ab2248d4c9a6c9c60b0a4c20fd640d0db
Oct 2 19:30:45.003127 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Oct 2 19:30:45.006462 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Oct 2 19:30:45.006462 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://storage.googleapis.com/kubernetes-release/release/v1.26.5/bin/linux/arm64/kubelet: attempt #1
Oct 2 19:30:45.040434 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Oct 2 19:30:45.863618 ignition[856]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 0e4ee1f23bf768c49d09beb13a6b5fad6efc8e3e685e7c5610188763e3af55923fb46158b5e76973a0f9a055f9b30d525b467c53415f965536adc2f04d9cf18d
Oct 2 19:30:45.871518 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(9): [started] processing unit "prepare-cni-plugins.service"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(9): op(a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(9): op(a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(9): [finished] processing unit "prepare-cni-plugins.service"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(b): [started] processing unit "prepare-critools.service"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(b): op(c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(b): op(c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(b): [finished] processing unit "prepare-critools.service"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(d): op(e): [started] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(d): op(e): [finished] writing unit "coreos-metadata.service" at "/sysroot/etc/systemd/system/coreos-metadata.service"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Oct 2 19:30:45.871518 ignition[856]: INFO : files: op(f): [started] setting preset to enabled for "prepare-cni-plugins.service"
Oct 2 19:30:45.896812 ignition[856]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Oct 2 19:30:45.896812 ignition[856]: INFO : files: op(10): [started] setting preset to enabled for "prepare-critools.service"
Oct 2 19:30:45.896812 ignition[856]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-critools.service"
Oct 2 19:30:45.896812 ignition[856]: INFO : files: op(11): [started] setting preset to disabled for "coreos-metadata.service"
Oct 2 19:30:45.896812 ignition[856]: INFO : files: op(11): op(12): [started] removing enablement symlink(s) for "coreos-metadata.service"
Oct 2 19:30:45.945401 ignition[856]: INFO : files: op(11): op(12): [finished] removing enablement symlink(s) for "coreos-metadata.service"
Oct 2 19:30:45.946555 ignition[856]: INFO : files: op(11): [finished] setting preset to disabled for "coreos-metadata.service"
Oct 2 19:30:45.946555 ignition[856]: INFO : files: createResultFile: createFiles: op(13): [started] writing file "/sysroot/etc/.ignition-result.json"
Oct 2 19:30:45.946555 ignition[856]: INFO : files: createResultFile: createFiles: op(13): [finished] writing file "/sysroot/etc/.ignition-result.json"
Oct 2 19:30:45.946555 ignition[856]: INFO : files: files passed
Oct 2 19:30:45.946555 ignition[856]: INFO : Ignition finished successfully
Oct 2 19:30:45.955817 kernel: kauditd_printk_skb: 22 callbacks suppressed
Oct 2 19:30:45.955844 kernel: audit: type=1130 audit(1696275045.948:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:45.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:45.947129 systemd[1]: Finished ignition-files.service.
Oct 2 19:30:45.949473 systemd[1]: Starting initrd-setup-root-after-ignition.service...
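During the files stage above, Ignition checks every downloaded artifact (cni-plugins, crictl, kubeadm, kubelet) against an expected SHA-512 before writing it into /sysroot ("file matches expected sum of: ..."). The same verification can be sketched generically in Python; this is an illustration of the check, not Ignition's implementation:

```python
import hashlib

def sha512_matches(data: bytes, expected_hex: str) -> bool:
    """Hash the payload and compare against the expected sum,
    case-insensitively, as a download-integrity check."""
    return hashlib.sha512(data).hexdigest() == expected_hex.lower()

payload = b"example artifact bytes"
expected = hashlib.sha512(payload).hexdigest()
print(sha512_matches(payload, expected))      # True: sums agree
print(sha512_matches(b"tampered", expected))  # False: sums differ
```

For large artifacts like the kubelet binary, the digest would normally be fed in chunks via `hashlib.sha512().update()` rather than hashing one in-memory buffer.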
Oct 2 19:30:45.957616 initrd-setup-root-after-ignition[880]: grep: /sysroot/usr/share/oem/oem-release: No such file or directory
Oct 2 19:30:45.952891 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Oct 2 19:30:45.964400 kernel: audit: type=1130 audit(1696275045.959:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:45.964431 kernel: audit: type=1131 audit(1696275045.959:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:45.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:45.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:45.964533 initrd-setup-root-after-ignition[883]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Oct 2 19:30:45.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:45.953799 systemd[1]: Starting ignition-quench.service...
Oct 2 19:30:45.969121 kernel: audit: type=1130 audit(1696275045.964:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:45.958824 systemd[1]: ignition-quench.service: Deactivated successfully.
Oct 2 19:30:45.958918 systemd[1]: Finished ignition-quench.service.
Oct 2 19:30:45.960056 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Oct 2 19:30:45.965183 systemd[1]: Reached target ignition-complete.target.
Oct 2 19:30:45.969632 systemd[1]: Starting initrd-parse-etc.service...
Oct 2 19:30:45.987266 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Oct 2 19:30:45.987370 systemd[1]: Finished initrd-parse-etc.service.
Oct 2 19:30:45.992763 kernel: audit: type=1130 audit(1696275045.988:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:45.992788 kernel: audit: type=1131 audit(1696275045.988:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:45.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:45.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:45.988903 systemd[1]: Reached target initrd-fs.target.
Oct 2 19:30:45.993311 systemd[1]: Reached target initrd.target.
Oct 2 19:30:45.994441 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Oct 2 19:30:45.995314 systemd[1]: Starting dracut-pre-pivot.service...
Oct 2 19:30:46.012904 systemd[1]: Finished dracut-pre-pivot.service.
Oct 2 19:30:46.016258 kernel: audit: type=1130 audit(1696275046.013:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.013000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.014516 systemd[1]: Starting initrd-cleanup.service...
Oct 2 19:30:46.024880 systemd[1]: Stopped target nss-lookup.target.
Oct 2 19:30:46.025597 systemd[1]: Stopped target remote-cryptsetup.target.
Oct 2 19:30:46.026632 systemd[1]: Stopped target timers.target.
Oct 2 19:30:46.027587 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Oct 2 19:30:46.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.027719 systemd[1]: Stopped dracut-pre-pivot.service.
Oct 2 19:30:46.031697 kernel: audit: type=1131 audit(1696275046.028:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.028685 systemd[1]: Stopped target initrd.target.
Oct 2 19:30:46.031349 systemd[1]: Stopped target basic.target.
Oct 2 19:30:46.032233 systemd[1]: Stopped target ignition-complete.target.
Oct 2 19:30:46.033262 systemd[1]: Stopped target ignition-diskful.target.
Oct 2 19:30:46.034235 systemd[1]: Stopped target initrd-root-device.target.
Oct 2 19:30:46.035306 systemd[1]: Stopped target remote-fs.target.
Oct 2 19:30:46.036525 systemd[1]: Stopped target remote-fs-pre.target.
Oct 2 19:30:46.037502 systemd[1]: Stopped target sysinit.target.
Oct 2 19:30:46.038385 systemd[1]: Stopped target local-fs.target.
Oct 2 19:30:46.039412 systemd[1]: Stopped target local-fs-pre.target.
Oct 2 19:30:46.040460 systemd[1]: Stopped target swap.target.
Oct 2 19:30:46.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.041321 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Oct 2 19:30:46.045576 kernel: audit: type=1131 audit(1696275046.042:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.041434 systemd[1]: Stopped dracut-pre-mount.service.
Oct 2 19:30:46.042332 systemd[1]: Stopped target cryptsetup.target.
Oct 2 19:30:46.046000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.044828 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Oct 2 19:30:46.049978 kernel: audit: type=1131 audit(1696275046.046:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.044935 systemd[1]: Stopped dracut-initqueue.service.
Oct 2 19:30:46.046561 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Oct 2 19:30:46.046833 systemd[1]: Stopped ignition-fetch-offline.service.
Oct 2 19:30:46.049546 systemd[1]: Stopped target paths.target.
Oct 2 19:30:46.050499 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Oct 2 19:30:46.054250 systemd[1]: Stopped systemd-ask-password-console.path.
Oct 2 19:30:46.055797 systemd[1]: Stopped target slices.target.
Oct 2 19:30:46.056466 systemd[1]: Stopped target sockets.target.
Oct 2 19:30:46.057517 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Oct 2 19:30:46.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.057638 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Oct 2 19:30:46.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.058815 systemd[1]: ignition-files.service: Deactivated successfully.
Oct 2 19:30:46.058934 systemd[1]: Stopped ignition-files.service.
Oct 2 19:30:46.062249 iscsid[745]: iscsid shutting down.
Oct 2 19:30:46.061027 systemd[1]: Stopping ignition-mount.service...
Oct 2 19:30:46.061887 systemd[1]: Stopping iscsid.service...
Oct 2 19:30:46.063766 systemd[1]: Stopping sysroot-boot.service...
Oct 2 19:30:46.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.064347 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Oct 2 19:30:46.064490 systemd[1]: Stopped systemd-udev-trigger.service.
Oct 2 19:30:46.065586 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Oct 2 19:30:46.065715 systemd[1]: Stopped dracut-pre-trigger.service.
Oct 2 19:30:46.068266 systemd[1]: iscsid.service: Deactivated successfully.
Oct 2 19:30:46.068373 systemd[1]: Stopped iscsid.service.
Oct 2 19:30:46.066000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.069457 systemd[1]: iscsid.socket: Deactivated successfully.
Oct 2 19:30:46.069522 systemd[1]: Closed iscsid.socket.
Oct 2 19:30:46.074000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.070249 systemd[1]: Stopping iscsiuio.service...
Oct 2 19:30:46.075000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.072852 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Oct 2 19:30:46.076859 ignition[896]: INFO : Ignition 2.14.0
Oct 2 19:30:46.076859 ignition[896]: INFO : Stage: umount
Oct 2 19:30:46.076859 ignition[896]: INFO : no configs at "/usr/lib/ignition/base.d"
Oct 2 19:30:46.076859 ignition[896]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/qemu"
Oct 2 19:30:46.076859 ignition[896]: INFO : umount: umount passed
Oct 2 19:30:46.076859 ignition[896]: INFO : Ignition finished successfully
Oct 2 19:30:46.073182 systemd[1]: Finished initrd-cleanup.service.
Oct 2 19:30:46.074599 systemd[1]: iscsiuio.service: Deactivated successfully.
Oct 2 19:30:46.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:30:46.074691 systemd[1]: Stopped iscsiuio.service.
Oct 2 19:30:46.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.076318 systemd[1]: Stopped target network.target. Oct 2 19:30:46.077638 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:30:46.093000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.077682 systemd[1]: Closed iscsiuio.socket. Oct 2 19:30:46.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.081703 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:30:46.095000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.083627 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:30:46.085693 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:30:46.086125 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:30:46.098000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.086304 systemd[1]: Stopped ignition-mount.service. Oct 2 19:30:46.101000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:46.086463 systemd-networkd[739]: eth0: DHCPv6 lease lost Oct 2 19:30:46.101000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:30:46.089595 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:30:46.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.089692 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:30:46.091025 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:30:46.091057 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:30:46.091788 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:30:46.091831 systemd[1]: Stopped ignition-disks.service. Oct 2 19:30:46.093984 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:30:46.094028 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:30:46.095045 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:30:46.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.095085 systemd[1]: Stopped ignition-setup.service. Oct 2 19:30:46.096819 systemd[1]: Stopping network-cleanup.service... Oct 2 19:30:46.098014 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:30:46.116000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:30:46.098070 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:30:46.117000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.099268 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:30:46.099308 systemd[1]: Stopped systemd-sysctl.service. 
Oct 2 19:30:46.101967 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:30:46.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.102523 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:30:46.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.107201 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:30:46.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.110873 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:30:46.111403 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:30:46.111499 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:30:46.116451 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:30:46.129000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.116601 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:30:46.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:46.117771 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:30:46.117814 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:30:46.118895 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:30:46.118927 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:30:46.120079 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:30:46.120125 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:30:46.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.122128 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:30:46.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.122174 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:30:46.124203 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:30:46.124273 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:30:46.126077 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:30:46.128233 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Oct 2 19:30:46.128390 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Oct 2 19:30:46.130707 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:30:46.130756 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:30:46.131468 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Oct 2 19:30:46.131581 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:30:46.133631 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Oct 2 19:30:46.135192 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:30:46.135937 systemd[1]: Stopped network-cleanup.service. Oct 2 19:30:46.137697 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:30:46.137783 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:30:46.154045 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:30:46.154141 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:30:46.155425 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:30:46.156311 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:30:46.156355 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:30:46.158203 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:30:46.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:46.165906 systemd[1]: Switching root. Oct 2 19:30:46.182607 systemd-journald[291]: Journal stopped Oct 2 19:30:48.360470 systemd-journald[291]: Received SIGTERM from PID 1 (systemd). Oct 2 19:30:48.360582 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:30:48.360596 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 2 19:30:48.360607 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:30:48.360619 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:30:48.360629 kernel: SELinux: policy capability open_perms=1 Oct 2 19:30:48.360638 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:30:48.360648 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:30:48.360664 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:30:48.360673 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:30:48.360682 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:30:48.360692 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:30:48.360702 systemd[1]: Successfully loaded SELinux policy in 39.492ms. Oct 2 19:30:48.360726 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 7.641ms. Oct 2 19:30:48.360739 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:30:48.360750 systemd[1]: Detected virtualization kvm. Oct 2 19:30:48.360769 systemd[1]: Detected architecture arm64. Oct 2 19:30:48.360782 systemd[1]: Detected first boot. Oct 2 19:30:48.360792 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:30:48.360803 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:30:48.360813 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:30:48.360826 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Oct 2 19:30:48.360838 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:30:48.360850 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:30:48.360860 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:30:48.360876 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:30:48.360892 systemd[1]: Created slice system-addon\x2dconfig.slice. Oct 2 19:30:48.360904 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:30:48.360915 systemd[1]: Created slice system-getty.slice. Oct 2 19:30:48.360926 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:30:48.360945 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:30:48.360956 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:30:48.360966 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:30:48.360976 systemd[1]: Created slice user.slice. Oct 2 19:30:48.360987 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:30:48.360997 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:30:48.361009 systemd[1]: Set up automount boot.automount. Oct 2 19:30:48.361019 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:30:48.361030 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:30:48.361040 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:30:48.361050 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:30:48.361061 systemd[1]: Reached target integritysetup.target. Oct 2 19:30:48.361080 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:30:48.361092 systemd[1]: Reached target remote-fs.target. Oct 2 19:30:48.361103 systemd[1]: Reached target slices.target. Oct 2 19:30:48.361114 systemd[1]: Reached target swap.target. 
Oct 2 19:30:48.361124 systemd[1]: Reached target torcx.target. Oct 2 19:30:48.361135 systemd[1]: Reached target veritysetup.target. Oct 2 19:30:48.361145 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:30:48.361155 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:30:48.361165 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:30:48.361176 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:30:48.361186 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:30:48.361197 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:30:48.361208 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:30:48.361222 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:30:48.361233 systemd[1]: Mounting media.mount... Oct 2 19:30:48.361244 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:30:48.361254 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:30:48.361264 systemd[1]: Mounting tmp.mount... Oct 2 19:30:48.361274 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:30:48.361285 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:30:48.361296 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:30:48.361307 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:30:48.361318 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:30:48.361328 systemd[1]: Starting modprobe@drm.service... Oct 2 19:30:48.361339 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:30:48.361349 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:30:48.361360 systemd[1]: Starting modprobe@loop.service... Oct 2 19:30:48.361370 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:30:48.361381 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 19:30:48.361392 systemd[1]: Stopped systemd-fsck-root.service. 
Oct 2 19:30:48.361402 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:30:48.361413 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:30:48.361423 systemd[1]: Stopped systemd-journald.service. Oct 2 19:30:48.361433 systemd[1]: Starting systemd-journald.service... Oct 2 19:30:48.361443 kernel: fuse: init (API version 7.34) Oct 2 19:30:48.361452 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:30:48.361463 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:30:48.361473 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:30:48.361484 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:30:48.361496 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:30:48.361506 systemd[1]: Stopped verity-setup.service. Oct 2 19:30:48.361516 kernel: loop: module loaded Oct 2 19:30:48.361526 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:30:48.361536 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:30:48.361546 systemd[1]: Mounted media.mount. Oct 2 19:30:48.361556 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:30:48.361566 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:30:48.361577 systemd[1]: Mounted tmp.mount. Oct 2 19:30:48.361589 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:30:48.361599 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:30:48.361610 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:30:48.361620 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:30:48.361631 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:30:48.361641 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:30:48.361652 systemd[1]: Finished modprobe@drm.service. Oct 2 19:30:48.361667 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:30:48.361678 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:30:48.361689 systemd[1]: modprobe@fuse.service: Deactivated successfully. 
Oct 2 19:30:48.361699 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:30:48.361711 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:30:48.361721 systemd[1]: Finished modprobe@loop.service. Oct 2 19:30:48.361731 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:30:48.362005 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:30:48.362026 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:30:48.362037 systemd[1]: Reached target network-pre.target. Oct 2 19:30:48.362051 systemd-journald[991]: Journal started Oct 2 19:30:48.362115 systemd-journald[991]: Runtime Journal (/run/log/journal/7add46a514244493a63f167e029c98f6) is 6.0M, max 48.7M, 42.6M free. Oct 2 19:30:46.275000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:30:46.444000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:30:46.444000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:30:46.444000 audit: BPF prog-id=10 op=LOAD Oct 2 19:30:46.444000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:30:46.444000 audit: BPF prog-id=11 op=LOAD Oct 2 19:30:46.444000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:30:48.220000 audit: BPF prog-id=12 op=LOAD Oct 2 19:30:48.220000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:30:48.221000 audit: BPF prog-id=13 op=LOAD Oct 2 19:30:48.222000 audit: BPF prog-id=14 op=LOAD Oct 2 19:30:48.222000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:30:48.222000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:30:48.223000 audit: BPF prog-id=15 op=LOAD Oct 2 19:30:48.223000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:30:48.225000 audit: BPF prog-id=16 op=LOAD Oct 2 19:30:48.225000 audit: BPF prog-id=17 op=LOAD Oct 2 19:30:48.225000 audit: BPF 
prog-id=13 op=UNLOAD Oct 2 19:30:48.225000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:30:48.226000 audit: BPF prog-id=18 op=LOAD Oct 2 19:30:48.226000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:30:48.227000 audit: BPF prog-id=19 op=LOAD Oct 2 19:30:48.227000 audit: BPF prog-id=20 op=LOAD Oct 2 19:30:48.227000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:30:48.227000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:30:48.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.247000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:30:48.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.306000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:48.307000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.309000 audit: BPF prog-id=21 op=LOAD Oct 2 19:30:48.309000 audit: BPF prog-id=22 op=LOAD Oct 2 19:30:48.309000 audit: BPF prog-id=23 op=LOAD Oct 2 19:30:48.310000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:30:48.310000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:30:48.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.342000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.346000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:48.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.348000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.354000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:48.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.359000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:30:48.359000 audit[991]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=4 a1=ffffc8add670 a2=4000 a3=1 items=0 ppid=1 pid=991 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:48.359000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:30:46.491661 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:30:48.363428 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:30:48.219471 systemd[1]: Queued start job for default target multi-user.target. 
Oct 2 19:30:46.492292 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:30:48.219483 systemd[1]: Unnecessary job was removed for dev-vda6.device. Oct 2 19:30:46.492311 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:30:48.228179 systemd[1]: systemd-journald.service: Deactivated successfully. Oct 2 19:30:46.492342 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:30:46.492352 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:30:46.492381 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:30:46.492393 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:30:46.492589 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:30:46.492622 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:30:46.492633 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:30:46.493024 
/usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:30:46.493060 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:30:46.493078 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:30:46.493092 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:30:46.493109 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:30:46.493123 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:46Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:30:47.956055 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:47Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:30:47.956345 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:47Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc 
/bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:30:47.956456 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:47Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:30:47.956614 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:47Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:30:47.956671 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:47Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:30:47.956725 /usr/lib/systemd/system-generators/torcx-generator[929]: time="2023-10-02T19:30:47Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:30:48.367517 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:30:48.369235 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:30:48.371697 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:30:48.371745 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:30:48.374557 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:30:48.375783 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. 
Oct 2 19:30:48.378237 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:30:48.384209 systemd[1]: Started systemd-journald.service. Oct 2 19:30:48.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.382955 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:30:48.383907 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:30:48.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.385265 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:30:48.386210 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:30:48.388181 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:30:48.395957 systemd-journald[991]: Time spent on flushing to /var/log/journal/7add46a514244493a63f167e029c98f6 is 17.318ms for 997 entries. Oct 2 19:30:48.395957 systemd-journald[991]: System Journal (/var/log/journal/7add46a514244493a63f167e029c98f6) is 8.0M, max 195.6M, 187.6M free. Oct 2 19:30:48.429856 systemd-journald[991]: Received client request to flush runtime journal. Oct 2 19:30:48.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.404000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:48.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.397984 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:30:48.404578 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:30:48.431088 udevadm[1031]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Oct 2 19:30:48.406584 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:30:48.413206 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:30:48.415154 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:30:48.430842 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:30:48.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.438575 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:30:48.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.440832 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 19:30:48.458840 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Oct 2 19:30:48.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.776773 systemd[1]: Finished systemd-hwdb-update.service. 
Oct 2 19:30:48.777000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.777000 audit: BPF prog-id=24 op=LOAD Oct 2 19:30:48.777000 audit: BPF prog-id=25 op=LOAD Oct 2 19:30:48.777000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:30:48.777000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:30:48.778784 systemd[1]: Starting systemd-udevd.service... Oct 2 19:30:48.798552 systemd-udevd[1035]: Using default interface naming scheme 'v252'. Oct 2 19:30:48.809918 systemd[1]: Started systemd-udevd.service. Oct 2 19:30:48.810000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.811000 audit: BPF prog-id=26 op=LOAD Oct 2 19:30:48.813665 systemd[1]: Starting systemd-networkd.service... Oct 2 19:30:48.823000 audit: BPF prog-id=27 op=LOAD Oct 2 19:30:48.823000 audit: BPF prog-id=28 op=LOAD Oct 2 19:30:48.823000 audit: BPF prog-id=29 op=LOAD Oct 2 19:30:48.824622 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:30:48.855477 systemd[1]: Condition check resulted in dev-ttyAMA0.device being skipped. Oct 2 19:30:48.862846 systemd[1]: Started systemd-userdbd.service. Oct 2 19:30:48.863000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.901543 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:30:48.926207 systemd[1]: Finished systemd-udev-settle.service. 
Oct 2 19:30:48.926000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.928382 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:30:48.938203 systemd-networkd[1042]: lo: Link UP Oct 2 19:30:48.938212 systemd-networkd[1042]: lo: Gained carrier Oct 2 19:30:48.938582 systemd-networkd[1042]: Enumeration completed Oct 2 19:30:48.938707 systemd-networkd[1042]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 19:30:48.938709 systemd[1]: Started systemd-networkd.service. Oct 2 19:30:48.939000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.940710 systemd-networkd[1042]: eth0: Link UP Oct 2 19:30:48.940720 systemd-networkd[1042]: eth0: Gained carrier Oct 2 19:30:48.946436 lvm[1068]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:30:48.964464 systemd-networkd[1042]: eth0: DHCPv4 address 10.0.0.12/16, gateway 10.0.0.1 acquired from 10.0.0.1 Oct 2 19:30:48.979319 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:30:48.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:48.980151 systemd[1]: Reached target cryptsetup.target. Oct 2 19:30:48.982142 systemd[1]: Starting lvm2-activation.service... Oct 2 19:30:48.986938 lvm[1069]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:30:49.024324 systemd[1]: Finished lvm2-activation.service. 
Oct 2 19:30:49.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.025078 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:30:49.025736 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:30:49.025769 systemd[1]: Reached target local-fs.target. Oct 2 19:30:49.026383 systemd[1]: Reached target machines.target. Oct 2 19:30:49.028321 systemd[1]: Starting ldconfig.service... Oct 2 19:30:49.034975 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:30:49.035047 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:30:49.037164 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:30:49.039710 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:30:49.041863 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:30:49.042723 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:30:49.042809 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:30:49.044737 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:30:49.061420 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1071 (bootctl) Oct 2 19:30:49.063489 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:30:49.071534 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
Oct 2 19:30:49.076347 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:30:49.079909 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:30:49.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.083111 systemd-tmpfiles[1074]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:30:49.175383 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:30:49.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.203193 systemd-fsck[1079]: fsck.fat 4.2 (2021-01-31) Oct 2 19:30:49.203193 systemd-fsck[1079]: /dev/vda1: 236 files, 113463/258078 clusters Oct 2 19:30:49.206150 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:30:49.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.327864 ldconfig[1070]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:30:49.335732 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:30:49.336745 systemd[1]: Finished ldconfig.service. Oct 2 19:30:49.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:49.339039 systemd[1]: Mounting boot.mount... Oct 2 19:30:49.348951 systemd[1]: Mounted boot.mount. Oct 2 19:30:49.358817 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:30:49.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.415005 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:30:49.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.418792 systemd[1]: Starting audit-rules.service... Oct 2 19:30:49.421944 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:30:49.428142 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:30:49.431000 audit: BPF prog-id=30 op=LOAD Oct 2 19:30:49.433722 systemd[1]: Starting systemd-resolved.service... Oct 2 19:30:49.435000 audit: BPF prog-id=31 op=LOAD Oct 2 19:30:49.436740 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:30:49.440239 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:30:49.442880 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:30:49.443000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.444866 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:30:49.449505 systemd[1]: Finished systemd-journal-catalog-update.service. 
Oct 2 19:30:49.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.454199 systemd[1]: Starting systemd-update-done.service... Oct 2 19:30:49.454000 audit[1095]: SYSTEM_BOOT pid=1095 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.460785 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:30:49.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.470840 systemd[1]: Finished systemd-update-done.service. Oct 2 19:30:49.471672 augenrules[1103]: No rules Oct 2 19:30:49.471000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:30:49.471000 audit[1103]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffdae04d60 a2=420 a3=0 items=0 ppid=1082 pid=1103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:49.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-done comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:49.471000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:30:49.473049 systemd[1]: Finished audit-rules.service. Oct 2 19:30:49.489636 systemd[1]: Started systemd-timesyncd.service. 
Oct 2 19:30:49.963696 systemd-timesyncd[1093]: Contacted time server 10.0.0.1:123 (10.0.0.1). Oct 2 19:30:49.963753 systemd-timesyncd[1093]: Initial clock synchronization to Mon 2023-10-02 19:30:49.963599 UTC. Oct 2 19:30:49.964087 systemd[1]: Reached target time-set.target. Oct 2 19:30:49.981454 systemd-resolved[1091]: Positive Trust Anchors: Oct 2 19:30:49.981469 systemd-resolved[1091]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:30:49.981496 systemd-resolved[1091]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:30:49.991544 systemd-resolved[1091]: Defaulting to hostname 'linux'. Oct 2 19:30:49.993053 systemd[1]: Started systemd-resolved.service. Oct 2 19:30:49.993864 systemd[1]: Reached target network.target. Oct 2 19:30:49.994622 systemd[1]: Reached target nss-lookup.target. Oct 2 19:30:49.995427 systemd[1]: Reached target sysinit.target. Oct 2 19:30:49.996168 systemd[1]: Started motdgen.path. Oct 2 19:30:49.996836 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:30:49.998177 systemd[1]: Started logrotate.timer. Oct 2 19:30:49.999709 systemd[1]: Started mdadm.timer. Oct 2 19:30:50.000193 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:30:50.000784 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:30:50.000810 systemd[1]: Reached target paths.target. Oct 2 19:30:50.001300 systemd[1]: Reached target timers.target. Oct 2 19:30:50.002343 systemd[1]: Listening on dbus.socket. 
Oct 2 19:30:50.004250 systemd[1]: Starting docker.socket... Oct 2 19:30:50.007777 systemd[1]: Listening on sshd.socket. Oct 2 19:30:50.008425 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:30:50.008890 systemd[1]: Listening on docker.socket. Oct 2 19:30:50.009527 systemd[1]: Reached target sockets.target. Oct 2 19:30:50.010134 systemd[1]: Reached target basic.target. Oct 2 19:30:50.010706 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:30:50.010724 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:30:50.011824 systemd[1]: Starting containerd.service... Oct 2 19:30:50.018657 systemd[1]: Starting dbus.service... Oct 2 19:30:50.020472 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:30:50.022434 systemd[1]: Starting extend-filesystems.service... Oct 2 19:30:50.023036 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:30:50.024659 systemd[1]: Starting motdgen.service... Oct 2 19:30:50.027898 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:30:50.030403 systemd[1]: Starting prepare-critools.service... Oct 2 19:30:50.031912 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:30:50.033415 jq[1114]: false Oct 2 19:30:50.033583 systemd[1]: Starting sshd-keygen.service... Oct 2 19:30:50.037440 systemd[1]: Starting systemd-logind.service... Oct 2 19:30:50.037999 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). 
Oct 2 19:30:50.038086 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Oct 2 19:30:50.038664 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:30:50.040075 systemd[1]: Starting update-engine.service... Oct 2 19:30:50.041693 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 19:30:50.046947 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:30:50.047126 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:30:50.048226 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:30:50.048297 jq[1130]: true Oct 2 19:30:50.048406 systemd[1]: Finished motdgen.service. Oct 2 19:30:50.050157 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:30:50.050327 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:30:50.064420 jq[1137]: true Oct 2 19:30:50.069568 tar[1134]: ./ Oct 2 19:30:50.069568 tar[1134]: ./macvlan Oct 2 19:30:50.077811 extend-filesystems[1115]: Found vda Oct 2 19:30:50.077811 extend-filesystems[1115]: Found vda1 Oct 2 19:30:50.077811 extend-filesystems[1115]: Found vda2 Oct 2 19:30:50.077811 extend-filesystems[1115]: Found vda3 Oct 2 19:30:50.077811 extend-filesystems[1115]: Found usr Oct 2 19:30:50.077811 extend-filesystems[1115]: Found vda4 Oct 2 19:30:50.077811 extend-filesystems[1115]: Found vda6 Oct 2 19:30:50.077811 extend-filesystems[1115]: Found vda7 Oct 2 19:30:50.077811 extend-filesystems[1115]: Found vda9 Oct 2 19:30:50.077811 extend-filesystems[1115]: Checking size of /dev/vda9 Oct 2 19:30:50.086647 tar[1135]: crictl Oct 2 19:30:50.088904 systemd-logind[1126]: Watching system buttons on /dev/input/event0 (Power Button) Oct 2 19:30:50.091023 systemd-logind[1126]: New seat seat0. 
Oct 2 19:30:50.113121 dbus-daemon[1113]: [system] SELinux support is enabled Oct 2 19:30:50.113324 systemd[1]: Started dbus.service. Oct 2 19:30:50.116082 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:30:50.116120 systemd[1]: Reached target system-config.target. Oct 2 19:30:50.116785 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 19:30:50.116805 systemd[1]: Reached target user-config.target. Oct 2 19:30:50.119683 systemd[1]: Started systemd-logind.service. Oct 2 19:30:50.119934 dbus-daemon[1113]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 2 19:30:50.121361 extend-filesystems[1115]: Old size kept for /dev/vda9 Oct 2 19:30:50.122349 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:30:50.122536 systemd[1]: Finished extend-filesystems.service. Oct 2 19:30:50.132684 bash[1164]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:30:50.134622 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:30:50.175640 update_engine[1129]: I1002 19:30:50.175361 1129 main.cc:92] Flatcar Update Engine starting Oct 2 19:30:50.178095 systemd[1]: Started update-engine.service. Oct 2 19:30:50.178214 update_engine[1129]: I1002 19:30:50.178112 1129 update_check_scheduler.cc:74] Next update check in 5m29s Oct 2 19:30:50.180575 systemd[1]: Started locksmithd.service. 
Oct 2 19:30:50.182083 tar[1134]: ./static Oct 2 19:30:50.207896 tar[1134]: ./vlan Oct 2 19:30:50.212188 env[1138]: time="2023-10-02T19:30:50.212136072Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:30:50.238160 tar[1134]: ./portmap Oct 2 19:30:50.258113 env[1138]: time="2023-10-02T19:30:50.258052072Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:30:50.258311 env[1138]: time="2023-10-02T19:30:50.258285152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:30:50.265095 tar[1134]: ./host-local Oct 2 19:30:50.270618 env[1138]: time="2023-10-02T19:30:50.270556112Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:30:50.270618 env[1138]: time="2023-10-02T19:30:50.270611792Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:30:50.270885 env[1138]: time="2023-10-02T19:30:50.270856472Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:30:50.270885 env[1138]: time="2023-10-02T19:30:50.270879432Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." 
type=io.containerd.snapshotter.v1 Oct 2 19:30:50.270964 env[1138]: time="2023-10-02T19:30:50.270894112Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:30:50.270964 env[1138]: time="2023-10-02T19:30:50.270905072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:30:50.271010 env[1138]: time="2023-10-02T19:30:50.270981832Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:30:50.271455 env[1138]: time="2023-10-02T19:30:50.271431032Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:30:50.271577 env[1138]: time="2023-10-02T19:30:50.271555192Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:30:50.271577 env[1138]: time="2023-10-02T19:30:50.271573312Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:30:50.271640 env[1138]: time="2023-10-02T19:30:50.271626632Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:30:50.271640 env[1138]: time="2023-10-02T19:30:50.271637952Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:30:50.275670 env[1138]: time="2023-10-02T19:30:50.275637512Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:30:50.275745 env[1138]: time="2023-10-02T19:30:50.275673632Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." 
type=io.containerd.event.v1 Oct 2 19:30:50.275745 env[1138]: time="2023-10-02T19:30:50.275694592Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:30:50.275745 env[1138]: time="2023-10-02T19:30:50.275727592Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.275745 env[1138]: time="2023-10-02T19:30:50.275742152Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.275827 env[1138]: time="2023-10-02T19:30:50.275756072Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.275827 env[1138]: time="2023-10-02T19:30:50.275768912Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.276125 env[1138]: time="2023-10-02T19:30:50.276101872Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.276162 env[1138]: time="2023-10-02T19:30:50.276126752Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.276162 env[1138]: time="2023-10-02T19:30:50.276142072Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.276162 env[1138]: time="2023-10-02T19:30:50.276154952Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.276221 env[1138]: time="2023-10-02T19:30:50.276167832Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:30:50.276311 env[1138]: time="2023-10-02T19:30:50.276291432Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 2 19:30:50.276396 env[1138]: time="2023-10-02T19:30:50.276370912Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:30:50.276643 env[1138]: time="2023-10-02T19:30:50.276622312Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:30:50.276682 env[1138]: time="2023-10-02T19:30:50.276651032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.276682 env[1138]: time="2023-10-02T19:30:50.276664952Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:30:50.276952 env[1138]: time="2023-10-02T19:30:50.276934312Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.276952 env[1138]: time="2023-10-02T19:30:50.276951792Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.277021 env[1138]: time="2023-10-02T19:30:50.276965032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.277021 env[1138]: time="2023-10-02T19:30:50.276982712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.277021 env[1138]: time="2023-10-02T19:30:50.276994912Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.277021 env[1138]: time="2023-10-02T19:30:50.277006832Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.277021 env[1138]: time="2023-10-02T19:30:50.277017872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." 
type=io.containerd.grpc.v1 Oct 2 19:30:50.277121 env[1138]: time="2023-10-02T19:30:50.277029472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.277121 env[1138]: time="2023-10-02T19:30:50.277043112Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:30:50.277183 env[1138]: time="2023-10-02T19:30:50.277162032Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.277217 env[1138]: time="2023-10-02T19:30:50.277182752Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.277217 env[1138]: time="2023-10-02T19:30:50.277195992Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:30:50.277217 env[1138]: time="2023-10-02T19:30:50.277207952Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:30:50.277285 env[1138]: time="2023-10-02T19:30:50.277221672Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:30:50.277285 env[1138]: time="2023-10-02T19:30:50.277232832Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:30:50.277285 env[1138]: time="2023-10-02T19:30:50.277249952Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:30:50.277285 env[1138]: time="2023-10-02T19:30:50.277283632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 19:30:50.277550 env[1138]: time="2023-10-02T19:30:50.277492272Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:30:50.284633 env[1138]: time="2023-10-02T19:30:50.277554152Z" level=info msg="Connect containerd service" Oct 2 19:30:50.284633 env[1138]: time="2023-10-02T19:30:50.277588352Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:30:50.284633 env[1138]: time="2023-10-02T19:30:50.278513472Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:30:50.284633 env[1138]: time="2023-10-02T19:30:50.279031992Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:30:50.284633 env[1138]: time="2023-10-02T19:30:50.279084512Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:30:50.284633 env[1138]: time="2023-10-02T19:30:50.280074392Z" level=info msg="containerd successfully booted in 0.070655s" Oct 2 19:30:50.284633 env[1138]: time="2023-10-02T19:30:50.280731352Z" level=info msg="Start subscribing containerd event" Oct 2 19:30:50.284633 env[1138]: time="2023-10-02T19:30:50.280765272Z" level=info msg="Start recovering state" Oct 2 19:30:50.284633 env[1138]: time="2023-10-02T19:30:50.280838752Z" level=info msg="Start event monitor" Oct 2 19:30:50.284633 env[1138]: time="2023-10-02T19:30:50.280878352Z" level=info msg="Start snapshots syncer" Oct 2 19:30:50.284633 env[1138]: time="2023-10-02T19:30:50.280888592Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:30:50.284633 env[1138]: time="2023-10-02T19:30:50.280898752Z" level=info msg="Start streaming server" Oct 2 19:30:50.279226 systemd[1]: Started containerd.service. 
Oct 2 19:30:50.292905 tar[1134]: ./vrf Oct 2 19:30:50.325057 tar[1134]: ./bridge Oct 2 19:30:50.364320 tar[1134]: ./tuning Oct 2 19:30:50.393791 tar[1134]: ./firewall Oct 2 19:30:50.436649 tar[1134]: ./host-device Oct 2 19:30:50.451532 systemd[1]: Created slice system-sshd.slice. Oct 2 19:30:50.468495 tar[1134]: ./sbr Oct 2 19:30:50.492594 tar[1134]: ./loopback Oct 2 19:30:50.527839 tar[1134]: ./dhcp Oct 2 19:30:50.553675 locksmithd[1169]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:30:50.556867 systemd[1]: Finished prepare-critools.service. Oct 2 19:30:50.601786 tar[1134]: ./ptp Oct 2 19:30:50.629909 tar[1134]: ./ipvlan Oct 2 19:30:50.641602 systemd-networkd[1042]: eth0: Gained IPv6LL Oct 2 19:30:50.658367 tar[1134]: ./bandwidth Oct 2 19:30:50.694260 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:30:51.620180 sshd_keygen[1139]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:30:51.641298 systemd[1]: Finished sshd-keygen.service. Oct 2 19:30:51.643689 systemd[1]: Starting issuegen.service... Oct 2 19:30:51.645480 systemd[1]: Started sshd@0-10.0.0.12:22-10.0.0.1:47988.service. Oct 2 19:30:51.649720 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:30:51.649898 systemd[1]: Finished issuegen.service. Oct 2 19:30:51.652144 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:30:51.661541 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:30:51.664654 systemd[1]: Started getty@tty1.service. Oct 2 19:30:51.667520 systemd[1]: Started serial-getty@ttyAMA0.service. Oct 2 19:30:51.668549 systemd[1]: Reached target getty.target. Oct 2 19:30:51.669221 systemd[1]: Reached target multi-user.target. Oct 2 19:30:51.671461 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:30:51.679720 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:30:51.680009 systemd[1]: Finished systemd-update-utmp-runlevel.service. 
Oct 2 19:30:51.681019 systemd[1]: Startup finished in 621ms (kernel) + 5.644s (initrd) + 4.980s (userspace) = 11.246s. Oct 2 19:30:51.717156 sshd[1190]: Accepted publickey for core from 10.0.0.1 port 47988 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:51.719618 sshd[1190]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:51.737482 systemd-logind[1126]: New session 1 of user core. Oct 2 19:30:51.738442 systemd[1]: Created slice user-500.slice. Oct 2 19:30:51.739740 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:30:51.749058 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:30:51.750619 systemd[1]: Starting user@500.service... Oct 2 19:30:51.754046 (systemd)[1199]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:51.828762 systemd[1199]: Queued start job for default target default.target. Oct 2 19:30:51.829274 systemd[1199]: Reached target paths.target. Oct 2 19:30:51.829293 systemd[1199]: Reached target sockets.target. Oct 2 19:30:51.829304 systemd[1199]: Reached target timers.target. Oct 2 19:30:51.829313 systemd[1199]: Reached target basic.target. Oct 2 19:30:51.829366 systemd[1199]: Reached target default.target. Oct 2 19:30:51.829412 systemd[1199]: Startup finished in 68ms. Oct 2 19:30:51.829517 systemd[1]: Started user@500.service. Oct 2 19:30:51.830481 systemd[1]: Started session-1.scope. Oct 2 19:30:51.884049 systemd[1]: Started sshd@1-10.0.0.12:22-10.0.0.1:47996.service. Oct 2 19:30:51.933052 sshd[1208]: Accepted publickey for core from 10.0.0.1 port 47996 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:51.934740 sshd[1208]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:51.938202 systemd-logind[1126]: New session 2 of user core. Oct 2 19:30:51.939053 systemd[1]: Started session-2.scope. 
Oct 2 19:30:51.996821 sshd[1208]: pam_unix(sshd:session): session closed for user core Oct 2 19:30:52.001018 systemd[1]: Started sshd@2-10.0.0.12:22-10.0.0.1:48012.service. Oct 2 19:30:52.001482 systemd[1]: sshd@1-10.0.0.12:22-10.0.0.1:47996.service: Deactivated successfully. Oct 2 19:30:52.002155 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:30:52.002798 systemd-logind[1126]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:30:52.003821 systemd-logind[1126]: Removed session 2. Oct 2 19:30:52.045057 sshd[1213]: Accepted publickey for core from 10.0.0.1 port 48012 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:52.046760 sshd[1213]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.050345 systemd-logind[1126]: New session 3 of user core. Oct 2 19:30:52.051211 systemd[1]: Started session-3.scope. Oct 2 19:30:52.102294 sshd[1213]: pam_unix(sshd:session): session closed for user core Oct 2 19:30:52.106221 systemd[1]: sshd@2-10.0.0.12:22-10.0.0.1:48012.service: Deactivated successfully. Oct 2 19:30:52.106992 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:30:52.107504 systemd-logind[1126]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:30:52.108555 systemd[1]: Started sshd@3-10.0.0.12:22-10.0.0.1:48022.service. Oct 2 19:30:52.109176 systemd-logind[1126]: Removed session 3. Oct 2 19:30:52.153185 sshd[1221]: Accepted publickey for core from 10.0.0.1 port 48022 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:52.154893 sshd[1221]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.158721 systemd-logind[1126]: New session 4 of user core. Oct 2 19:30:52.159555 systemd[1]: Started session-4.scope. Oct 2 19:30:52.214162 sshd[1221]: pam_unix(sshd:session): session closed for user core Oct 2 19:30:52.217104 systemd[1]: sshd@3-10.0.0.12:22-10.0.0.1:48022.service: Deactivated successfully. 
Oct 2 19:30:52.217874 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:30:52.218367 systemd-logind[1126]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:30:52.219439 systemd[1]: Started sshd@4-10.0.0.12:22-10.0.0.1:48034.service. Oct 2 19:30:52.220093 systemd-logind[1126]: Removed session 4. Oct 2 19:30:52.264128 sshd[1227]: Accepted publickey for core from 10.0.0.1 port 48034 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:52.265870 sshd[1227]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.269912 systemd-logind[1126]: New session 5 of user core. Oct 2 19:30:52.270760 systemd[1]: Started session-5.scope. Oct 2 19:30:52.354900 sudo[1230]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:30:52.355099 sudo[1230]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:30:52.373582 dbus-daemon[1113]: avc: received setenforce notice (enforcing=1) Oct 2 19:30:52.374500 sudo[1230]: pam_unix(sudo:session): session closed for user root Oct 2 19:30:52.376625 sshd[1227]: pam_unix(sshd:session): session closed for user core Oct 2 19:30:52.380106 systemd[1]: sshd@4-10.0.0.12:22-10.0.0.1:48034.service: Deactivated successfully. Oct 2 19:30:52.380451 systemd-logind[1126]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:30:52.380728 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:30:52.382324 systemd[1]: Started sshd@5-10.0.0.12:22-10.0.0.1:48038.service. Oct 2 19:30:52.382787 systemd-logind[1126]: Removed session 5. Oct 2 19:30:52.428106 sshd[1234]: Accepted publickey for core from 10.0.0.1 port 48038 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:52.429975 sshd[1234]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.433197 systemd-logind[1126]: New session 6 of user core. Oct 2 19:30:52.434034 systemd[1]: Started session-6.scope. 
Oct 2 19:30:52.488296 sudo[1238]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:30:52.488522 sudo[1238]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:30:52.491408 sudo[1238]: pam_unix(sudo:session): session closed for user root Oct 2 19:30:52.496300 sudo[1237]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:30:52.496766 sudo[1237]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:30:52.506579 systemd[1]: Stopping audit-rules.service... Oct 2 19:30:52.506000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:30:52.508609 kernel: kauditd_printk_skb: 129 callbacks suppressed Oct 2 19:30:52.508675 kernel: audit: type=1305 audit(1696275052.506:168): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:30:52.506000 audit[1241]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe5afe2c0 a2=420 a3=0 items=0 ppid=1 pid=1241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:52.510349 auditctl[1241]: No rules Oct 2 19:30:52.510572 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:30:52.510740 systemd[1]: Stopped audit-rules.service. Oct 2 19:30:52.512257 systemd[1]: Starting audit-rules.service... 
Oct 2 19:30:52.513359 kernel: audit: type=1300 audit(1696275052.506:168): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe5afe2c0 a2=420 a3=0 items=0 ppid=1 pid=1241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:52.506000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:30:52.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.516241 kernel: audit: type=1327 audit(1696275052.506:168): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:30:52.516276 kernel: audit: type=1131 audit(1696275052.509:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.532599 augenrules[1258]: No rules Oct 2 19:30:52.533372 systemd[1]: Finished audit-rules.service. Oct 2 19:30:52.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.534306 sudo[1237]: pam_unix(sudo:session): session closed for user root Oct 2 19:30:52.532000 audit[1237]: USER_END pid=1237 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:52.536916 sshd[1234]: pam_unix(sshd:session): session closed for user core Oct 2 19:30:52.538321 kernel: audit: type=1130 audit(1696275052.532:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.538355 kernel: audit: type=1106 audit(1696275052.532:171): pid=1237 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.538371 kernel: audit: type=1104 audit(1696275052.533:172): pid=1237 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.533000 audit[1237]: CRED_DISP pid=1237 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:48050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.540672 systemd[1]: Started sshd@6-10.0.0.12:22-10.0.0.1:48050.service. Oct 2 19:30:52.543032 systemd[1]: sshd@5-10.0.0.12:22-10.0.0.1:48038.service: Deactivated successfully. Oct 2 19:30:52.543111 kernel: audit: type=1130 audit(1696275052.539:173): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:48050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:52.543155 kernel: audit: type=1106 audit(1696275052.539:174): pid=1234 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.539000 audit[1234]: USER_END pid=1234 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.543754 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:30:52.544795 systemd-logind[1126]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:30:52.545663 systemd-logind[1126]: Removed session 6. Oct 2 19:30:52.539000 audit[1234]: CRED_DISP pid=1234 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.548614 kernel: audit: type=1104 audit(1696275052.539:175): pid=1234 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.541000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.0.0.12:22-10.0.0.1:48038 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:52.584000 audit[1263]: USER_ACCT pid=1263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.586198 sshd[1263]: Accepted publickey for core from 10.0.0.1 port 48050 ssh2: RSA SHA256:R3OEPhWkv1tFTzpShOfiax8deu3eER597BueAu9DxLo Oct 2 19:30:52.586000 audit[1263]: CRED_ACQ pid=1263 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.586000 audit[1263]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=5 a1=ffffd85cfe80 a2=3 a3=1 items=0 ppid=1 pid=1263 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:52.586000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:30:52.587845 sshd[1263]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:30:52.591661 systemd-logind[1126]: New session 7 of user core. Oct 2 19:30:52.592086 systemd[1]: Started session-7.scope. 
Oct 2 19:30:52.594000 audit[1263]: USER_START pid=1263 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.596000 audit[1266]: CRED_ACQ pid=1266 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:30:52.644000 audit[1267]: USER_ACCT pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.646160 sudo[1267]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:30:52.644000 audit[1267]: CRED_REFR pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:52.646364 sudo[1267]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:30:52.646000 audit[1267]: USER_START pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:30:53.179002 systemd[1]: Reloading. 
Oct 2 19:30:53.224363 /usr/lib/systemd/system-generators/torcx-generator[1297]: time="2023-10-02T19:30:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:30:53.225569 /usr/lib/systemd/system-generators/torcx-generator[1297]: time="2023-10-02T19:30:53Z" level=info msg="torcx already run" Oct 2 19:30:53.292622 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:30:53.292641 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:30:53.309747 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit: BPF prog-id=37 op=LOAD Oct 2 19:30:53.353000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit: BPF prog-id=38 op=LOAD Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:30:53.353000 audit: BPF prog-id=39 op=LOAD Oct 2 19:30:53.353000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:30:53.353000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit: BPF prog-id=40 op=LOAD Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.353000 audit: BPF prog-id=41 
op=LOAD Oct 2 19:30:53.353000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:30:53.353000 audit: BPF prog-id=25 op=UNLOAD Oct 2 19:30:53.354000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.354000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.354000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.354000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.354000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.354000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.354000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.354000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.354000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:30:53.354000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.354000 audit: BPF prog-id=42 op=LOAD Oct 2 19:30:53.354000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:30:53.355000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.355000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.355000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.355000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.355000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.355000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.355000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.355000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.355000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit: BPF prog-id=43 op=LOAD Oct 2 19:30:53.356000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for 
pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit: BPF prog-id=44 op=LOAD Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit: BPF prog-id=45 op=LOAD Oct 2 19:30:53.356000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:30:53.356000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.356000 audit: BPF prog-id=46 op=LOAD Oct 2 19:30:53.356000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit: BPF prog-id=47 op=LOAD Oct 2 19:30:53.357000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit: BPF prog-id=48 op=LOAD Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.357000 audit: BPF prog-id=49 op=LOAD Oct 2 19:30:53.357000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:30:53.357000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:30:53.359000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.359000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.359000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.359000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.359000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.359000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.359000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.359000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.359000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.359000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.359000 audit: BPF prog-id=50 op=LOAD Oct 2 19:30:53.359000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:30:53.360000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.360000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.360000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.360000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.360000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.360000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.360000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.360000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.360000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.360000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:53.360000 audit: BPF prog-id=51 op=LOAD Oct 2 19:30:53.360000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:30:53.368223 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:30:53.376378 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:30:53.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:53.376810 systemd[1]: Reached target network-online.target. Oct 2 19:30:53.378260 systemd[1]: Started kubelet.service. Oct 2 19:30:53.377000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:30:53.390028 systemd[1]: Starting coreos-metadata.service... Oct 2 19:30:53.398492 systemd[1]: coreos-metadata.service: Deactivated successfully. Oct 2 19:30:53.398665 systemd[1]: Finished coreos-metadata.service. Oct 2 19:30:53.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:53.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:53.680317 kubelet[1335]: E1002 19:30:53.680239 1335 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Oct 2 19:30:53.684115 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 19:30:53.684239 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 19:30:53.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 19:30:53.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:53.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:53.838566 systemd[1]: Stopped kubelet.service. Oct 2 19:30:53.856335 systemd[1]: Reloading. 
Oct 2 19:30:53.907399 /usr/lib/systemd/system-generators/torcx-generator[1404]: time="2023-10-02T19:30:53Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:30:53.907426 /usr/lib/systemd/system-generators/torcx-generator[1404]: time="2023-10-02T19:30:53Z" level=info msg="torcx already run" Oct 2 19:30:53.970616 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:30:53.970636 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:30:53.988330 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit: BPF prog-id=52 op=LOAD Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit: BPF prog-id=53 op=LOAD Oct 2 19:30:54.032000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:30:54.032000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } 
for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit: BPF prog-id=54 op=LOAD Oct 2 19:30:54.032000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.032000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:30:54.033000 audit: BPF prog-id=55 op=LOAD Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit: BPF prog-id=56 op=LOAD Oct 2 
19:30:54.033000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:30:54.033000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.033000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.034000 audit: BPF prog-id=57 op=LOAD Oct 2 19:30:54.034000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for 
pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit: BPF prog-id=58 op=LOAD Oct 2 19:30:54.035000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit: BPF prog-id=59 op=LOAD Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.035000 audit: BPF prog-id=60 op=LOAD Oct 2 19:30:54.035000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:30:54.035000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit: BPF prog-id=61 op=LOAD Oct 2 19:30:54.036000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.036000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit: BPF prog-id=62 op=LOAD Oct 2 19:30:54.037000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit: BPF prog-id=63 op=LOAD Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.037000 audit: BPF prog-id=64 op=LOAD Oct 2 19:30:54.037000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:30:54.037000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:30:54.038000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.038000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.038000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.038000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.038000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.038000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.038000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit: BPF prog-id=65 op=LOAD Oct 2 19:30:54.039000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:54.039000 audit: BPF prog-id=66 op=LOAD Oct 2 19:30:54.039000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:30:54.052664 systemd[1]: Started kubelet.service. Oct 2 19:30:54.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:30:54.107649 kubelet[1440]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:30:54.107649 kubelet[1440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Oct 2 19:30:54.107649 kubelet[1440]: I1002 19:30:54.102069 1440 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:30:54.107649 kubelet[1440]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:30:54.107649 kubelet[1440]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:30:55.100491 kubelet[1440]: I1002 19:30:55.100443 1440 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Oct 2 19:30:55.100491 kubelet[1440]: I1002 19:30:55.100474 1440 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:30:55.100698 kubelet[1440]: I1002 19:30:55.100677 1440 server.go:836] "Client rotation is on, will bootstrap in background" Oct 2 19:30:55.105724 kubelet[1440]: I1002 19:30:55.105681 1440 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:30:55.107857 kubelet[1440]: W1002 19:30:55.107826 1440 machine.go:65] Cannot read vendor id correctly, set empty. Oct 2 19:30:55.108659 kubelet[1440]: I1002 19:30:55.108633 1440 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Oct 2 19:30:55.109167 kubelet[1440]: I1002 19:30:55.109141 1440 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:30:55.109242 kubelet[1440]: I1002 19:30:55.109212 1440 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Oct 2 19:30:55.109324 kubelet[1440]: I1002 19:30:55.109294 1440 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:30:55.109324 kubelet[1440]: I1002 19:30:55.109305 1440 container_manager_linux.go:308] "Creating device plugin manager" Oct 2 19:30:55.109516 kubelet[1440]: I1002 19:30:55.109488 1440 state_mem.go:36] "Initialized new 
in-memory state store" Oct 2 19:30:55.116142 kubelet[1440]: I1002 19:30:55.116107 1440 kubelet.go:398] "Attempting to sync node with API server" Oct 2 19:30:55.116142 kubelet[1440]: I1002 19:30:55.116134 1440 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:30:55.116265 kubelet[1440]: I1002 19:30:55.116231 1440 kubelet.go:297] "Adding apiserver pod source" Oct 2 19:30:55.116265 kubelet[1440]: I1002 19:30:55.116244 1440 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:30:55.116524 kubelet[1440]: E1002 19:30:55.116502 1440 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:55.116735 kubelet[1440]: E1002 19:30:55.116642 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:55.117815 kubelet[1440]: I1002 19:30:55.117782 1440 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:30:55.118874 kubelet[1440]: W1002 19:30:55.118774 1440 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:30:55.119673 kubelet[1440]: I1002 19:30:55.119437 1440 server.go:1186] "Started kubelet" Oct 2 19:30:55.119918 kubelet[1440]: I1002 19:30:55.119898 1440 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:30:55.120844 kubelet[1440]: E1002 19:30:55.120820 1440 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:30:55.120907 kubelet[1440]: E1002 19:30:55.120850 1440 kubelet.go:1386] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:30:55.119000 audit[1440]: AVC avc: denied { mac_admin } for pid=1440 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:55.119000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:30:55.119000 audit[1440]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=40003c7800 a1=4000243b60 a2=40003c77d0 a3=25 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.119000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:30:55.119000 audit[1440]: AVC avc: denied { mac_admin } for pid=1440 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:55.119000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:30:55.119000 audit[1440]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=4000130da0 a1=4000243b78 a2=40003c7890 a3=25 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.119000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:30:55.121264 kubelet[1440]: I1002 19:30:55.121052 1440 kubelet.go:1341] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:30:55.121264 kubelet[1440]: I1002 19:30:55.121095 1440 kubelet.go:1345] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:30:55.121264 kubelet[1440]: I1002 19:30:55.121150 1440 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:30:55.121550 kubelet[1440]: I1002 19:30:55.121533 1440 server.go:451] "Adding debug handlers to kubelet server" Oct 2 19:30:55.124564 kubelet[1440]: E1002 19:30:55.123630 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:30:55.124564 kubelet[1440]: I1002 19:30:55.123782 1440 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:30:55.124564 kubelet[1440]: I1002 19:30:55.123861 1440 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Oct 2 19:30:55.136436 kubelet[1440]: E1002 19:30:55.136397 1440 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:30:55.136656 kubelet[1440]: E1002 19:30:55.136402 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, 
ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce5bfb1d0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 119413712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 119413712, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:55.137491 kubelet[1440]: W1002 19:30:55.137101 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:30:55.137606 kubelet[1440]: E1002 19:30:55.137592 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:30:55.139350 kubelet[1440]: E1002 19:30:55.139228 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce5d567c8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 120836552, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 120836552, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot 
create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:55.139565 kubelet[1440]: W1002 19:30:55.139538 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:30:55.139565 kubelet[1440]: E1002 19:30:55.139567 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:30:55.139685 kubelet[1440]: W1002 19:30:55.139636 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:30:55.139685 kubelet[1440]: E1002 19:30:55.139648 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:30:55.144739 kubelet[1440]: I1002 19:30:55.144719 1440 cpu_manager.go:214] "Starting CPU manager" policy="none" Oct 2 19:30:55.144739 kubelet[1440]: I1002 19:30:55.144737 1440 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Oct 2 19:30:55.144858 kubelet[1440]: I1002 19:30:55.144754 1440 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:30:55.145165 kubelet[1440]: E1002 19:30:55.145052 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7383730", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144089392, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144089392, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:55.145894 kubelet[1440]: E1002 19:30:55.145819 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7386430", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144100912, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144100912, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:55.146584 kubelet[1440]: E1002 19:30:55.146522 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7387308", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144104712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144104712, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:55.160868 kubelet[1440]: I1002 19:30:55.160812 1440 policy_none.go:49] "None policy: Start" Oct 2 19:30:55.161619 kubelet[1440]: I1002 19:30:55.161554 1440 memory_manager.go:169] "Starting memorymanager" policy="None" Oct 2 19:30:55.161619 kubelet[1440]: I1002 19:30:55.161587 1440 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:30:55.169235 systemd[1]: Created slice kubepods.slice. 
Oct 2 19:30:55.170000 audit[1457]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1457 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.170000 audit[1457]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffee47e5f0 a2=0 a3=1 items=0 ppid=1440 pid=1457 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.170000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:30:55.173629 systemd[1]: Created slice kubepods-burstable.slice. Oct 2 19:30:55.172000 audit[1459]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1459 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.172000 audit[1459]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=ffffeecd17c0 a2=0 a3=1 items=0 ppid=1440 pid=1459 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.172000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:30:55.176237 systemd[1]: Created slice kubepods-besteffort.slice. 
Oct 2 19:30:55.186201 kubelet[1440]: I1002 19:30:55.186173 1440 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:30:55.184000 audit[1440]: AVC avc: denied { mac_admin } for pid=1440 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:30:55.184000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:30:55.184000 audit[1440]: SYSCALL arch=c00000b7 syscall=5 success=no exit=-22 a0=400124ccf0 a1=4000d4f398 a2=400124ccc0 a3=25 items=0 ppid=1 pid=1440 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.184000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:30:55.186460 kubelet[1440]: I1002 19:30:55.186240 1440 server.go:88] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:30:55.186460 kubelet[1440]: I1002 19:30:55.186454 1440 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:30:55.187422 kubelet[1440]: E1002 19:30:55.187403 1440 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.12\" not found" Oct 2 19:30:55.189231 kubelet[1440]: E1002 19:30:55.189133 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce9d133c0", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 187669952, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 187669952, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:55.176000 audit[1461]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.176000 audit[1461]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc400d5f0 a2=0 a3=1 items=0 ppid=1440 pid=1461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.176000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:30:55.204000 audit[1466]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.204000 audit[1466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc568bd60 a2=0 a3=1 items=0 ppid=1440 pid=1466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.204000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:30:55.225294 kubelet[1440]: I1002 19:30:55.225265 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:30:55.231285 kubelet[1440]: E1002 19:30:55.231180 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7383730", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144089392, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 225201912, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7383730" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:55.236088 kubelet[1440]: E1002 19:30:55.236055 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:30:55.241187 kubelet[1440]: E1002 19:30:55.241098 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7386430", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", 
FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144100912, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 225226672, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7386430" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:55.246189 kubelet[1440]: E1002 19:30:55.246087 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7387308", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144104712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 225231552, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", 
ReportingInstance:""}': 'events "10.0.0.12.178a612ce7387308" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:55.246000 audit[1471]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1471 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.246000 audit[1471]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffcd7c9f60 a2=0 a3=1 items=0 ppid=1440 pid=1471 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.246000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:30:55.248000 audit[1472]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1472 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.248000 audit[1472]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffdf352660 a2=0 a3=1 items=0 ppid=1440 pid=1472 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.248000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:30:55.256000 audit[1475]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1475 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.256000 audit[1475]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffdc750650 a2=0 a3=1 items=0 ppid=1440 pid=1475 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.256000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:30:55.261000 audit[1478]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1478 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.261000 audit[1478]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=ffffd0115250 a2=0 a3=1 items=0 ppid=1440 pid=1478 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.261000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:30:55.262000 audit[1479]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1479 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.262000 audit[1479]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc40e1cb0 a2=0 a3=1 items=0 ppid=1440 pid=1479 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.262000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:30:55.263000 audit[1480]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 
19:30:55.263000 audit[1480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd023ad30 a2=0 a3=1 items=0 ppid=1440 pid=1480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.263000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:30:55.267000 audit[1482]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.267000 audit[1482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=ffffeadd30c0 a2=0 a3=1 items=0 ppid=1440 pid=1482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.267000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:30:55.270000 audit[1484]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1484 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.270000 audit[1484]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd820d520 a2=0 a3=1 items=0 ppid=1440 pid=1484 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.270000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 
19:30:55.301000 audit[1488]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.301000 audit[1488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffd5a3ab00 a2=0 a3=1 items=0 ppid=1440 pid=1488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.301000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:30:55.304000 audit[1490]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1490 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.304000 audit[1490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffe3777400 a2=0 a3=1 items=0 ppid=1440 pid=1490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.304000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:30:55.313000 audit[1493]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1493 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.313000 audit[1493]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=540 a0=3 a1=ffffcbdeaf10 a2=0 a3=1 items=0 ppid=1440 pid=1493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.313000 audit: 
PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:30:55.315604 kubelet[1440]: I1002 19:30:55.315567 1440 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 19:30:55.315000 audit[1494]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1494 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.315000 audit[1494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd12a7050 a2=0 a3=1 items=0 ppid=1440 pid=1494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.315000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:30:55.315000 audit[1495]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.315000 audit[1495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc66d33d0 a2=0 a3=1 items=0 ppid=1440 pid=1495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.315000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:30:55.316000 audit[1496]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1496 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.316000 audit[1496]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=124 a0=3 a1=ffffe27c26b0 a2=0 a3=1 items=0 ppid=1440 pid=1496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.316000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:30:55.316000 audit[1497]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1497 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.316000 audit[1497]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc24e8b60 a2=0 a3=1 items=0 ppid=1440 pid=1497 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.316000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:30:55.317000 audit[1499]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:30:55.317000 audit[1499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffff2ecdd0 a2=0 a3=1 items=0 ppid=1440 pid=1499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:30:55.318000 audit[1500]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.318000 audit[1500]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=216 a0=3 a1=ffffd95f6ae0 a2=0 a3=1 items=0 ppid=1440 pid=1500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:30:55.319000 audit[1501]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.319000 audit[1501]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=132 a0=3 a1=fffff9c57e40 a2=0 a3=1 items=0 ppid=1440 pid=1501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.319000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:30:55.321000 audit[1503]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1503 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.321000 audit[1503]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=664 a0=3 a1=fffffaddfae0 a2=0 a3=1 items=0 ppid=1440 pid=1503 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.321000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:30:55.322000 audit[1504]: NETFILTER_CFG 
table=nat:25 family=10 entries=1 op=nft_register_chain pid=1504 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.322000 audit[1504]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff7700cf0 a2=0 a3=1 items=0 ppid=1440 pid=1504 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.322000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:30:55.323000 audit[1505]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1505 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.323000 audit[1505]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2677620 a2=0 a3=1 items=0 ppid=1440 pid=1505 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.323000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:30:55.326000 audit[1507]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1507 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.326000 audit[1507]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=216 a0=3 a1=fffff9bc0250 a2=0 a3=1 items=0 ppid=1440 pid=1507 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.326000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 
19:30:55.329000 audit[1509]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1509 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.329000 audit[1509]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffd8563b0 a2=0 a3=1 items=0 ppid=1440 pid=1509 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.329000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:30:55.331000 audit[1511]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1511 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.331000 audit[1511]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=364 a0=3 a1=ffffefd81df0 a2=0 a3=1 items=0 ppid=1440 pid=1511 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.331000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:30:55.333000 audit[1513]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1513 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.333000 audit[1513]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=220 a0=3 a1=ffffc4624ea0 a2=0 a3=1 items=0 ppid=1440 pid=1513 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.333000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:30:55.338643 kubelet[1440]: E1002 19:30:55.338616 1440 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:30:55.337000 audit[1515]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1515 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.337000 audit[1515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=556 a0=3 a1=ffffeb5d9650 a2=0 a3=1 items=0 ppid=1440 pid=1515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.337000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:30:55.339299 kubelet[1440]: I1002 19:30:55.339281 1440 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 19:30:55.339351 kubelet[1440]: I1002 19:30:55.339304 1440 status_manager.go:176] "Starting to sync pod status with apiserver" Oct 2 19:30:55.339351 kubelet[1440]: I1002 19:30:55.339324 1440 kubelet.go:2113] "Starting kubelet main sync loop" Oct 2 19:30:55.339461 kubelet[1440]: E1002 19:30:55.339381 1440 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:30:55.339000 audit[1516]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1516 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.339000 audit[1516]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe97057e0 a2=0 a3=1 items=0 ppid=1440 pid=1516 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.339000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:30:55.340000 audit[1517]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1517 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.340000 audit[1517]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9b40280 a2=0 a3=1 items=0 ppid=1440 pid=1517 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.340000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:30:55.342718 kubelet[1440]: W1002 19:30:55.342685 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User 
"system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:30:55.342718 kubelet[1440]: E1002 19:30:55.342712 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:30:55.341000 audit[1518]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1518 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:30:55.341000 audit[1518]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffca440cd0 a2=0 a3=1 items=0 ppid=1440 pid=1518 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:30:55.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:30:55.439839 kubelet[1440]: I1002 19:30:55.437646 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:30:55.439986 kubelet[1440]: E1002 19:30:55.439703 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7383730", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", 
APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144089392, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 437599592, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7383730" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:55.442919 kubelet[1440]: E1002 19:30:55.442892 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:30:55.443873 kubelet[1440]: E1002 19:30:55.443775 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7386430", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, 
time.October, 2, 19, 30, 55, 144100912, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 437611472, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7386430" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:55.521391 kubelet[1440]: E1002 19:30:55.521269 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7387308", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144104712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 437617712, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7387308" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:55.740728 kubelet[1440]: E1002 19:30:55.740618 1440 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:30:55.844080 kubelet[1440]: I1002 19:30:55.844052 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:30:55.845581 kubelet[1440]: E1002 19:30:55.845561 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:30:55.845833 kubelet[1440]: E1002 19:30:55.845752 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7383730", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144089392, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 844014192, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7383730" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:55.921499 kubelet[1440]: E1002 19:30:55.921409 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7386430", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144100912, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 844020592, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7386430" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:56.116822 kubelet[1440]: E1002 19:30:56.116707 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:56.121947 kubelet[1440]: E1002 19:30:56.121849 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7387308", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144104712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 844023672, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7387308" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:56.218402 kubelet[1440]: W1002 19:30:56.218357 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:30:56.218558 kubelet[1440]: E1002 19:30:56.218547 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:30:56.315202 kubelet[1440]: W1002 19:30:56.315176 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:30:56.315361 kubelet[1440]: E1002 19:30:56.315350 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:30:56.376131 kubelet[1440]: W1002 19:30:56.376034 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:30:56.376286 kubelet[1440]: E1002 19:30:56.376272 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:30:56.530147 kubelet[1440]: W1002 19:30:56.530112 1440 reflector.go:424] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:30:56.530147 kubelet[1440]: E1002 19:30:56.530146 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:30:56.542646 kubelet[1440]: E1002 19:30:56.542609 1440 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:30:56.647133 kubelet[1440]: I1002 19:30:56.647038 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:30:56.648632 kubelet[1440]: E1002 19:30:56.648608 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:30:56.649372 kubelet[1440]: E1002 19:30:56.649291 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7383730", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144089392, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 56, 646984632, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7383730" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:56.650688 kubelet[1440]: E1002 19:30:56.650608 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7386430", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144100912, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 56, 646994792, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", 
ReportingInstance:""}': 'events "10.0.0.12.178a612ce7386430" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:56.722339 kubelet[1440]: E1002 19:30:56.722239 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7387308", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144104712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 56, 646998512, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7387308" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:57.117487 kubelet[1440]: E1002 19:30:57.117361 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:58.118336 kubelet[1440]: E1002 19:30:58.118280 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:58.148242 kubelet[1440]: E1002 19:30:58.148209 1440 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:30:58.250023 kubelet[1440]: I1002 19:30:58.249999 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:30:58.251126 kubelet[1440]: E1002 19:30:58.250994 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7383730", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144089392, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 58, 249954832, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), 
Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7383730" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:30:58.251673 kubelet[1440]: E1002 19:30:58.251646 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:30:58.252037 kubelet[1440]: E1002 19:30:58.251971 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7386430", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144100912, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 58, 249967712, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7386430" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in 
the namespace "default"' (will not retry!) Oct 2 19:30:58.252910 kubelet[1440]: E1002 19:30:58.252845 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7387308", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144104712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 30, 58, 249970752, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7387308" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:30:58.408919 kubelet[1440]: W1002 19:30:58.408825 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:30:58.409075 kubelet[1440]: E1002 19:30:58.409063 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:30:58.829800 kubelet[1440]: W1002 19:30:58.829685 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:30:58.829950 kubelet[1440]: E1002 19:30:58.829935 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:30:59.118636 kubelet[1440]: E1002 19:30:59.118601 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:59.346041 kubelet[1440]: W1002 19:30:59.346014 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:30:59.346219 kubelet[1440]: E1002 19:30:59.346207 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the 
cluster scope Oct 2 19:30:59.446330 kubelet[1440]: W1002 19:30:59.446221 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:30:59.446330 kubelet[1440]: E1002 19:30:59.446259 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:31:00.119988 kubelet[1440]: E1002 19:31:00.119950 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:01.120673 kubelet[1440]: E1002 19:31:01.120634 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:01.351381 kubelet[1440]: E1002 19:31:01.351342 1440 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.0.0.12" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:31:01.452714 kubelet[1440]: I1002 19:31:01.452613 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:31:01.454648 kubelet[1440]: E1002 19:31:01.454618 1440 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.0.0.12" Oct 2 19:31:01.454832 kubelet[1440]: E1002 19:31:01.454632 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7383730", GenerateName:"", 
Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.0.0.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144089392, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 1, 452565392, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7383730" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:01.456025 kubelet[1440]: E1002 19:31:01.455964 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7386430", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.0.0.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144100912, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 1, 452570992, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7386430" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:01.457342 kubelet[1440]: E1002 19:31:01.457275 1440 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.0.0.12.178a612ce7387308", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.0.0.12", UID:"10.0.0.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.0.0.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.0.0.12"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 30, 55, 144104712, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 31, 1, 452574512, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.0.0.12.178a612ce7387308" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:31:02.121526 kubelet[1440]: E1002 19:31:02.121490 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:02.859075 kubelet[1440]: W1002 19:31:02.858995 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:31:02.859075 kubelet[1440]: E1002 19:31:02.859032 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:31:03.063398 kubelet[1440]: W1002 19:31:03.063342 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:31:03.063398 kubelet[1440]: E1002 19:31:03.063380 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.0.0.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:31:03.122308 kubelet[1440]: E1002 19:31:03.122204 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:03.853587 kubelet[1440]: W1002 19:31:03.853558 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:31:03.853746 kubelet[1440]: E1002 19:31:03.853735 1440 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:31:04.052367 kubelet[1440]: W1002 19:31:04.052337 1440 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:31:04.052568 kubelet[1440]: E1002 19:31:04.052557 1440 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:31:04.122540 kubelet[1440]: E1002 19:31:04.122441 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:05.104162 kubelet[1440]: I1002 19:31:05.104112 1440 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:31:05.123705 kubelet[1440]: E1002 19:31:05.123676 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:05.188065 kubelet[1440]: E1002 19:31:05.188028 1440 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.12\" not found" Oct 2 19:31:05.500931 kubelet[1440]: E1002 19:31:05.500826 1440 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.12" not found Oct 2 19:31:06.124298 kubelet[1440]: E1002 19:31:06.124263 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:31:06.541227 kubelet[1440]: E1002 19:31:06.541131 1440 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.0.0.12" not found Oct 2 19:31:07.125511 kubelet[1440]: E1002 19:31:07.125477 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:07.757367 kubelet[1440]: E1002 19:31:07.757323 1440 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.0.0.12\" not found" node="10.0.0.12" Oct 2 19:31:07.855545 kubelet[1440]: I1002 19:31:07.855517 1440 kubelet_node_status.go:70] "Attempting to register node" node="10.0.0.12" Oct 2 19:31:07.943156 kubelet[1440]: I1002 19:31:07.943096 1440 kubelet_node_status.go:73] "Successfully registered node" node="10.0.0.12" Oct 2 19:31:07.951584 kubelet[1440]: E1002 19:31:07.951540 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:08.052606 kubelet[1440]: E1002 19:31:08.052492 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:08.070712 sudo[1267]: pam_unix(sudo:session): session closed for user root Oct 2 19:31:08.069000 audit[1267]: USER_END pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:31:08.071828 kernel: kauditd_printk_skb: 474 callbacks suppressed Oct 2 19:31:08.071874 kernel: audit: type=1106 audit(1696275068.069:573): pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:31:08.069000 audit[1267]: CRED_DISP pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:31:08.073599 sshd[1263]: pam_unix(sshd:session): session closed for user core Oct 2 19:31:08.075889 kernel: audit: type=1104 audit(1696275068.069:574): pid=1267 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:31:08.074000 audit[1263]: USER_END pid=1263 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:31:08.078932 kernel: audit: type=1106 audit(1696275068.074:575): pid=1263 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:31:08.079231 systemd[1]: sshd@6-10.0.0.12:22-10.0.0.1:48050.service: Deactivated successfully. Oct 2 19:31:08.074000 audit[1263]: CRED_DISP pid=1263 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:31:08.080023 systemd[1]: session-7.scope: Deactivated successfully. 
Oct 2 19:31:08.082494 kernel: audit: type=1104 audit(1696275068.074:576): pid=1263 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/sbin/sshd" hostname=10.0.0.1 addr=10.0.0.1 terminal=ssh res=success' Oct 2 19:31:08.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:48050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:08.083136 systemd-logind[1126]: Session 7 logged out. Waiting for processes to exit. Oct 2 19:31:08.084765 kernel: audit: type=1131 audit(1696275068.078:577): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.0.0.12:22-10.0.0.1:48050 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:31:08.085510 systemd-logind[1126]: Removed session 7. Oct 2 19:31:08.126442 kubelet[1440]: E1002 19:31:08.126408 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:08.153340 kubelet[1440]: E1002 19:31:08.153292 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:08.254017 kubelet[1440]: E1002 19:31:08.253980 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:08.354706 kubelet[1440]: E1002 19:31:08.354666 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:08.455274 kubelet[1440]: E1002 19:31:08.455198 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:08.555405 kubelet[1440]: E1002 19:31:08.555339 1440 kubelet_node_status.go:458] "Error getting the current node from lister" 
err="node \"10.0.0.12\" not found" Oct 2 19:31:08.656368 kubelet[1440]: E1002 19:31:08.656011 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:08.756760 kubelet[1440]: E1002 19:31:08.756684 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:08.857436 kubelet[1440]: E1002 19:31:08.857350 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:08.958453 kubelet[1440]: E1002 19:31:08.958039 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:09.058957 kubelet[1440]: E1002 19:31:09.058818 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:09.127867 kubelet[1440]: E1002 19:31:09.127798 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:09.158999 kubelet[1440]: E1002 19:31:09.158946 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:09.260069 kubelet[1440]: E1002 19:31:09.259689 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:09.359941 kubelet[1440]: E1002 19:31:09.359905 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:09.460945 kubelet[1440]: E1002 19:31:09.460910 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:09.561835 kubelet[1440]: E1002 19:31:09.561482 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:09.662245 kubelet[1440]: E1002 19:31:09.662197 1440 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:09.762801 kubelet[1440]: E1002 19:31:09.762754 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:09.863530 kubelet[1440]: E1002 19:31:09.863491 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:09.964128 kubelet[1440]: E1002 19:31:09.964091 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:10.065018 kubelet[1440]: E1002 19:31:10.064982 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:10.128529 kubelet[1440]: E1002 19:31:10.128149 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:10.165467 kubelet[1440]: E1002 19:31:10.165423 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:10.266611 kubelet[1440]: E1002 19:31:10.266559 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:10.366947 kubelet[1440]: E1002 19:31:10.366914 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:10.468369 kubelet[1440]: E1002 19:31:10.467987 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:10.568812 kubelet[1440]: E1002 19:31:10.568742 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:10.669295 kubelet[1440]: E1002 19:31:10.669090 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" 
not found" Oct 2 19:31:10.770559 kubelet[1440]: E1002 19:31:10.770174 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:10.870998 kubelet[1440]: E1002 19:31:10.870939 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:10.971753 kubelet[1440]: E1002 19:31:10.971689 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:11.074011 kubelet[1440]: E1002 19:31:11.072168 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:11.128888 kubelet[1440]: E1002 19:31:11.128828 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:11.173156 kubelet[1440]: E1002 19:31:11.173094 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:11.274055 kubelet[1440]: E1002 19:31:11.273989 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:11.374865 kubelet[1440]: E1002 19:31:11.374831 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:11.476045 kubelet[1440]: E1002 19:31:11.475998 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:11.576642 kubelet[1440]: E1002 19:31:11.576604 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:11.678073 kubelet[1440]: E1002 19:31:11.677012 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:11.778775 kubelet[1440]: E1002 19:31:11.777723 1440 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:11.879765 kubelet[1440]: E1002 19:31:11.878758 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:11.979892 kubelet[1440]: E1002 19:31:11.979536 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:12.080101 kubelet[1440]: E1002 19:31:12.080065 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:12.129928 kubelet[1440]: E1002 19:31:12.129887 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:12.181212 kubelet[1440]: E1002 19:31:12.181148 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:12.281603 kubelet[1440]: E1002 19:31:12.281220 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:12.384867 kubelet[1440]: E1002 19:31:12.384787 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:12.485015 kubelet[1440]: E1002 19:31:12.484952 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:12.586084 kubelet[1440]: E1002 19:31:12.585463 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:12.686517 kubelet[1440]: E1002 19:31:12.686473 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:12.787443 kubelet[1440]: E1002 19:31:12.786932 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" 
not found" Oct 2 19:31:12.887946 kubelet[1440]: E1002 19:31:12.887880 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:12.988473 kubelet[1440]: E1002 19:31:12.988424 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:13.088993 kubelet[1440]: E1002 19:31:13.088947 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:13.130660 kubelet[1440]: E1002 19:31:13.130627 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:13.190415 kubelet[1440]: E1002 19:31:13.190016 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:13.290654 kubelet[1440]: E1002 19:31:13.290606 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:13.391336 kubelet[1440]: E1002 19:31:13.391273 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:13.492164 kubelet[1440]: E1002 19:31:13.491789 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:13.592308 kubelet[1440]: E1002 19:31:13.592243 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:13.692991 kubelet[1440]: E1002 19:31:13.692940 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:13.793857 kubelet[1440]: E1002 19:31:13.793471 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:13.894581 kubelet[1440]: E1002 19:31:13.894077 1440 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:13.995000 kubelet[1440]: E1002 19:31:13.994935 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:14.095677 kubelet[1440]: E1002 19:31:14.095631 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:14.131560 kubelet[1440]: E1002 19:31:14.131508 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:14.196037 kubelet[1440]: E1002 19:31:14.195986 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:14.296966 kubelet[1440]: E1002 19:31:14.296911 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:14.397987 kubelet[1440]: E1002 19:31:14.397596 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:14.498244 kubelet[1440]: E1002 19:31:14.498166 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:14.598902 kubelet[1440]: E1002 19:31:14.598841 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:14.700808 kubelet[1440]: E1002 19:31:14.699574 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:14.800402 kubelet[1440]: E1002 19:31:14.800333 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:14.901129 kubelet[1440]: E1002 19:31:14.901074 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" 
not found" Oct 2 19:31:15.002258 kubelet[1440]: E1002 19:31:15.001893 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:15.103066 kubelet[1440]: E1002 19:31:15.103005 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:15.116562 kubelet[1440]: E1002 19:31:15.116509 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:15.132104 kubelet[1440]: E1002 19:31:15.132043 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:15.188676 kubelet[1440]: E1002 19:31:15.188615 1440 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.0.0.12\" not found" Oct 2 19:31:15.203790 kubelet[1440]: E1002 19:31:15.203743 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:15.304991 kubelet[1440]: E1002 19:31:15.304547 1440 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.0.0.12\" not found" Oct 2 19:31:15.405558 kubelet[1440]: I1002 19:31:15.405530 1440 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:31:15.405823 env[1138]: time="2023-10-02T19:31:15.405787392Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Oct 2 19:31:15.406097 kubelet[1440]: I1002 19:31:15.405943 1440 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:31:16.127927 kubelet[1440]: I1002 19:31:16.127891 1440 apiserver.go:52] "Watching apiserver" Oct 2 19:31:16.131282 kubelet[1440]: I1002 19:31:16.131219 1440 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:31:16.131368 kubelet[1440]: I1002 19:31:16.131295 1440 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:31:16.132558 kubelet[1440]: E1002 19:31:16.132138 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:16.141811 systemd[1]: Created slice kubepods-burstable-pod0c4d5228_dda2_4408_861d_6ecd5514d1a3.slice. Oct 2 19:31:16.189530 systemd[1]: Created slice kubepods-besteffort-podddfeb056_d856_4feb_aceb_53d573d00838.slice. Oct 2 19:31:16.225783 kubelet[1440]: I1002 19:31:16.225745 1440 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Oct 2 19:31:16.253109 kubelet[1440]: I1002 19:31:16.253071 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-xtables-lock\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253109 kubelet[1440]: I1002 19:31:16.253117 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ddfeb056-d856-4feb-aceb-53d573d00838-xtables-lock\") pod \"kube-proxy-2nqzn\" (UID: \"ddfeb056-d856-4feb-aceb-53d573d00838\") " pod="kube-system/kube-proxy-2nqzn" Oct 2 19:31:16.253282 kubelet[1440]: I1002 19:31:16.253139 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/ddfeb056-d856-4feb-aceb-53d573d00838-lib-modules\") pod \"kube-proxy-2nqzn\" (UID: \"ddfeb056-d856-4feb-aceb-53d573d00838\") " pod="kube-system/kube-proxy-2nqzn" Oct 2 19:31:16.253282 kubelet[1440]: I1002 19:31:16.253161 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzjtl\" (UniqueName: \"kubernetes.io/projected/ddfeb056-d856-4feb-aceb-53d573d00838-kube-api-access-rzjtl\") pod \"kube-proxy-2nqzn\" (UID: \"ddfeb056-d856-4feb-aceb-53d573d00838\") " pod="kube-system/kube-proxy-2nqzn" Oct 2 19:31:16.253282 kubelet[1440]: I1002 19:31:16.253185 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-hostproc\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253282 kubelet[1440]: I1002 19:31:16.253207 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cilium-config-path\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253282 kubelet[1440]: I1002 19:31:16.253228 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cilium-run\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253282 kubelet[1440]: I1002 19:31:16.253245 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-bpf-maps\") pod \"cilium-lhssz\" (UID: 
\"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253435 kubelet[1440]: I1002 19:31:16.253264 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cilium-cgroup\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253435 kubelet[1440]: I1002 19:31:16.253313 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cni-path\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253435 kubelet[1440]: I1002 19:31:16.253368 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-host-proc-sys-kernel\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253435 kubelet[1440]: I1002 19:31:16.253424 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c4d5228-dda2-4408-861d-6ecd5514d1a3-hubble-tls\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253523 kubelet[1440]: I1002 19:31:16.253473 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/ddfeb056-d856-4feb-aceb-53d573d00838-kube-proxy\") pod \"kube-proxy-2nqzn\" (UID: \"ddfeb056-d856-4feb-aceb-53d573d00838\") " pod="kube-system/kube-proxy-2nqzn" Oct 2 19:31:16.253523 kubelet[1440]: I1002 19:31:16.253510 1440 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-etc-cni-netd\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253567 kubelet[1440]: I1002 19:31:16.253558 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-lib-modules\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253609 kubelet[1440]: I1002 19:31:16.253590 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c4d5228-dda2-4408-861d-6ecd5514d1a3-clustermesh-secrets\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253643 kubelet[1440]: I1002 19:31:16.253622 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-host-proc-sys-net\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253675 kubelet[1440]: I1002 19:31:16.253652 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z878n\" (UniqueName: \"kubernetes.io/projected/0c4d5228-dda2-4408-861d-6ecd5514d1a3-kube-api-access-z878n\") pod \"cilium-lhssz\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") " pod="kube-system/cilium-lhssz" Oct 2 19:31:16.253716 kubelet[1440]: I1002 19:31:16.253678 1440 reconciler.go:41] "Reconciler: start to sync state" Oct 2 19:31:16.503180 kubelet[1440]: E1002 19:31:16.502204 
1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:16.505922 env[1138]: time="2023-10-02T19:31:16.505410192Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nqzn,Uid:ddfeb056-d856-4feb-aceb-53d573d00838,Namespace:kube-system,Attempt:0,}" Oct 2 19:31:16.785487 kubelet[1440]: E1002 19:31:16.785108 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:16.785892 env[1138]: time="2023-10-02T19:31:16.785859392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lhssz,Uid:0c4d5228-dda2-4408-861d-6ecd5514d1a3,Namespace:kube-system,Attempt:0,}" Oct 2 19:31:17.132699 kubelet[1440]: E1002 19:31:17.132649 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:17.296437 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3893273109.mount: Deactivated successfully. 
Oct 2 19:31:17.301804 env[1138]: time="2023-10-02T19:31:17.301758432Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:17.302749 env[1138]: time="2023-10-02T19:31:17.302698032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:17.303719 env[1138]: time="2023-10-02T19:31:17.303684712Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:17.308071 env[1138]: time="2023-10-02T19:31:17.308040192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:7d46a07936af93fcce097459055f93ab07331509aa55f4a2a90d95a3ace1850e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:17.309106 env[1138]: time="2023-10-02T19:31:17.309054992Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:17.310855 env[1138]: time="2023-10-02T19:31:17.310826432Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:17.313354 env[1138]: time="2023-10-02T19:31:17.313323112Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:17.315253 env[1138]: time="2023-10-02T19:31:17.315224272Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:17.344549 env[1138]: time="2023-10-02T19:31:17.344480472Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:31:17.344549 env[1138]: time="2023-10-02T19:31:17.344522432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:31:17.344549 env[1138]: time="2023-10-02T19:31:17.344533112Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:31:17.344865 env[1138]: time="2023-10-02T19:31:17.344833032Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e pid=1542 runtime=io.containerd.runc.v2 Oct 2 19:31:17.345410 env[1138]: time="2023-10-02T19:31:17.345337992Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:31:17.345478 env[1138]: time="2023-10-02T19:31:17.345419712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:31:17.345478 env[1138]: time="2023-10-02T19:31:17.345448192Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:31:17.345646 env[1138]: time="2023-10-02T19:31:17.345604992Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/fd977bba28aaca5626a3363a6a3662414fde678315b3643dcdeb3b8170858007 pid=1544 runtime=io.containerd.runc.v2 Oct 2 19:31:17.369203 systemd[1]: Started cri-containerd-fd977bba28aaca5626a3363a6a3662414fde678315b3643dcdeb3b8170858007.scope. Oct 2 19:31:17.375493 systemd[1]: Started cri-containerd-f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e.scope. Oct 2 19:31:17.401000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.406490 kernel: audit: type=1400 audit(1696275077.401:578): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.406525 kernel: audit: type=1400 audit(1696275077.401:579): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.406540 kernel: audit: type=1400 audit(1696275077.401:580): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.401000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.401000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.408581 kernel: audit: type=1400 
audit(1696275077.401:581): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.401000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.410298 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:31:17.410357 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:31:17.410374 kernel: audit: backlog limit exceeded Oct 2 19:31:17.410429 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:31:17.410446 kernel: audit: audit_lost=2 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:31:17.410459 kernel: audit: backlog limit exceeded Oct 2 19:31:17.401000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.401000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.401000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.401000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.401000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit: BPF prog-id=67 op=LOAD Oct 2 19:31:17.403000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit[1561]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=40001c5b38 a2=10 a3=0 items=0 ppid=1542 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:17.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637623363613364363438663037663466343739633963346331303633 Oct 2 19:31:17.403000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit[1561]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001c55a0 a2=3c a3=0 items=0 ppid=1542 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:17.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637623363613364363438663037663466343739633963346331303633 Oct 2 19:31:17.403000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 
Oct 2 19:31:17.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.403000 audit: BPF prog-id=68 op=LOAD Oct 2 19:31:17.404000 audit: BPF prog-id=69 op=LOAD Oct 2 19:31:17.403000 audit[1561]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c58e0 a2=78 a3=0 items=0 ppid=1542 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:17.403000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637623363613364363438663037663466343739633963346331303633 Oct 2 19:31:17.404000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.404000 audit: BPF prog-id=70 op=LOAD Oct 2 19:31:17.404000 audit[1561]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=40001c5670 a2=78 a3=0 items=0 ppid=1542 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:17.404000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637623363613364363438663037663466343739633963346331303633 Oct 2 19:31:17.407000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:31:17.407000 audit: BPF prog-id=68 op=UNLOAD Oct 2 19:31:17.407000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.407000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:31:17.407000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.407000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.407000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.407000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.407000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.407000 audit[1561]: AVC avc: denied { perfmon } for pid=1561 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.407000 audit[1561]: AVC avc: denied { bpf } for pid=1561 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.407000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.407000 audit[1561]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001c5b40 a2=78 a3=0 items=0 ppid=1542 pid=1561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:17.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637623363613364363438663037663466343739633963346331303633 Oct 2 19:31:17.407000 audit[1562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000145b38 a2=10 a3=0 items=0 ppid=1544 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:17.407000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664393737626261323861616361353632366133333633613661333636 Oct 2 19:31:17.413000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.413000 audit[1562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=1544 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:17.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664393737626261323861616361353632366133333633613661333636 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit: BPF 
prog-id=72 op=LOAD Oct 2 19:31:17.414000 audit[1562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=1544 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:17.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664393737626261323861616361353632366133333633613661333636 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { perfmon } for 
pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.414000 audit: BPF prog-id=73 op=LOAD Oct 2 19:31:17.414000 audit[1562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=1544 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:17.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664393737626261323861616361353632366133333633613661333636 Oct 2 19:31:17.415000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:31:17.415000 audit: BPF prog-id=72 op=UNLOAD Oct 2 19:31:17.415000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.415000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.415000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:31:17.415000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.415000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.415000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.415000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.415000 audit[1562]: AVC avc: denied { perfmon } for pid=1562 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.415000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.415000 audit[1562]: AVC avc: denied { bpf } for pid=1562 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:17.415000 audit: BPF prog-id=74 op=LOAD Oct 2 19:31:17.415000 audit[1562]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=1544 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:17.415000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6664393737626261323861616361353632366133333633613661333636 Oct 2 19:31:17.430427 env[1138]: time="2023-10-02T19:31:17.430339872Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lhssz,Uid:0c4d5228-dda2-4408-861d-6ecd5514d1a3,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\"" Oct 2 19:31:17.431938 kubelet[1440]: E1002 19:31:17.431433 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:17.433138 env[1138]: time="2023-10-02T19:31:17.433090472Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Oct 2 19:31:17.435992 env[1138]: time="2023-10-02T19:31:17.435953272Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-2nqzn,Uid:ddfeb056-d856-4feb-aceb-53d573d00838,Namespace:kube-system,Attempt:0,} returns sandbox id \"fd977bba28aaca5626a3363a6a3662414fde678315b3643dcdeb3b8170858007\"" Oct 2 19:31:17.437131 kubelet[1440]: E1002 19:31:17.436964 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:18.133209 kubelet[1440]: E1002 19:31:18.133169 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:19.134064 kubelet[1440]: E1002 19:31:19.134013 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:20.134839 kubelet[1440]: E1002 19:31:20.134789 1440 
file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:21.135733 kubelet[1440]: E1002 19:31:21.135685 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:21.212670 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3939937910.mount: Deactivated successfully. Oct 2 19:31:22.136326 kubelet[1440]: E1002 19:31:22.136281 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:23.137152 kubelet[1440]: E1002 19:31:23.137118 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:23.623822 env[1138]: time="2023-10-02T19:31:23.623768709Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:23.624872 env[1138]: time="2023-10-02T19:31:23.624845789Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:23.626182 env[1138]: time="2023-10-02T19:31:23.626149949Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:23.627071 env[1138]: time="2023-10-02T19:31:23.627038349Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:b69cb5ebb22d9b4f9c460a6587a0c4285d57a2bff59e4e439ad065a3f684948f\"" Oct 2 19:31:23.627788 env[1138]: 
time="2023-10-02T19:31:23.627757789Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.9\"" Oct 2 19:31:23.629210 env[1138]: time="2023-10-02T19:31:23.629179589Z" level=info msg="CreateContainer within sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:31:23.640235 env[1138]: time="2023-10-02T19:31:23.640188670Z" level=info msg="CreateContainer within sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c\"" Oct 2 19:31:23.641047 env[1138]: time="2023-10-02T19:31:23.641022870Z" level=info msg="StartContainer for \"2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c\"" Oct 2 19:31:23.661953 systemd[1]: Started cri-containerd-2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c.scope. Oct 2 19:31:23.681995 systemd[1]: cri-containerd-2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c.scope: Deactivated successfully. Oct 2 19:31:23.685577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c-rootfs.mount: Deactivated successfully. 
Oct 2 19:31:23.823707 env[1138]: time="2023-10-02T19:31:23.823652560Z" level=info msg="shim disconnected" id=2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c Oct 2 19:31:23.823707 env[1138]: time="2023-10-02T19:31:23.823710280Z" level=warning msg="cleaning up after shim disconnected" id=2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c namespace=k8s.io Oct 2 19:31:23.823707 env[1138]: time="2023-10-02T19:31:23.823720560Z" level=info msg="cleaning up dead shim" Oct 2 19:31:23.832986 env[1138]: time="2023-10-02T19:31:23.832941281Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1643 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:31:23Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:31:23.833283 env[1138]: time="2023-10-02T19:31:23.833192601Z" level=error msg="copy shim log" error="read /proc/self/fd/44: file already closed" Oct 2 19:31:23.833434 env[1138]: time="2023-10-02T19:31:23.833402161Z" level=error msg="Failed to pipe stdout of container \"2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c\"" error="reading from a closed fifo" Oct 2 19:31:23.833519 env[1138]: time="2023-10-02T19:31:23.833465481Z" level=error msg="Failed to pipe stderr of container \"2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c\"" error="reading from a closed fifo" Oct 2 19:31:23.834731 env[1138]: time="2023-10-02T19:31:23.834651201Z" level=error msg="StartContainer for \"2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:31:23.835140 kubelet[1440]: E1002 19:31:23.834929 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c" Oct 2 19:31:23.835140 kubelet[1440]: E1002 19:31:23.835064 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:31:23.835140 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:31:23.835140 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:31:23.835300 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-z878n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:31:23.835376 kubelet[1440]: E1002 19:31:23.835104 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 Oct 2 19:31:24.137784 kubelet[1440]: E1002 19:31:24.137729 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:24.389244 kubelet[1440]: E1002 19:31:24.389122 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:24.391436 env[1138]: time="2023-10-02T19:31:24.391392353Z" level=info msg="CreateContainer within sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:31:24.403743 env[1138]: time="2023-10-02T19:31:24.403659794Z" 
level=info msg="CreateContainer within sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7\"" Oct 2 19:31:24.404307 env[1138]: time="2023-10-02T19:31:24.404158474Z" level=info msg="StartContainer for \"7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7\"" Oct 2 19:31:24.422973 systemd[1]: Started cri-containerd-7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7.scope. Oct 2 19:31:24.442256 systemd[1]: cri-containerd-7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7.scope: Deactivated successfully. Oct 2 19:31:24.467780 env[1138]: time="2023-10-02T19:31:24.467708237Z" level=info msg="shim disconnected" id=7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7 Oct 2 19:31:24.468039 env[1138]: time="2023-10-02T19:31:24.468021197Z" level=warning msg="cleaning up after shim disconnected" id=7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7 namespace=k8s.io Oct 2 19:31:24.468106 env[1138]: time="2023-10-02T19:31:24.468091677Z" level=info msg="cleaning up dead shim" Oct 2 19:31:24.478769 env[1138]: time="2023-10-02T19:31:24.478713118Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1679 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:31:24Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:31:24.479207 env[1138]: time="2023-10-02T19:31:24.479139798Z" level=error msg="copy shim log" error="read /proc/self/fd/52: file already closed" Oct 2 19:31:24.480806 env[1138]: time="2023-10-02T19:31:24.479595638Z" level=error msg="Failed to pipe stdout of 
container \"7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7\"" error="reading from a closed fifo" Oct 2 19:31:24.482077 env[1138]: time="2023-10-02T19:31:24.482035078Z" level=error msg="Failed to pipe stderr of container \"7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7\"" error="reading from a closed fifo" Oct 2 19:31:24.483885 env[1138]: time="2023-10-02T19:31:24.483833718Z" level=error msg="StartContainer for \"7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:31:24.484127 kubelet[1440]: E1002 19:31:24.484093 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7" Oct 2 19:31:24.484238 kubelet[1440]: E1002 19:31:24.484216 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:31:24.484238 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:31:24.484238 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:31:24.484238 kubelet[1440]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-z878n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:31:24.484487 kubelet[1440]: E1002 19:31:24.484267 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 Oct 2 19:31:24.792058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3286625692.mount: Deactivated successfully. Oct 2 19:31:25.142738 kubelet[1440]: E1002 19:31:25.138868 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:25.171971 env[1138]: time="2023-10-02T19:31:25.171926516Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:25.174449 env[1138]: time="2023-10-02T19:31:25.174410556Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0393a046c6ac3c39d56f9b536c02216184f07904e0db26449490d0cb1d1fe343,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:25.177508 env[1138]: time="2023-10-02T19:31:25.177471716Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:25.178680 env[1138]: time="2023-10-02T19:31:25.178641916Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d8c8e3e8fe630c3f2d84a22722d4891343196483ac4cc02c1ba9345b1bfc8a3d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:31:25.179116 env[1138]: time="2023-10-02T19:31:25.179075116Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.9\" returns image reference \"sha256:0393a046c6ac3c39d56f9b536c02216184f07904e0db26449490d0cb1d1fe343\"" Oct 2 19:31:25.181827 env[1138]: time="2023-10-02T19:31:25.181792356Z" level=info msg="CreateContainer within sandbox \"fd977bba28aaca5626a3363a6a3662414fde678315b3643dcdeb3b8170858007\" for container 
&ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:31:25.197999 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount98528648.mount: Deactivated successfully. Oct 2 19:31:25.206782 env[1138]: time="2023-10-02T19:31:25.206717958Z" level=info msg="CreateContainer within sandbox \"fd977bba28aaca5626a3363a6a3662414fde678315b3643dcdeb3b8170858007\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"365da07e376951517198d64012e1442ffb7c58867cf7d86d02b143734cfc7bc7\"" Oct 2 19:31:25.207492 env[1138]: time="2023-10-02T19:31:25.207456078Z" level=info msg="StartContainer for \"365da07e376951517198d64012e1442ffb7c58867cf7d86d02b143734cfc7bc7\"" Oct 2 19:31:25.226208 systemd[1]: Started cri-containerd-365da07e376951517198d64012e1442ffb7c58867cf7d86d02b143734cfc7bc7.scope. Oct 2 19:31:25.269587 kernel: kauditd_printk_skb: 108 callbacks suppressed Oct 2 19:31:25.269707 kernel: audit: type=1400 audit(1696275085.266:614): avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.266000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.266000 audit[1700]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=1544 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.272502 kernel: audit: type=1300 audit(1696275085.266:614): arch=c00000b7 syscall=280 success=yes exit=15 a0=0 a1=40001455a0 a2=3c a3=0 items=0 ppid=1544 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.272557 kernel: audit: type=1327 audit(1696275085.266:614): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336356461303765333736393531353137313938643634303132653134 Oct 2 19:31:25.266000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336356461303765333736393531353137313938643634303132653134 Oct 2 19:31:25.274734 kernel: audit: type=1400 audit(1696275085.269:615): avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.269000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.276320 kernel: audit: type=1400 audit(1696275085.269:615): avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.269000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.278052 kernel: audit: type=1400 audit(1696275085.269:615): avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.269000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.269000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.281209 kernel: audit: type=1400 audit(1696275085.269:615): avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.269000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.283381 kernel: audit: type=1400 audit(1696275085.269:615): avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.283462 kernel: audit: type=1400 audit(1696275085.269:615): avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.269000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.269000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.286554 kernel: audit: type=1400 audit(1696275085.269:615): avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.269000 audit[1700]: AVC avc: denied { 
perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.269000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.269000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.269000 audit: BPF prog-id=75 op=LOAD Oct 2 19:31:25.269000 audit[1700]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=40001458e0 a2=78 a3=0 items=0 ppid=1544 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.269000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336356461303765333736393531353137313938643634303132653134 Oct 2 19:31:25.271000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.271000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.271000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.271000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.271000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.271000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.271000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.271000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.271000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.271000 audit: BPF prog-id=76 op=LOAD Oct 2 19:31:25.271000 audit[1700]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=17 a0=5 a1=4000145670 a2=78 a3=0 items=0 ppid=1544 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.271000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336356461303765333736393531353137313938643634303132653134 Oct 2 19:31:25.273000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:31:25.273000 audit: BPF prog-id=75 op=UNLOAD Oct 2 
19:31:25.273000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.273000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.273000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.273000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.273000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.273000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.273000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.273000 audit[1700]: AVC avc: denied { perfmon } for pid=1700 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.273000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.273000 audit[1700]: AVC avc: denied { bpf } for pid=1700 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:31:25.273000 audit: BPF prog-id=77 op=LOAD Oct 2 19:31:25.273000 audit[1700]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=15 a0=5 a1=4000145b40 a2=78 a3=0 items=0 ppid=1544 pid=1700 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3336356461303765333736393531353137313938643634303132653134 Oct 2 19:31:25.293290 env[1138]: time="2023-10-02T19:31:25.292694922Z" level=info msg="StartContainer for \"365da07e376951517198d64012e1442ffb7c58867cf7d86d02b143734cfc7bc7\" returns successfully" Oct 2 19:31:25.377000 audit[1752]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1752 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.377000 audit[1752]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffe103540 a2=0 a3=ffffbc0626c0 items=0 ppid=1712 pid=1752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.377000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:31:25.378000 audit[1753]: NETFILTER_CFG table=mangle:36 family=10 entries=1 op=nft_register_chain pid=1753 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.378000 audit[1753]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffea4e1380 a2=0 a3=ffffba2de6c0 items=0 
ppid=1712 pid=1753 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.378000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:31:25.379000 audit[1754]: NETFILTER_CFG table=nat:37 family=2 entries=1 op=nft_register_chain pid=1754 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.379000 audit[1754]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdd292e00 a2=0 a3=ffffad7f16c0 items=0 ppid=1712 pid=1754 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.379000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:31:25.380000 audit[1756]: NETFILTER_CFG table=nat:38 family=10 entries=1 op=nft_register_chain pid=1756 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.380000 audit[1756]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe06648e0 a2=0 a3=ffffadaba6c0 items=0 ppid=1712 pid=1756 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:31:25.381000 audit[1757]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_chain pid=1757 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.381000 audit[1757]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffffe28eb0 a2=0 
a3=ffffb63e16c0 items=0 ppid=1712 pid=1757 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:31:25.382000 audit[1758]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1758 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.382000 audit[1758]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff83f2de0 a2=0 a3=ffff938e96c0 items=0 ppid=1712 pid=1758 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.382000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:31:25.393817 kubelet[1440]: E1002 19:31:25.392521 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:25.393817 kubelet[1440]: I1002 19:31:25.393092 1440 scope.go:115] "RemoveContainer" containerID="2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c" Oct 2 19:31:25.393954 kubelet[1440]: I1002 19:31:25.393868 1440 scope.go:115] "RemoveContainer" containerID="2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c" Oct 2 19:31:25.396521 env[1138]: time="2023-10-02T19:31:25.396475408Z" level=info msg="RemoveContainer for \"2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c\"" Oct 2 19:31:25.397030 env[1138]: time="2023-10-02T19:31:25.397000368Z" level=info msg="RemoveContainer for 
\"2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c\"" Oct 2 19:31:25.397141 env[1138]: time="2023-10-02T19:31:25.397083488Z" level=error msg="RemoveContainer for \"2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c\" failed" error="failed to set removing state for container \"2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c\": container is already in removing state" Oct 2 19:31:25.397276 kubelet[1440]: E1002 19:31:25.397201 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c\": container is already in removing state" containerID="2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c" Oct 2 19:31:25.397276 kubelet[1440]: E1002 19:31:25.397243 1440 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c": container is already in removing state; Skipping pod "cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)" Oct 2 19:31:25.397370 kubelet[1440]: E1002 19:31:25.397294 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:25.397528 kubelet[1440]: E1002 19:31:25.397506 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 Oct 2 19:31:25.399029 env[1138]: time="2023-10-02T19:31:25.398984888Z" level=info msg="RemoveContainer for 
\"2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c\" returns successfully" Oct 2 19:31:25.480000 audit[1759]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1759 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.480000 audit[1759]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd857c6f0 a2=0 a3=ffff957456c0 items=0 ppid=1712 pid=1759 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.480000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:31:25.483000 audit[1761]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1761 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.483000 audit[1761]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff0e29230 a2=0 a3=ffffad4896c0 items=0 ppid=1712 pid=1761 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.483000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:31:25.486000 audit[1764]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1764 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.486000 audit[1764]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffcb16a6d0 a2=0 a3=ffff900596c0 items=0 ppid=1712 pid=1764 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.486000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:31:25.487000 audit[1765]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1765 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.487000 audit[1765]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde372910 a2=0 a3=ffffbdeb86c0 items=0 ppid=1712 pid=1765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.487000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:31:25.489000 audit[1767]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1767 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.489000 audit[1767]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe1dd08b0 a2=0 a3=ffffaaaeb6c0 items=0 ppid=1712 pid=1767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.489000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:31:25.490000 audit[1768]: NETFILTER_CFG table=filter:46 family=2 entries=1 
op=nft_register_chain pid=1768 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.490000 audit[1768]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe67ef240 a2=0 a3=ffffa12c96c0 items=0 ppid=1712 pid=1768 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:31:25.493000 audit[1770]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1770 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.493000 audit[1770]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdb7b3be0 a2=0 a3=ffff88c236c0 items=0 ppid=1712 pid=1770 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.493000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:31:25.497000 audit[1773]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1773 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.497000 audit[1773]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffc7583c60 a2=0 a3=ffffb7eb86c0 items=0 ppid=1712 pid=1773 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.497000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:31:25.498000 audit[1774]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1774 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.498000 audit[1774]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffefe50b90 a2=0 a3=ffffaaac96c0 items=0 ppid=1712 pid=1774 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.498000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:31:25.500000 audit[1776]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1776 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.500000 audit[1776]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffc1547c0 a2=0 a3=ffffbf2016c0 items=0 ppid=1712 pid=1776 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.500000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:31:25.502000 audit[1777]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1777 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.502000 audit[1777]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9a02cc0 
a2=0 a3=ffff89c836c0 items=0 ppid=1712 pid=1777 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.502000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:31:25.504000 audit[1779]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1779 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.504000 audit[1779]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc88535e0 a2=0 a3=ffffb7ad96c0 items=0 ppid=1712 pid=1779 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.504000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:31:25.508000 audit[1782]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1782 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.508000 audit[1782]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe3683a80 a2=0 a3=ffff980046c0 items=0 ppid=1712 pid=1782 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.508000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:31:25.512000 audit[1785]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1785 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.512000 audit[1785]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc9ee8f20 a2=0 a3=ffffbad0a6c0 items=0 ppid=1712 pid=1785 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.512000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:31:25.513000 audit[1786]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1786 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.513000 audit[1786]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd92b16c0 a2=0 a3=ffffb4b516c0 items=0 ppid=1712 pid=1786 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.513000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:31:25.516000 audit[1788]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1788 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.516000 audit[1788]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=600 a0=3 a1=fffff6e3c020 a2=0 a3=ffff9a9f06c0 items=0 ppid=1712 pid=1788 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.516000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:31:25.523000 audit[1791]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1791 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:31:25.523000 audit[1791]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffd207a640 a2=0 a3=ffff9501f6c0 items=0 ppid=1712 pid=1791 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.523000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:31:25.537811 kubelet[1440]: I1002 19:31:25.537282 1440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-2nqzn" podStartSLOduration=-9.223372018317541e+09 pod.CreationTimestamp="2023-10-02 19:31:07 +0000 UTC" firstStartedPulling="2023-10-02 19:31:17.437351032 +0000 UTC m=+23.379911481" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:31:25.464104931 +0000 UTC m=+31.406665340" watchObservedRunningTime="2023-10-02 19:31:25.537234935 +0000 UTC m=+31.479795384" Oct 2 19:31:25.537000 audit[1795]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1795 
subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:31:25.537000 audit[1795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=4028 a0=3 a1=ffffdd7a00c0 a2=0 a3=ffff9709e6c0 items=0 ppid=1712 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.537000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:31:25.546000 audit[1795]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1795 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:31:25.546000 audit[1795]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5340 a0=3 a1=ffffdd7a00c0 a2=0 a3=ffff9709e6c0 items=0 ppid=1712 pid=1795 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.546000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:31:25.548000 audit[1801]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1801 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.548000 audit[1801]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc0dd88d0 a2=0 a3=ffffacb896c0 items=0 ppid=1712 pid=1801 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.548000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:31:25.551000 audit[1803]: NETFILTER_CFG 
table=filter:61 family=10 entries=2 op=nft_register_chain pid=1803 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.551000 audit[1803]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc0ed6ae0 a2=0 a3=ffff9fd706c0 items=0 ppid=1712 pid=1803 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.551000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:31:25.555000 audit[1806]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1806 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.555000 audit[1806]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc2a93960 a2=0 a3=ffff8e4836c0 items=0 ppid=1712 pid=1806 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.555000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:31:25.556000 audit[1807]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1807 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.556000 audit[1807]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdcb19e30 a2=0 a3=ffffa13906c0 items=0 ppid=1712 pid=1807 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.556000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:31:25.559000 audit[1809]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1809 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.559000 audit[1809]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe8434f50 a2=0 a3=ffff7fb346c0 items=0 ppid=1712 pid=1809 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.559000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:31:25.560000 audit[1810]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1810 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.560000 audit[1810]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf810830 a2=0 a3=ffff9a0516c0 items=0 ppid=1712 pid=1810 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.560000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:31:25.563000 audit[1812]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1812 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.563000 audit[1812]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=744 a0=3 a1=ffffe0540bd0 a2=0 a3=ffffb8a246c0 items=0 ppid=1712 pid=1812 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.563000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:31:25.566000 audit[1815]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1815 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.566000 audit[1815]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc56751d0 a2=0 a3=ffff7fbe06c0 items=0 ppid=1712 pid=1815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.566000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:31:25.568000 audit[1816]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1816 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.568000 audit[1816]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec73db30 a2=0 a3=ffffaed2a6c0 items=0 ppid=1712 pid=1816 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.568000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:31:25.570000 audit[1818]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1818 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.570000 audit[1818]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffea7f94f0 a2=0 a3=ffff8bd166c0 items=0 ppid=1712 pid=1818 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.570000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:31:25.572000 audit[1819]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1819 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.572000 audit[1819]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee766110 a2=0 a3=ffff89b096c0 items=0 ppid=1712 pid=1819 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.572000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:31:25.575000 audit[1821]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1821 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.575000 audit[1821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffcabe7c10 a2=0 a3=ffff835376c0 items=0 ppid=1712 pid=1821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.575000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:31:25.580000 audit[1824]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1824 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.580000 audit[1824]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffebf98ab0 a2=0 a3=ffff844206c0 items=0 ppid=1712 pid=1824 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:31:25.584000 audit[1827]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1827 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.584000 audit[1827]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffddd8ece0 a2=0 a3=ffff83e0b6c0 items=0 ppid=1712 pid=1827 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.584000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:31:25.585000 audit[1828]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1828 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.585000 audit[1828]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd61957e0 a2=0 a3=ffffbf61c6c0 items=0 ppid=1712 pid=1828 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:31:25.588000 audit[1830]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1830 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.588000 audit[1830]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=600 a0=3 a1=ffffec41c510 a2=0 a3=ffffbbb026c0 items=0 ppid=1712 pid=1830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.588000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:31:25.592000 audit[1833]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1833 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:31:25.592000 audit[1833]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=608 a0=3 a1=ffffc96895c0 a2=0 
a3=ffff987df6c0 items=0 ppid=1712 pid=1833 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.592000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:31:25.602000 audit[1837]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:31:25.602000 audit[1837]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1916 a0=3 a1=ffffe6ac7550 a2=0 a3=ffffba8bb6c0 items=0 ppid=1712 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.602000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:31:25.606000 audit[1837]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1837 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:31:25.606000 audit[1837]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=1968 a0=3 a1=ffffe6ac7550 a2=0 a3=ffffba8bb6c0 items=0 ppid=1712 pid=1837 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:31:25.606000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:31:26.139838 kubelet[1440]: E1002 19:31:26.139800 1440 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:26.396225 kubelet[1440]: E1002 19:31:26.396111 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:26.396877 kubelet[1440]: E1002 19:31:26.396852 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:26.397049 kubelet[1440]: E1002 19:31:26.397037 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 Oct 2 19:31:26.929447 kubelet[1440]: W1002 19:31:26.929395 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c4d5228_dda2_4408_861d_6ecd5514d1a3.slice/cri-containerd-2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c.scope WatchSource:0}: container "2c75583eda12666f784a6508900d1ae336b8984c1cf30dac6cff3d478bc0ff6c" in namespace "k8s.io": not found Oct 2 19:31:27.140892 kubelet[1440]: E1002 19:31:27.140846 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:28.141033 kubelet[1440]: E1002 19:31:28.140999 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:29.142341 kubelet[1440]: E1002 19:31:29.142251 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:30.036364 kubelet[1440]: W1002 19:31:30.036326 1440 
manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c4d5228_dda2_4408_861d_6ecd5514d1a3.slice/cri-containerd-7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7.scope WatchSource:0}: task 7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7 not found: not found Oct 2 19:31:30.040867 kubelet[1440]: E1002 19:31:30.040833 1440 cadvisor_stats_provider.go:442] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podddfeb056_d856_4feb_aceb_53d573d00838.slice/cri-containerd-365da07e376951517198d64012e1442ffb7c58867cf7d86d02b143734cfc7bc7.scope\": RecentStats: unable to find data in memory cache]" Oct 2 19:31:30.143400 kubelet[1440]: E1002 19:31:30.143354 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:31.143960 kubelet[1440]: E1002 19:31:31.143917 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:32.144481 kubelet[1440]: E1002 19:31:32.144450 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:33.145176 kubelet[1440]: E1002 19:31:33.145140 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:34.146265 kubelet[1440]: E1002 19:31:34.146207 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:35.116635 kubelet[1440]: E1002 19:31:35.116588 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:35.146906 kubelet[1440]: E1002 19:31:35.146858 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, 
ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:35.445244 update_engine[1129]: I1002 19:31:35.445099 1129 update_attempter.cc:505] Updating boot flags... Oct 2 19:31:36.147814 kubelet[1440]: E1002 19:31:36.147783 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:37.148871 kubelet[1440]: E1002 19:31:37.148483 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:38.149379 kubelet[1440]: E1002 19:31:38.149351 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:38.340341 kubelet[1440]: E1002 19:31:38.340310 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:38.342811 env[1138]: time="2023-10-02T19:31:38.342766551Z" level=info msg="CreateContainer within sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:31:38.352325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634277338.mount: Deactivated successfully. Oct 2 19:31:38.355733 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3982254211.mount: Deactivated successfully. 
Oct 2 19:31:38.358972 env[1138]: time="2023-10-02T19:31:38.358913471Z" level=info msg="CreateContainer within sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93\"" Oct 2 19:31:38.359353 env[1138]: time="2023-10-02T19:31:38.359329591Z" level=info msg="StartContainer for \"170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93\"" Oct 2 19:31:38.377046 systemd[1]: Started cri-containerd-170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93.scope. Oct 2 19:31:38.397253 systemd[1]: cri-containerd-170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93.scope: Deactivated successfully. Oct 2 19:31:38.525988 env[1138]: time="2023-10-02T19:31:38.525877635Z" level=info msg="shim disconnected" id=170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93 Oct 2 19:31:38.526190 env[1138]: time="2023-10-02T19:31:38.526171075Z" level=warning msg="cleaning up after shim disconnected" id=170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93 namespace=k8s.io Oct 2 19:31:38.526248 env[1138]: time="2023-10-02T19:31:38.526235315Z" level=info msg="cleaning up dead shim" Oct 2 19:31:38.535025 env[1138]: time="2023-10-02T19:31:38.534980915Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:31:38Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1877 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:31:38Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:31:38.535453 env[1138]: time="2023-10-02T19:31:38.535379675Z" level=error msg="copy shim log" error="read /proc/self/fd/58: file already closed" Oct 2 19:31:38.535851 env[1138]: 
time="2023-10-02T19:31:38.535807715Z" level=error msg="Failed to pipe stdout of container \"170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93\"" error="reading from a closed fifo" Oct 2 19:31:38.535920 env[1138]: time="2023-10-02T19:31:38.535899235Z" level=error msg="Failed to pipe stderr of container \"170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93\"" error="reading from a closed fifo" Oct 2 19:31:38.538069 env[1138]: time="2023-10-02T19:31:38.538011435Z" level=error msg="StartContainer for \"170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:31:38.538901 kubelet[1440]: E1002 19:31:38.538273 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93" Oct 2 19:31:38.538901 kubelet[1440]: E1002 19:31:38.538374 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:31:38.538901 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:31:38.538901 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:31:38.539092 kubelet[1440]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-z878n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:31:38.539156 kubelet[1440]: E1002 19:31:38.538428 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error 
during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 Oct 2 19:31:39.150209 kubelet[1440]: E1002 19:31:39.150158 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:39.350827 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93-rootfs.mount: Deactivated successfully. Oct 2 19:31:39.420326 kubelet[1440]: I1002 19:31:39.419890 1440 scope.go:115] "RemoveContainer" containerID="7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7" Oct 2 19:31:39.420326 kubelet[1440]: I1002 19:31:39.420235 1440 scope.go:115] "RemoveContainer" containerID="7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7" Oct 2 19:31:39.421213 env[1138]: time="2023-10-02T19:31:39.421177375Z" level=info msg="RemoveContainer for \"7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7\"" Oct 2 19:31:39.421752 env[1138]: time="2023-10-02T19:31:39.421728495Z" level=info msg="RemoveContainer for \"7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7\"" Oct 2 19:31:39.421915 env[1138]: time="2023-10-02T19:31:39.421884575Z" level=error msg="RemoveContainer for \"7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7\" failed" error="failed to set removing state for container \"7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7\": container is already in removing state" Oct 2 19:31:39.422085 kubelet[1440]: E1002 19:31:39.422071 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7\": container is already in removing state" containerID="7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7" Oct 2 
19:31:39.422197 kubelet[1440]: I1002 19:31:39.422186 1440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7} err="rpc error: code = Unknown desc = failed to set removing state for container \"7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7\": container is already in removing state" Oct 2 19:31:39.423248 env[1138]: time="2023-10-02T19:31:39.423217495Z" level=info msg="RemoveContainer for \"7e3f28e42612def0f5bdc1e90912a379783a7bb82a98657d2c2716cd5a0e30f7\" returns successfully" Oct 2 19:31:39.423481 kubelet[1440]: E1002 19:31:39.423464 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:39.424668 kubelet[1440]: E1002 19:31:39.424648 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 Oct 2 19:31:40.151200 kubelet[1440]: E1002 19:31:40.151139 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:41.151473 kubelet[1440]: E1002 19:31:41.151429 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:41.633576 kubelet[1440]: W1002 19:31:41.633542 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c4d5228_dda2_4408_861d_6ecd5514d1a3.slice/cri-containerd-170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93.scope WatchSource:0}: task 
170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93 not found: not found Oct 2 19:31:42.152456 kubelet[1440]: E1002 19:31:42.152424 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:43.154002 kubelet[1440]: E1002 19:31:43.153953 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:44.154904 kubelet[1440]: E1002 19:31:44.154875 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:45.156011 kubelet[1440]: E1002 19:31:45.155975 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:46.156746 kubelet[1440]: E1002 19:31:46.156691 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:47.157545 kubelet[1440]: E1002 19:31:47.157480 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:48.158010 kubelet[1440]: E1002 19:31:48.157964 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:49.158853 kubelet[1440]: E1002 19:31:49.158785 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:50.159613 kubelet[1440]: E1002 19:31:50.159544 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:51.159896 kubelet[1440]: E1002 19:31:51.159866 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:52.160970 kubelet[1440]: E1002 19:31:52.160927 1440 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:53.161429 kubelet[1440]: E1002 19:31:53.161373 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:54.162514 kubelet[1440]: E1002 19:31:54.162457 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:54.339979 kubelet[1440]: E1002 19:31:54.339921 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:31:54.340155 kubelet[1440]: E1002 19:31:54.340129 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 Oct 2 19:31:55.117153 kubelet[1440]: E1002 19:31:55.117109 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:55.163368 kubelet[1440]: E1002 19:31:55.163327 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:56.164037 kubelet[1440]: E1002 19:31:56.163974 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:57.164159 kubelet[1440]: E1002 19:31:57.164102 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:58.164486 kubelet[1440]: E1002 19:31:58.164429 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:59.165609 kubelet[1440]: E1002 
19:31:59.165577 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:00.166090 kubelet[1440]: E1002 19:32:00.166055 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:01.167605 kubelet[1440]: E1002 19:32:01.167562 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:02.168042 kubelet[1440]: E1002 19:32:02.167985 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:03.168905 kubelet[1440]: E1002 19:32:03.168855 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:04.169653 kubelet[1440]: E1002 19:32:04.169628 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:05.170619 kubelet[1440]: E1002 19:32:05.170555 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:06.170930 kubelet[1440]: E1002 19:32:06.170883 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:07.171413 kubelet[1440]: E1002 19:32:07.171355 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:07.340683 kubelet[1440]: E1002 19:32:07.340655 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:32:07.343021 env[1138]: time="2023-10-02T19:32:07.342977691Z" level=info msg="CreateContainer within sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}"
Oct 2 19:32:07.351119 env[1138]: time="2023-10-02T19:32:07.351066811Z" level=info msg="CreateContainer within sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791\""
Oct 2 19:32:07.352355 env[1138]: time="2023-10-02T19:32:07.351523771Z" level=info msg="StartContainer for \"ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791\""
Oct 2 19:32:07.368625 systemd[1]: Started cri-containerd-ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791.scope.
Oct 2 19:32:07.412815 systemd[1]: cri-containerd-ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791.scope: Deactivated successfully.
Oct 2 19:32:07.416728 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791-rootfs.mount: Deactivated successfully.
Oct 2 19:32:07.421273 env[1138]: time="2023-10-02T19:32:07.421229171Z" level=info msg="shim disconnected" id=ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791
Oct 2 19:32:07.421563 env[1138]: time="2023-10-02T19:32:07.421488451Z" level=warning msg="cleaning up after shim disconnected" id=ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791 namespace=k8s.io
Oct 2 19:32:07.421651 env[1138]: time="2023-10-02T19:32:07.421635691Z" level=info msg="cleaning up dead shim"
Oct 2 19:32:07.429332 env[1138]: time="2023-10-02T19:32:07.429295371Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:07Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1920 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:32:07Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct 2 19:32:07.429709 env[1138]: time="2023-10-02T19:32:07.429656011Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed"
Oct 2 19:32:07.429887 env[1138]: time="2023-10-02T19:32:07.429857251Z" level=error msg="Failed to pipe stdout of container \"ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791\"" error="reading from a closed fifo"
Oct 2 19:32:07.429974 env[1138]: time="2023-10-02T19:32:07.429913091Z" level=error msg="Failed to pipe stderr of container \"ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791\"" error="reading from a closed fifo"
Oct 2 19:32:07.431495 env[1138]: time="2023-10-02T19:32:07.431447171Z" level=error msg="StartContainer for \"ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct 2 19:32:07.431858 kubelet[1440]: E1002 19:32:07.431678 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791"
Oct 2 19:32:07.431858 kubelet[1440]: E1002 19:32:07.431772 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct 2 19:32:07.431858 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct 2 19:32:07.431858 kubelet[1440]: rm /hostbin/cilium-mount
Oct 2 19:32:07.432011 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-z878n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct 2 19:32:07.432061 kubelet[1440]: E1002 19:32:07.431830 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3
Oct 2 19:32:07.463730 kubelet[1440]: I1002 19:32:07.463708 1440 scope.go:115] "RemoveContainer" containerID="170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93"
Oct 2 19:32:07.464023 kubelet[1440]: I1002 19:32:07.464006 1440 scope.go:115] "RemoveContainer" containerID="170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93"
Oct 2 19:32:07.465293 env[1138]: time="2023-10-02T19:32:07.465264651Z" level=info msg="RemoveContainer for \"170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93\""
Oct 2 19:32:07.465649 env[1138]: time="2023-10-02T19:32:07.465275611Z" level=info msg="RemoveContainer for \"170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93\""
Oct 2 19:32:07.465840 env[1138]: time="2023-10-02T19:32:07.465780451Z" level=error msg="RemoveContainer for \"170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93\" failed" error="failed to set removing state for container \"170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93\": container is already in removing state"
Oct 2 19:32:07.466202 kubelet[1440]: E1002 19:32:07.466180 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93\": container is already in removing state" containerID="170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93"
Oct 2 19:32:07.466279 kubelet[1440]: E1002 19:32:07.466217 1440 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93": container is already in removing state; Skipping pod "cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)"
Oct 2 19:32:07.466370 kubelet[1440]: E1002 19:32:07.466354 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:32:07.466873 kubelet[1440]: E1002 19:32:07.466848 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3
Oct 2 19:32:07.467773 env[1138]: time="2023-10-02T19:32:07.467703731Z" level=info msg="RemoveContainer for \"170867c52939ddf44bb2a8e4f0e908cc8633eddc85c81c42e437997e1bdc4f93\" returns successfully"
Oct 2 19:32:08.171593 kubelet[1440]: E1002 19:32:08.171516 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:09.172617 kubelet[1440]: E1002 19:32:09.172576 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:10.173592 kubelet[1440]: E1002 19:32:10.173563 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:10.526428 kubelet[1440]: W1002 19:32:10.526315 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c4d5228_dda2_4408_861d_6ecd5514d1a3.slice/cri-containerd-ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791.scope WatchSource:0}: task ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791 not found: not found
Oct 2 19:32:11.173974 kubelet[1440]: E1002 19:32:11.173950 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:12.174554 kubelet[1440]: E1002 19:32:12.174520 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:13.175488 kubelet[1440]: E1002 19:32:13.175450 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:14.176661 kubelet[1440]: E1002 19:32:14.176613 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:15.116763 kubelet[1440]: E1002 19:32:15.116724 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:15.177136 kubelet[1440]: E1002 19:32:15.177105 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:16.177900 kubelet[1440]: E1002 19:32:16.177839 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:17.178514 kubelet[1440]: E1002 19:32:17.178474 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:18.178834 kubelet[1440]: E1002 19:32:18.178779 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:19.179332 kubelet[1440]: E1002 19:32:19.179231 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:20.179729 kubelet[1440]: E1002 19:32:20.179672 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:20.340494 kubelet[1440]: E1002 19:32:20.340461 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:32:20.340694 kubelet[1440]: E1002 19:32:20.340674 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3
Oct 2 19:32:21.180235 kubelet[1440]: E1002 19:32:21.180185 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:22.181084 kubelet[1440]: E1002 19:32:22.181042 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:23.181728 kubelet[1440]: E1002 19:32:23.181685 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:24.182113 kubelet[1440]: E1002 19:32:24.182078 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:25.182310 kubelet[1440]: E1002 19:32:25.182276 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:26.183078 kubelet[1440]: E1002 19:32:26.183045 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:27.184331 kubelet[1440]: E1002 19:32:27.184258 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:28.185031 kubelet[1440]: E1002 19:32:28.184993 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:29.186406 kubelet[1440]: E1002 19:32:29.186349 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:30.187027 kubelet[1440]: E1002 19:32:30.186971 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:31.187871 kubelet[1440]: E1002 19:32:31.187828 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:31.340036 kubelet[1440]: E1002 19:32:31.340002 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:32:31.340247 kubelet[1440]: E1002 19:32:31.340228 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3
Oct 2 19:32:32.188885 kubelet[1440]: E1002 19:32:32.188849 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:33.189852 kubelet[1440]: E1002 19:32:33.189814 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:34.190625 kubelet[1440]: E1002 19:32:34.190587 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:35.117247 kubelet[1440]: E1002 19:32:35.117205 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:35.191092 kubelet[1440]: E1002 19:32:35.191045 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:36.191468 kubelet[1440]: E1002 19:32:36.191435 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:37.192876 kubelet[1440]: E1002 19:32:37.192840 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:38.193316 kubelet[1440]: E1002 19:32:38.193281 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:38.339939 kubelet[1440]: E1002 19:32:38.339909 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:32:39.194868 kubelet[1440]: E1002 19:32:39.194833 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:40.195748 kubelet[1440]: E1002 19:32:40.195712 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:41.197027 kubelet[1440]: E1002 19:32:41.196968 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:42.197746 kubelet[1440]: E1002 19:32:42.197708 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:43.198892 kubelet[1440]: E1002 19:32:43.198806 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:44.199814 kubelet[1440]: E1002 19:32:44.199759 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:45.200280 kubelet[1440]: E1002 19:32:45.200211 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:45.340900 kubelet[1440]: E1002 19:32:45.340862 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:32:45.341244 kubelet[1440]: E1002 19:32:45.341079 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3
Oct 2 19:32:46.200840 kubelet[1440]: E1002 19:32:46.200779 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:47.201495 kubelet[1440]: E1002 19:32:47.201420 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:48.202511 kubelet[1440]: E1002 19:32:48.202459 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:49.203424 kubelet[1440]: E1002 19:32:49.203378 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:50.204090 kubelet[1440]: E1002 19:32:50.204054 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:51.204829 kubelet[1440]: E1002 19:32:51.204766 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:52.205797 kubelet[1440]: E1002 19:32:52.205727 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:53.206692 kubelet[1440]: E1002 19:32:53.206640 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:54.207399 kubelet[1440]: E1002 19:32:54.207336 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:55.116744 kubelet[1440]: E1002 19:32:55.116699 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:55.198456 kubelet[1440]: E1002 19:32:55.198413 1440 kubelet_node_status.go:452] "Node not becoming ready in time after startup"
Oct 2 19:32:55.207886 kubelet[1440]: E1002 19:32:55.207835 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:55.213523 kubelet[1440]: E1002 19:32:55.213484 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:32:56.208000 kubelet[1440]: E1002 19:32:56.207947 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:57.208401 kubelet[1440]: E1002 19:32:57.208331 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:58.208834 kubelet[1440]: E1002 19:32:58.208782 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:32:59.209464 kubelet[1440]: E1002 19:32:59.209398 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:00.209764 kubelet[1440]: E1002 19:33:00.209723 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:00.214315 kubelet[1440]: E1002 19:33:00.214295 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:00.340654 kubelet[1440]: E1002 19:33:00.340620 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:33:00.342596 env[1138]: time="2023-10-02T19:33:00.342554162Z" level=info msg="CreateContainer within sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}"
Oct 2 19:33:00.350488 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3006280062.mount: Deactivated successfully.
Oct 2 19:33:00.353477 env[1138]: time="2023-10-02T19:33:00.353372646Z" level=info msg="CreateContainer within sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583\""
Oct 2 19:33:00.354046 env[1138]: time="2023-10-02T19:33:00.354020248Z" level=info msg="StartContainer for \"053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583\""
Oct 2 19:33:00.370163 systemd[1]: Started cri-containerd-053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583.scope.
Oct 2 19:33:00.398792 systemd[1]: cri-containerd-053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583.scope: Deactivated successfully.
Oct 2 19:33:00.406706 env[1138]: time="2023-10-02T19:33:00.406656544Z" level=info msg="shim disconnected" id=053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583
Oct 2 19:33:00.406706 env[1138]: time="2023-10-02T19:33:00.406707424Z" level=warning msg="cleaning up after shim disconnected" id=053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583 namespace=k8s.io
Oct 2 19:33:00.406880 env[1138]: time="2023-10-02T19:33:00.406717264Z" level=info msg="cleaning up dead shim"
Oct 2 19:33:00.414833 env[1138]: time="2023-10-02T19:33:00.414784897Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:33:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1961 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:33:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct 2 19:33:00.415108 env[1138]: time="2023-10-02T19:33:00.415040898Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed"
Oct 2 19:33:00.415277 env[1138]: time="2023-10-02T19:33:00.415231019Z" level=error msg="Failed to pipe stdout of container \"053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583\"" error="reading from a closed fifo"
Oct 2 19:33:00.415491 env[1138]: time="2023-10-02T19:33:00.415459260Z" level=error msg="Failed to pipe stderr of container \"053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583\"" error="reading from a closed fifo"
Oct 2 19:33:00.416976 env[1138]: time="2023-10-02T19:33:00.416928906Z" level=error msg="StartContainer for \"053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct 2 19:33:00.417160 kubelet[1440]: E1002 19:33:00.417140 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583"
Oct 2 19:33:00.417256 kubelet[1440]: E1002 19:33:00.417239 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct 2 19:33:00.417256 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct 2 19:33:00.417256 kubelet[1440]: rm /hostbin/cilium-mount
Oct 2 19:33:00.417256 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-z878n,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct 2 19:33:00.417447 kubelet[1440]: E1002 19:33:00.417277 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3
Oct 2 19:33:00.540216 kubelet[1440]: I1002 19:33:00.540117 1440 scope.go:115] "RemoveContainer" containerID="ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791"
Oct 2 19:33:00.540587 kubelet[1440]: I1002 19:33:00.540437 1440 scope.go:115] "RemoveContainer" containerID="ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791"
Oct 2 19:33:00.541084 env[1138]: time="2023-10-02T19:33:00.541037973Z" level=info msg="RemoveContainer for \"ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791\""
Oct 2 19:33:00.543505 env[1138]: time="2023-10-02T19:33:00.543414263Z" level=info msg="RemoveContainer for \"ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791\""
Oct 2 19:33:00.543625 env[1138]: time="2023-10-02T19:33:00.543520303Z" level=error msg="RemoveContainer for \"ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791\" failed" error="rpc error: code = NotFound desc = get container info: container \"ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791\" in namespace \"k8s.io\": not found"
Oct 2 19:33:00.544412 env[1138]: time="2023-10-02T19:33:00.543664344Z" level=info msg="RemoveContainer for \"ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791\" returns successfully"
Oct 2 19:33:00.544589 kubelet[1440]: E1002 19:33:00.544553 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = NotFound desc = get container info: container \"ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791\" in namespace \"k8s.io\": not found" containerID="ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791"
Oct 2 19:33:00.544589 kubelet[1440]: E1002 19:33:00.544582 1440 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = NotFound desc = get container info: container "ab0b27b6ac1a4ae69923448502118e5c043ef86e2d1381471e779b72c4cd4791" in namespace "k8s.io": not found; Skipping pod "cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)"
Oct 2 19:33:00.544669 kubelet[1440]: E1002 19:33:00.544648 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:33:00.544911 kubelet[1440]: E1002 19:33:00.544882 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3
Oct 2 19:33:01.210855 kubelet[1440]: E1002 19:33:01.210802 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:01.348275 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583-rootfs.mount: Deactivated successfully.
Oct 2 19:33:02.211263 kubelet[1440]: E1002 19:33:02.211216 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:03.211563 kubelet[1440]: E1002 19:33:03.211527 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:03.510770 kubelet[1440]: W1002 19:33:03.510660 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0c4d5228_dda2_4408_861d_6ecd5514d1a3.slice/cri-containerd-053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583.scope WatchSource:0}: task 053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583 not found: not found
Oct 2 19:33:04.212464 kubelet[1440]: E1002 19:33:04.211972 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:05.212779 kubelet[1440]: E1002 19:33:05.212746 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:05.215560 kubelet[1440]: E1002 19:33:05.215540 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:06.213609 kubelet[1440]: E1002 19:33:06.213570 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:07.214476 kubelet[1440]: E1002 19:33:07.214438 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:08.215358 kubelet[1440]: E1002 19:33:08.215292 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:09.216396 kubelet[1440]: E1002 19:33:09.216333 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:10.216577 kubelet[1440]: E1002 19:33:10.216548 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:10.216577 kubelet[1440]: E1002 19:33:10.216583 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:33:11.217561 kubelet[1440]: E1002 19:33:11.217513 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:12.218093 kubelet[1440]: E1002 19:33:12.218047 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:12.340757 kubelet[1440]: E1002 19:33:12.340727 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:33:12.340954 kubelet[1440]: E1002 19:33:12.340940 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3
Oct 2 19:33:13.219052 kubelet[1440]: E1002 19:33:13.218984 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:14.219939 kubelet[1440]: E1002 19:33:14.219900 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:33:15.118585 kubelet[1440]: E1002 19:33:15.118550 1440 file.go:104] "Unable to read config path"
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:15.218001 kubelet[1440]: E1002 19:33:15.217972 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:15.220236 kubelet[1440]: E1002 19:33:15.220212 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:16.221171 kubelet[1440]: E1002 19:33:16.221132 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:17.222149 kubelet[1440]: E1002 19:33:17.222114 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:18.223280 kubelet[1440]: E1002 19:33:18.223244 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:19.224721 kubelet[1440]: E1002 19:33:19.224673 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:20.222488 kubelet[1440]: E1002 19:33:20.222453 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:20.225599 kubelet[1440]: E1002 19:33:20.225574 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:21.226004 kubelet[1440]: E1002 19:33:21.225966 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:22.226530 kubelet[1440]: E1002 19:33:22.226474 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:33:23.227498 kubelet[1440]: E1002 19:33:23.227456 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:23.340730 kubelet[1440]: E1002 19:33:23.340699 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:23.340930 kubelet[1440]: E1002 19:33:23.340915 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 Oct 2 19:33:24.228794 kubelet[1440]: E1002 19:33:24.228748 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:25.223846 kubelet[1440]: E1002 19:33:25.223817 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:25.230174 kubelet[1440]: E1002 19:33:25.230155 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:26.230933 kubelet[1440]: E1002 19:33:26.230904 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:27.232083 kubelet[1440]: E1002 19:33:27.232053 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:28.233409 kubelet[1440]: E1002 19:33:28.233345 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:33:29.233765 kubelet[1440]: E1002 19:33:29.233730 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:30.230495 kubelet[1440]: E1002 19:33:30.225026 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:30.234214 kubelet[1440]: E1002 19:33:30.234165 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:31.234537 kubelet[1440]: E1002 19:33:31.234474 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:32.234921 kubelet[1440]: E1002 19:33:32.234864 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:33.235674 kubelet[1440]: E1002 19:33:33.235616 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:34.236805 kubelet[1440]: E1002 19:33:34.236738 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:34.340838 kubelet[1440]: E1002 19:33:34.340789 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:34.341024 kubelet[1440]: E1002 19:33:34.341003 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 Oct 2 19:33:35.116897 kubelet[1440]: E1002 
19:33:35.116822 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:35.225938 kubelet[1440]: E1002 19:33:35.225901 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:35.237106 kubelet[1440]: E1002 19:33:35.237071 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:36.237545 kubelet[1440]: E1002 19:33:36.237495 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:37.238586 kubelet[1440]: E1002 19:33:37.238529 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:38.239564 kubelet[1440]: E1002 19:33:38.239523 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:39.240551 kubelet[1440]: E1002 19:33:39.240508 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:40.226915 kubelet[1440]: E1002 19:33:40.226877 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:40.241107 kubelet[1440]: E1002 19:33:40.241075 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:41.241866 kubelet[1440]: E1002 19:33:41.241824 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:42.242855 kubelet[1440]: E1002 19:33:42.242787 1440 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:43.243299 kubelet[1440]: E1002 19:33:43.243266 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:44.244671 kubelet[1440]: E1002 19:33:44.244602 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:45.227831 kubelet[1440]: E1002 19:33:45.227805 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:45.245260 kubelet[1440]: E1002 19:33:45.245231 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:45.340452 kubelet[1440]: E1002 19:33:45.340416 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:45.340812 kubelet[1440]: E1002 19:33:45.340795 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 Oct 2 19:33:46.246212 kubelet[1440]: E1002 19:33:46.246174 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:47.246852 kubelet[1440]: E1002 19:33:47.246817 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:48.248228 kubelet[1440]: E1002 19:33:48.248162 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:33:49.249076 kubelet[1440]: E1002 19:33:49.248979 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:50.229112 kubelet[1440]: E1002 19:33:50.229075 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:50.249542 kubelet[1440]: E1002 19:33:50.249512 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:51.250649 kubelet[1440]: E1002 19:33:51.250611 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:52.251904 kubelet[1440]: E1002 19:33:52.251870 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:53.253361 kubelet[1440]: E1002 19:33:53.253299 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:54.254459 kubelet[1440]: E1002 19:33:54.254369 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:55.116368 kubelet[1440]: E1002 19:33:55.116311 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:55.229999 kubelet[1440]: E1002 19:33:55.229970 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:55.255243 kubelet[1440]: E1002 19:33:55.255206 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:56.255656 kubelet[1440]: 
E1002 19:33:56.255608 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:56.340717 kubelet[1440]: E1002 19:33:56.340683 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:56.340908 kubelet[1440]: E1002 19:33:56.340895 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 Oct 2 19:33:57.255817 kubelet[1440]: E1002 19:33:57.255775 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:58.256503 kubelet[1440]: E1002 19:33:58.256428 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:58.340564 kubelet[1440]: E1002 19:33:58.340531 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:33:59.256582 kubelet[1440]: E1002 19:33:59.256545 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:00.231196 kubelet[1440]: E1002 19:34:00.231173 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:00.257814 kubelet[1440]: E1002 19:34:00.257793 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:01.258493 
kubelet[1440]: E1002 19:34:01.258455 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:02.259539 kubelet[1440]: E1002 19:34:02.259481 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:03.260203 kubelet[1440]: E1002 19:34:03.260159 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:04.260673 kubelet[1440]: E1002 19:34:04.260607 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:05.232181 kubelet[1440]: E1002 19:34:05.232156 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:05.261597 kubelet[1440]: E1002 19:34:05.261561 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:06.262076 kubelet[1440]: E1002 19:34:06.262034 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:07.263330 kubelet[1440]: E1002 19:34:07.263235 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:08.264249 kubelet[1440]: E1002 19:34:08.264197 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:09.264679 kubelet[1440]: E1002 19:34:09.264644 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:10.233295 kubelet[1440]: E1002 19:34:10.233270 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:10.265826 kubelet[1440]: E1002 19:34:10.265784 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:10.340355 kubelet[1440]: E1002 19:34:10.340319 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:10.340614 kubelet[1440]: E1002 19:34:10.340598 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-lhssz_kube-system(0c4d5228-dda2-4408-861d-6ecd5514d1a3)\"" pod="kube-system/cilium-lhssz" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 Oct 2 19:34:11.266224 kubelet[1440]: E1002 19:34:11.266172 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:12.266605 kubelet[1440]: E1002 19:34:12.266574 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:13.267226 kubelet[1440]: E1002 19:34:13.267193 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:14.270454 kubelet[1440]: E1002 19:34:14.268604 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:15.116702 kubelet[1440]: E1002 19:34:15.116665 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:15.234284 kubelet[1440]: E1002 19:34:15.234244 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network 
plugin returns error: cni plugin not initialized" Oct 2 19:34:15.268760 kubelet[1440]: E1002 19:34:15.268722 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:15.790831 env[1138]: time="2023-10-02T19:34:15.790197154Z" level=info msg="StopPodSandbox for \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\"" Oct 2 19:34:15.790831 env[1138]: time="2023-10-02T19:34:15.790256834Z" level=info msg="Container to stop \"053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:34:15.792346 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e-shm.mount: Deactivated successfully. Oct 2 19:34:15.800000 audit: BPF prog-id=67 op=UNLOAD Oct 2 19:34:15.801952 systemd[1]: cri-containerd-f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e.scope: Deactivated successfully. Oct 2 19:34:15.802825 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:34:15.802880 kernel: audit: type=1334 audit(1696275255.800:664): prog-id=67 op=UNLOAD Oct 2 19:34:15.807000 audit: BPF prog-id=71 op=UNLOAD Oct 2 19:34:15.809439 kernel: audit: type=1334 audit(1696275255.807:665): prog-id=71 op=UNLOAD Oct 2 19:34:15.822763 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e-rootfs.mount: Deactivated successfully. 
Oct 2 19:34:15.833627 env[1138]: time="2023-10-02T19:34:15.833576732Z" level=info msg="shim disconnected" id=f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e
Oct 2 19:34:15.833627 env[1138]: time="2023-10-02T19:34:15.833626332Z" level=warning msg="cleaning up after shim disconnected" id=f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e namespace=k8s.io
Oct 2 19:34:15.833833 env[1138]: time="2023-10-02T19:34:15.833636573Z" level=info msg="cleaning up dead shim"
Oct 2 19:34:15.843244 env[1138]: time="2023-10-02T19:34:15.843196465Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1997 runtime=io.containerd.runc.v2\n"
Oct 2 19:34:15.843678 env[1138]: time="2023-10-02T19:34:15.843634106Z" level=info msg="TearDown network for sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" successfully"
Oct 2 19:34:15.843678 env[1138]: time="2023-10-02T19:34:15.843664026Z" level=info msg="StopPodSandbox for \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" returns successfully"
Oct 2 19:34:15.938861 kubelet[1440]: I1002 19:34:15.938787 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-xtables-lock\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.938861 kubelet[1440]: I1002 19:34:15.938842 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cilium-config-path\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.938861 kubelet[1440]: I1002 19:34:15.938869 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-host-proc-sys-kernel\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.939328 kubelet[1440]: I1002 19:34:15.938893 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-lib-modules\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.939328 kubelet[1440]: I1002 19:34:15.938914 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z878n\" (UniqueName: \"kubernetes.io/projected/0c4d5228-dda2-4408-861d-6ecd5514d1a3-kube-api-access-z878n\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.939328 kubelet[1440]: I1002 19:34:15.938942 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-hostproc\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.939328 kubelet[1440]: I1002 19:34:15.938960 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-etc-cni-netd\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.939328 kubelet[1440]: I1002 19:34:15.938976 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cilium-cgroup\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.939328 kubelet[1440]: I1002 19:34:15.938992 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-bpf-maps\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.939515 kubelet[1440]: I1002 19:34:15.939017 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cni-path\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.939515 kubelet[1440]: I1002 19:34:15.939036 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-host-proc-sys-net\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.939515 kubelet[1440]: I1002 19:34:15.939053 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cilium-run\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.939515 kubelet[1440]: I1002 19:34:15.939072 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c4d5228-dda2-4408-861d-6ecd5514d1a3-hubble-tls\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.939515 kubelet[1440]: I1002 19:34:15.939100 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c4d5228-dda2-4408-861d-6ecd5514d1a3-clustermesh-secrets\") pod \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\" (UID: \"0c4d5228-dda2-4408-861d-6ecd5514d1a3\") "
Oct 2 19:34:15.939515 kubelet[1440]: I1002 19:34:15.939181 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:34:15.939648 kubelet[1440]: I1002 19:34:15.939217 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:34:15.939648 kubelet[1440]: I1002 19:34:15.939424 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:34:15.939648 kubelet[1440]: I1002 19:34:15.939446 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:34:15.939648 kubelet[1440]: I1002 19:34:15.939461 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cni-path" (OuterVolumeSpecName: "cni-path") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:34:15.939648 kubelet[1440]: I1002 19:34:15.939476 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:34:15.939764 kubelet[1440]: I1002 19:34:15.939498 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:34:15.939764 kubelet[1440]: I1002 19:34:15.939726 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:34:15.939811 kubelet[1440]: I1002 19:34:15.939770 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:34:15.941681 kubelet[1440]: W1002 19:34:15.939874 1440 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/0c4d5228-dda2-4408-861d-6ecd5514d1a3/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:34:15.941681 kubelet[1440]: I1002 19:34:15.939897 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-hostproc" (OuterVolumeSpecName: "hostproc") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:34:15.941681 kubelet[1440]: I1002 19:34:15.941636 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Oct 2 19:34:15.943394 systemd[1]: var-lib-kubelet-pods-0c4d5228\x2ddda2\x2d4408\x2d861d\x2d6ecd5514d1a3-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dz878n.mount: Deactivated successfully.
Oct 2 19:34:15.943503 systemd[1]: var-lib-kubelet-pods-0c4d5228\x2ddda2\x2d4408\x2d861d\x2d6ecd5514d1a3-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Oct 2 19:34:15.944616 kubelet[1440]: I1002 19:34:15.944590 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0c4d5228-dda2-4408-861d-6ecd5514d1a3-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Oct 2 19:34:15.944740 kubelet[1440]: I1002 19:34:15.944702 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4d5228-dda2-4408-861d-6ecd5514d1a3-kube-api-access-z878n" (OuterVolumeSpecName: "kube-api-access-z878n") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "kube-api-access-z878n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:34:15.945642 kubelet[1440]: I1002 19:34:15.945619 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0c4d5228-dda2-4408-861d-6ecd5514d1a3-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "0c4d5228-dda2-4408-861d-6ecd5514d1a3" (UID: "0c4d5228-dda2-4408-861d-6ecd5514d1a3"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Oct 2 19:34:15.945804 systemd[1]: var-lib-kubelet-pods-0c4d5228\x2ddda2\x2d4408\x2d861d\x2d6ecd5514d1a3-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Oct 2 19:34:16.040609 kubelet[1440]: I1002 19:34:16.040048 1440 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0c4d5228-dda2-4408-861d-6ecd5514d1a3-hubble-tls\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040609 kubelet[1440]: I1002 19:34:16.040085 1440 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0c4d5228-dda2-4408-861d-6ecd5514d1a3-clustermesh-secrets\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040609 kubelet[1440]: I1002 19:34:16.040097 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cilium-run\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040609 kubelet[1440]: I1002 19:34:16.040106 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040609 kubelet[1440]: I1002 19:34:16.040116 1440 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-host-proc-sys-kernel\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040609 kubelet[1440]: I1002 19:34:16.040125 1440 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-lib-modules\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040609 kubelet[1440]: I1002 19:34:16.040136 1440 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-z878n\" (UniqueName: \"kubernetes.io/projected/0c4d5228-dda2-4408-861d-6ecd5514d1a3-kube-api-access-z878n\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040609 kubelet[1440]: I1002 19:34:16.040146 1440 reconciler_common.go:295] 
"Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-xtables-lock\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040971 kubelet[1440]: I1002 19:34:16.040154 1440 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-hostproc\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040971 kubelet[1440]: I1002 19:34:16.040164 1440 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-etc-cni-netd\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040971 kubelet[1440]: I1002 19:34:16.040173 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cilium-cgroup\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040971 kubelet[1440]: I1002 19:34:16.040181 1440 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-bpf-maps\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040971 kubelet[1440]: I1002 19:34:16.040189 1440 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-cni-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.040971 kubelet[1440]: I1002 19:34:16.040198 1440 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0c4d5228-dda2-4408-861d-6ecd5514d1a3-host-proc-sys-net\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:34:16.269912 kubelet[1440]: E1002 19:34:16.269839 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:16.661876 kubelet[1440]: I1002 19:34:16.661839 
1440 scope.go:115] "RemoveContainer" containerID="053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583" Oct 2 19:34:16.663365 env[1138]: time="2023-10-02T19:34:16.663318849Z" level=info msg="RemoveContainer for \"053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583\"" Oct 2 19:34:16.665851 systemd[1]: Removed slice kubepods-burstable-pod0c4d5228_dda2_4408_861d_6ecd5514d1a3.slice. Oct 2 19:34:16.666560 env[1138]: time="2023-10-02T19:34:16.666530333Z" level=info msg="RemoveContainer for \"053073a7d2a5c751874e4ce4ffeb83e8b6b793bf032de29a69e984651d7af583\" returns successfully" Oct 2 19:34:17.272409 kubelet[1440]: E1002 19:34:17.270622 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:17.343398 kubelet[1440]: I1002 19:34:17.342902 1440 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=0c4d5228-dda2-4408-861d-6ecd5514d1a3 path="/var/lib/kubelet/pods/0c4d5228-dda2-4408-861d-6ecd5514d1a3/volumes" Oct 2 19:34:18.271556 kubelet[1440]: E1002 19:34:18.271495 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:19.271928 kubelet[1440]: E1002 19:34:19.271862 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:19.364706 kubelet[1440]: I1002 19:34:19.364667 1440 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:34:19.364706 kubelet[1440]: E1002 19:34:19.364712 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c4d5228-dda2-4408-861d-6ecd5514d1a3" containerName="mount-cgroup" Oct 2 19:34:19.364891 kubelet[1440]: E1002 19:34:19.364722 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c4d5228-dda2-4408-861d-6ecd5514d1a3" containerName="mount-cgroup" Oct 2 19:34:19.364891 kubelet[1440]: E1002 19:34:19.364730 1440 cpu_manager.go:395] 
"RemoveStaleState: removing container" podUID="0c4d5228-dda2-4408-861d-6ecd5514d1a3" containerName="mount-cgroup" Oct 2 19:34:19.364891 kubelet[1440]: E1002 19:34:19.364737 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c4d5228-dda2-4408-861d-6ecd5514d1a3" containerName="mount-cgroup" Oct 2 19:34:19.364891 kubelet[1440]: I1002 19:34:19.364751 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="0c4d5228-dda2-4408-861d-6ecd5514d1a3" containerName="mount-cgroup" Oct 2 19:34:19.364891 kubelet[1440]: I1002 19:34:19.364758 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="0c4d5228-dda2-4408-861d-6ecd5514d1a3" containerName="mount-cgroup" Oct 2 19:34:19.364891 kubelet[1440]: I1002 19:34:19.364764 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="0c4d5228-dda2-4408-861d-6ecd5514d1a3" containerName="mount-cgroup" Oct 2 19:34:19.364891 kubelet[1440]: I1002 19:34:19.364770 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="0c4d5228-dda2-4408-861d-6ecd5514d1a3" containerName="mount-cgroup" Oct 2 19:34:19.364891 kubelet[1440]: I1002 19:34:19.364775 1440 memory_manager.go:346] "RemoveStaleState removing state" podUID="0c4d5228-dda2-4408-861d-6ecd5514d1a3" containerName="mount-cgroup" Oct 2 19:34:19.365242 kubelet[1440]: I1002 19:34:19.365222 1440 topology_manager.go:210] "Topology Admit Handler" Oct 2 19:34:19.365337 kubelet[1440]: E1002 19:34:19.365326 1440 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="0c4d5228-dda2-4408-861d-6ecd5514d1a3" containerName="mount-cgroup" Oct 2 19:34:19.370419 systemd[1]: Created slice kubepods-besteffort-pod543d79be_ad1f_439b_a3c6_e043a0b4846e.slice. Oct 2 19:34:19.375632 systemd[1]: Created slice kubepods-burstable-poddd616647_db4c_41b2_b917_d5695fc46f4e.slice. 
Oct 2 19:34:19.461569 kubelet[1440]: I1002 19:34:19.461519 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-host-proc-sys-net\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.461569 kubelet[1440]: I1002 19:34:19.461573 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6qpt\" (UniqueName: \"kubernetes.io/projected/dd616647-db4c-41b2-b917-d5695fc46f4e-kube-api-access-l6qpt\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.461747 kubelet[1440]: I1002 19:34:19.461596 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-lib-modules\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.461775 kubelet[1440]: I1002 19:34:19.461735 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd616647-db4c-41b2-b917-d5695fc46f4e-clustermesh-secrets\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.461798 kubelet[1440]: I1002 19:34:19.461783 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-config-path\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.461830 kubelet[1440]: I1002 19:34:19.461806 1440 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-bpf-maps\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.461854 kubelet[1440]: I1002 19:34:19.461837 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-xtables-lock\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.461878 kubelet[1440]: I1002 19:34:19.461860 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-hostproc\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.461903 kubelet[1440]: I1002 19:34:19.461880 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-cni-path\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.461926 kubelet[1440]: I1002 19:34:19.461905 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-etc-cni-netd\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.461926 kubelet[1440]: I1002 19:34:19.461925 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd616647-db4c-41b2-b917-d5695fc46f4e-hubble-tls\") 
pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.461975 kubelet[1440]: I1002 19:34:19.461950 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-run\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.461975 kubelet[1440]: I1002 19:34:19.461970 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-cgroup\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.462022 kubelet[1440]: I1002 19:34:19.461990 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-ipsec-secrets\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.462047 kubelet[1440]: I1002 19:34:19.462025 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-host-proc-sys-kernel\") pod \"cilium-54h4h\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") " pod="kube-system/cilium-54h4h" Oct 2 19:34:19.462073 kubelet[1440]: I1002 19:34:19.462049 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/543d79be-ad1f-439b-a3c6-e043a0b4846e-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-z74f8\" (UID: \"543d79be-ad1f-439b-a3c6-e043a0b4846e\") " 
pod="kube-system/cilium-operator-f59cbd8c6-z74f8" Oct 2 19:34:19.462073 kubelet[1440]: I1002 19:34:19.462069 1440 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-psvrc\" (UniqueName: \"kubernetes.io/projected/543d79be-ad1f-439b-a3c6-e043a0b4846e-kube-api-access-psvrc\") pod \"cilium-operator-f59cbd8c6-z74f8\" (UID: \"543d79be-ad1f-439b-a3c6-e043a0b4846e\") " pod="kube-system/cilium-operator-f59cbd8c6-z74f8" Oct 2 19:34:19.673324 kubelet[1440]: E1002 19:34:19.673294 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:19.674051 env[1138]: time="2023-10-02T19:34:19.673721293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-z74f8,Uid:543d79be-ad1f-439b-a3c6-e043a0b4846e,Namespace:kube-system,Attempt:0,}" Oct 2 19:34:19.685771 env[1138]: time="2023-10-02T19:34:19.685193708Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:34:19.685771 env[1138]: time="2023-10-02T19:34:19.685235588Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:34:19.685771 env[1138]: time="2023-10-02T19:34:19.685246068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:34:19.686122 env[1138]: time="2023-10-02T19:34:19.685413628Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d pid=2025 runtime=io.containerd.runc.v2 Oct 2 19:34:19.688419 kubelet[1440]: E1002 19:34:19.687253 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:19.688504 env[1138]: time="2023-10-02T19:34:19.687680671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-54h4h,Uid:dd616647-db4c-41b2-b917-d5695fc46f4e,Namespace:kube-system,Attempt:0,}" Oct 2 19:34:19.697843 systemd[1]: Started cri-containerd-ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d.scope. Oct 2 19:34:19.704409 env[1138]: time="2023-10-02T19:34:19.703817732Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:34:19.704409 env[1138]: time="2023-10-02T19:34:19.703868252Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:34:19.704409 env[1138]: time="2023-10-02T19:34:19.703878372Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:34:19.704409 env[1138]: time="2023-10-02T19:34:19.704371333Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6 pid=2050 runtime=io.containerd.runc.v2 Oct 2 19:34:19.718507 systemd[1]: Started cri-containerd-0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6.scope. 
Oct 2 19:34:19.736000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.741579 kernel: audit: type=1400 audit(1696275259.736:666): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.741632 kernel: audit: type=1400 audit(1696275259.736:667): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.741650 kernel: audit: type=1400 audit(1696275259.736:668): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.736000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.736000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.736000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.745017 kernel: audit: type=1400 audit(1696275259.736:669): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.745067 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:34:19.745093 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:34:19.745111 kernel: audit: 
type=1400 audit(1696275259.736:670): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.745130 kernel: audit: audit_lost=3 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:34:19.736000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.736000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.736000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.736000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.736000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.738000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.738000 audit: BPF prog-id=78 op=LOAD Oct 2 19:34:19.739000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.739000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 19:34:19.739000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.739000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.739000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.739000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.739000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.739000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.739000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.739000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.739000 audit: BPF prog-id=79 op=LOAD Oct 2 19:34:19.740000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2034]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=0 a0=f a1=400013db38 a2=10 a3=0 items=0 ppid=2025 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:19.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633438373436376662383339323865623261623661643437396162 Oct 2 19:34:19.740000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=400013d5a0 a2=3c a3=0 items=0 ppid=2025 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:19.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633438373436376662383339323865623261623661643437396162 Oct 2 19:34:19.740000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2050 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:19.740000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061316166316434613933613566666232613539323636303732646264 Oct 2 19:34:19.740000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2050 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:19.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061316166316434613933613566666232613539323636303732646264 Oct 2 19:34:19.740000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2060]: 
AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.740000 audit: BPF prog-id=80 op=LOAD Oct 2 19:34:19.740000 audit[2034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400013d8e0 a2=78 a3=0 items=0 ppid=2025 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:19.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633438373436376662383339323865623261623661643437396162 Oct 2 19:34:19.741000 audit[2034]: AVC avc: denied { bpf } for 
pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.741000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.741000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.741000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.741000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.741000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.741000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.741000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.741000 audit: BPF prog-id=82 op=LOAD Oct 2 19:34:19.741000 audit[2034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=400013d670 a2=78 a3=0 items=0 ppid=2025 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:19.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633438373436376662383339323865623261623661643437396162 Oct 2 19:34:19.746000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:34:19.746000 audit: BPF prog-id=80 op=UNLOAD Oct 2 19:34:19.746000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.746000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.746000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.746000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.746000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.746000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.746000 audit[2034]: AVC avc: denied { perfmon } for pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.746000 audit[2034]: AVC avc: denied { perfmon } for 
pid=2034 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.746000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.746000 audit[2034]: AVC avc: denied { bpf } for pid=2034 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.746000 audit: BPF prog-id=83 op=LOAD Oct 2 19:34:19.746000 audit[2034]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=400013db40 a2=78 a3=0 items=0 ppid=2025 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:19.746000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6464633438373436376662383339323865623261623661643437396162 Oct 2 19:34:19.740000 audit[2060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2050 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:19.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061316166316434613933613566666232613539323636303732646264 Oct 2 19:34:19.748000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.748000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.748000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.748000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.748000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.748000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.748000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.748000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.748000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.748000 audit: BPF prog-id=84 op=LOAD Oct 2 19:34:19.748000 audit[2060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2050 
pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:19.748000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061316166316434613933613566666232613539323636303732646264 Oct 2 19:34:19.749000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:34:19.749000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:34:19.749000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.749000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.749000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.749000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.749000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.749000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.749000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.749000 audit[2060]: AVC avc: denied { perfmon } for pid=2060 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.749000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.749000 audit[2060]: AVC avc: denied { bpf } for pid=2060 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:19.749000 audit: BPF prog-id=85 op=LOAD Oct 2 19:34:19.749000 audit[2060]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2050 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:19.749000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061316166316434613933613566666232613539323636303732646264 Oct 2 19:34:19.765368 env[1138]: time="2023-10-02T19:34:19.765310933Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-54h4h,Uid:dd616647-db4c-41b2-b917-d5695fc46f4e,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6\"" Oct 2 19:34:19.766738 kubelet[1440]: E1002 19:34:19.766598 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 
19:34:19.770342 env[1138]: time="2023-10-02T19:34:19.770296500Z" level=info msg="CreateContainer within sandbox \"0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:34:19.771580 env[1138]: time="2023-10-02T19:34:19.771533382Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-z74f8,Uid:543d79be-ad1f-439b-a3c6-e043a0b4846e,Namespace:kube-system,Attempt:0,} returns sandbox id \"ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d\"" Oct 2 19:34:19.772023 kubelet[1440]: E1002 19:34:19.771993 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:19.773118 env[1138]: time="2023-10-02T19:34:19.773052744Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Oct 2 19:34:19.789602 env[1138]: time="2023-10-02T19:34:19.789539805Z" level=info msg="CreateContainer within sandbox \"0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1\"" Oct 2 19:34:19.790667 env[1138]: time="2023-10-02T19:34:19.790007166Z" level=info msg="StartContainer for \"8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1\"" Oct 2 19:34:19.805584 systemd[1]: Started cri-containerd-8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1.scope. Oct 2 19:34:19.827497 systemd[1]: cri-containerd-8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1.scope: Deactivated successfully. 
Oct 2 19:34:19.841690 env[1138]: time="2023-10-02T19:34:19.841631394Z" level=info msg="shim disconnected" id=8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1 Oct 2 19:34:19.841690 env[1138]: time="2023-10-02T19:34:19.841683394Z" level=warning msg="cleaning up after shim disconnected" id=8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1 namespace=k8s.io Oct 2 19:34:19.841690 env[1138]: time="2023-10-02T19:34:19.841693914Z" level=info msg="cleaning up dead shim" Oct 2 19:34:19.850690 env[1138]: time="2023-10-02T19:34:19.850642006Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:19Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2122 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:19Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:19.850957 env[1138]: time="2023-10-02T19:34:19.850900286Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Oct 2 19:34:19.851831 env[1138]: time="2023-10-02T19:34:19.851443287Z" level=error msg="Failed to pipe stderr of container \"8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1\"" error="reading from a closed fifo" Oct 2 19:34:19.852946 env[1138]: time="2023-10-02T19:34:19.852905609Z" level=error msg="Failed to pipe stdout of container \"8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1\"" error="reading from a closed fifo" Oct 2 19:34:19.854562 env[1138]: time="2023-10-02T19:34:19.854519531Z" level=error msg="StartContainer for \"8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:19.854940 kubelet[1440]: E1002 19:34:19.854902 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1" Oct 2 19:34:19.855028 kubelet[1440]: E1002 19:34:19.855014 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:19.855028 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:19.855028 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:34:19.855028 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l6qpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:19.855243 kubelet[1440]: E1002 19:34:19.855050 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-54h4h" podUID=dd616647-db4c-41b2-b917-d5695fc46f4e Oct 2 19:34:20.236449 kubelet[1440]: E1002 19:34:20.235306 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:20.272932 kubelet[1440]: E1002 19:34:20.272863 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:20.662790 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4228071765.mount: Deactivated successfully. 
Oct 2 19:34:20.671353 kubelet[1440]: E1002 19:34:20.670861 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:20.672766 env[1138]: time="2023-10-02T19:34:20.672721165Z" level=info msg="CreateContainer within sandbox \"0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:34:20.689492 env[1138]: time="2023-10-02T19:34:20.689440427Z" level=info msg="CreateContainer within sandbox \"0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145\"" Oct 2 19:34:20.690297 env[1138]: time="2023-10-02T19:34:20.690266068Z" level=info msg="StartContainer for \"85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145\"" Oct 2 19:34:20.714997 systemd[1]: Started cri-containerd-85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145.scope. Oct 2 19:34:20.732311 systemd[1]: cri-containerd-85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145.scope: Deactivated successfully. 
Oct 2 19:34:20.746995 env[1138]: time="2023-10-02T19:34:20.746935543Z" level=info msg="shim disconnected" id=85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145 Oct 2 19:34:20.746995 env[1138]: time="2023-10-02T19:34:20.746990303Z" level=warning msg="cleaning up after shim disconnected" id=85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145 namespace=k8s.io Oct 2 19:34:20.746995 env[1138]: time="2023-10-02T19:34:20.747000023Z" level=info msg="cleaning up dead shim" Oct 2 19:34:20.754535 env[1138]: time="2023-10-02T19:34:20.754452832Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2158 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:20Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:34:20.754958 env[1138]: time="2023-10-02T19:34:20.754880953Z" level=error msg="copy shim log" error="read /proc/self/fd/46: file already closed" Oct 2 19:34:20.755140 env[1138]: time="2023-10-02T19:34:20.755111913Z" level=error msg="Failed to pipe stdout of container \"85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145\"" error="reading from a closed fifo" Oct 2 19:34:20.755500 env[1138]: time="2023-10-02T19:34:20.755453074Z" level=error msg="Failed to pipe stderr of container \"85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145\"" error="reading from a closed fifo" Oct 2 19:34:20.758161 env[1138]: time="2023-10-02T19:34:20.758080157Z" level=error msg="StartContainer for \"85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:20.758336 kubelet[1440]: E1002 19:34:20.758305 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145" Oct 2 19:34:20.759225 kubelet[1440]: E1002 19:34:20.758448 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:20.759225 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:20.759225 kubelet[1440]: rm /hostbin/cilium-mount Oct 2 19:34:20.759225 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l6qpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT 
SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:20.759421 kubelet[1440]: E1002 19:34:20.758488 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-54h4h" podUID=dd616647-db4c-41b2-b917-d5695fc46f4e Oct 2 19:34:21.149895 env[1138]: time="2023-10-02T19:34:21.149830390Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:34:21.150872 env[1138]: time="2023-10-02T19:34:21.150832151Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:34:21.152431 env[1138]: time="2023-10-02T19:34:21.152397553Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:34:21.152830 env[1138]: time="2023-10-02T19:34:21.152800194Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:59357949c22410bca94f8bb5a7a7f73d575949bc16ddc4bd8c740843d4254180\"" Oct 2 19:34:21.154921 env[1138]: time="2023-10-02T19:34:21.154892156Z" level=info msg="CreateContainer within sandbox \"ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:34:21.165328 env[1138]: time="2023-10-02T19:34:21.165277930Z" level=info msg="CreateContainer within sandbox \"ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\"" Oct 2 19:34:21.165871 env[1138]: time="2023-10-02T19:34:21.165839891Z" level=info msg="StartContainer for \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\"" Oct 2 19:34:21.181456 systemd[1]: Started cri-containerd-b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6.scope. 
Oct 2 19:34:21.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.204641 kernel: kauditd_printk_skb: 112 callbacks suppressed Oct 2 19:34:21.204724 kernel: audit: type=1400 audit(1696275261.201:702): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.204755 kernel: audit: type=1400 audit(1696275261.201:703): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.206226 kernel: audit: type=1400 audit(1696275261.201:704): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.207862 kernel: audit: type=1400 audit(1696275261.201:705): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.211072 kernel: audit: type=1400 audit(1696275261.201:706): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.211117 kernel: audit: type=1400 audit(1696275261.201:707): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.212783 kernel: audit: type=1400 audit(1696275261.201:708): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.214374 kernel: audit: type=1400 audit(1696275261.201:709): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.216011 kernel: audit: type=1400 audit(1696275261.201:710): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[1]: 
AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.217597 kernel: audit: type=1400 audit(1696275261.201:711): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit: BPF prog-id=86 op=LOAD Oct 2 19:34:21.201000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[2178]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=0 a0=f a1=4000195b38 a2=10 a3=0 items=0 ppid=2025 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:21.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239303339663235333965343137366462636661363762386334326531 Oct 2 19:34:21.201000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[2178]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=0 a1=40001955a0 a2=3c a3=0 items=0 ppid=2025 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:21.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239303339663235333965343137366462636661363762386334326531 Oct 2 19:34:21.201000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.201000 audit: BPF prog-id=87 op=LOAD Oct 2 19:34:21.201000 audit[2178]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=40001958e0 a2=78 a3=0 items=0 ppid=2025 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:21.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239303339663235333965343137366462636661363762386334326531 Oct 2 19:34:21.203000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.203000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.203000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.203000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.203000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.203000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.203000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.203000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.203000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.203000 audit: BPF prog-id=88 op=LOAD Oct 2 19:34:21.203000 audit[2178]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=18 a0=5 a1=4000195670 a2=78 a3=0 items=0 ppid=2025 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:21.203000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239303339663235333965343137366462636661363762386334326531 Oct 2 19:34:21.205000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:34:21.205000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:34:21.205000 audit[2178]: AVC avc: denied { bpf } for 
pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.205000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.205000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.205000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.205000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.205000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.205000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.205000 audit[2178]: AVC avc: denied { perfmon } for pid=2178 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.205000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.205000 audit[2178]: AVC avc: denied { bpf } for pid=2178 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:34:21.205000 audit: BPF prog-id=89 op=LOAD Oct 2 19:34:21.205000 audit[2178]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=16 a0=5 a1=4000195b40 a2=78 a3=0 items=0 ppid=2025 pid=2178 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:34:21.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239303339663235333965343137366462636661363762386334326531 Oct 2 19:34:21.227089 env[1138]: time="2023-10-02T19:34:21.227043331Z" level=info msg="StartContainer for \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\" returns successfully" Oct 2 19:34:21.273454 kubelet[1440]: E1002 19:34:21.273424 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:21.277000 audit[2189]: AVC avc: denied { map_create } for pid=2189 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c457,c897 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c457,c897 tclass=bpf permissive=0 Oct 2 19:34:21.277000 audit[2189]: SYSCALL arch=c00000b7 syscall=280 success=no exit=-13 a0=0 a1=400074f768 a2=48 a3=0 items=0 ppid=2025 pid=2189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c457,c897 key=(null) Oct 2 19:34:21.277000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:34:21.674541 kubelet[1440]: E1002 
19:34:21.674510 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:21.677902 kubelet[1440]: I1002 19:34:21.677879 1440 scope.go:115] "RemoveContainer" containerID="8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1" Oct 2 19:34:21.678488 kubelet[1440]: I1002 19:34:21.678471 1440 scope.go:115] "RemoveContainer" containerID="8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1" Oct 2 19:34:21.680969 env[1138]: time="2023-10-02T19:34:21.680923162Z" level=info msg="RemoveContainer for \"8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1\"" Oct 2 19:34:21.681393 env[1138]: time="2023-10-02T19:34:21.681350723Z" level=info msg="RemoveContainer for \"8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1\"" Oct 2 19:34:21.681533 env[1138]: time="2023-10-02T19:34:21.681498683Z" level=error msg="RemoveContainer for \"8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1\" failed" error="failed to set removing state for container \"8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1\": container is already in removing state" Oct 2 19:34:21.681719 kubelet[1440]: E1002 19:34:21.681670 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1\": container is already in removing state" containerID="8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1" Oct 2 19:34:21.681768 kubelet[1440]: E1002 19:34:21.681738 1440 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1": container is already in removing state; Skipping pod 
"cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e)" Oct 2 19:34:21.681813 kubelet[1440]: E1002 19:34:21.681802 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:21.682013 kubelet[1440]: E1002 19:34:21.681995 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e)\"" pod="kube-system/cilium-54h4h" podUID=dd616647-db4c-41b2-b917-d5695fc46f4e Oct 2 19:34:21.684141 env[1138]: time="2023-10-02T19:34:21.684097447Z" level=info msg="RemoveContainer for \"8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1\" returns successfully" Oct 2 19:34:21.700847 kubelet[1440]: I1002 19:34:21.700813 1440 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-f59cbd8c6-z74f8" podStartSLOduration=-9.223372034154e+09 pod.CreationTimestamp="2023-10-02 19:34:19 +0000 UTC" firstStartedPulling="2023-10-02 19:34:19.772320023 +0000 UTC m=+205.714880472" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2023-10-02 19:34:21.683044925 +0000 UTC m=+207.625605334" watchObservedRunningTime="2023-10-02 19:34:21.700777228 +0000 UTC m=+207.643337677" Oct 2 19:34:22.273957 kubelet[1440]: E1002 19:34:22.273906 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:22.680731 kubelet[1440]: E1002 19:34:22.680702 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:22.680917 kubelet[1440]: E1002 19:34:22.680902 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver 
limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:22.681200 kubelet[1440]: E1002 19:34:22.681184 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e)\"" pod="kube-system/cilium-54h4h" podUID=dd616647-db4c-41b2-b917-d5695fc46f4e Oct 2 19:34:22.948183 kubelet[1440]: W1002 19:34:22.948079 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd616647_db4c_41b2_b917_d5695fc46f4e.slice/cri-containerd-8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1.scope WatchSource:0}: container "8479c35b1a3104b3156a18483468743d6fb06e664170e66f4c17a77c190c62d1" in namespace "k8s.io": not found Oct 2 19:34:23.274381 kubelet[1440]: E1002 19:34:23.274277 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:24.274909 kubelet[1440]: E1002 19:34:24.274820 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:25.236677 kubelet[1440]: E1002 19:34:25.236650 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:25.275090 kubelet[1440]: E1002 19:34:25.275041 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:26.057704 kubelet[1440]: W1002 19:34:26.057650 1440 manager.go:1174] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd616647_db4c_41b2_b917_d5695fc46f4e.slice/cri-containerd-85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145.scope WatchSource:0}: task 85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145 not found: not found Oct 2 19:34:26.275512 kubelet[1440]: E1002 19:34:26.275448 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:27.276175 kubelet[1440]: E1002 19:34:27.276098 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:28.277244 kubelet[1440]: E1002 19:34:28.277206 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:29.278685 kubelet[1440]: E1002 19:34:29.278622 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:30.238233 kubelet[1440]: E1002 19:34:30.238191 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:30.279400 kubelet[1440]: E1002 19:34:30.279365 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:31.280622 kubelet[1440]: E1002 19:34:31.280557 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:32.281113 kubelet[1440]: E1002 19:34:32.281067 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:33.282148 kubelet[1440]: E1002 19:34:33.282114 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:34:34.283480 kubelet[1440]: E1002 19:34:34.283425 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:35.117258 kubelet[1440]: E1002 19:34:35.117195 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:35.241999 kubelet[1440]: E1002 19:34:35.239052 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:34:35.284467 kubelet[1440]: E1002 19:34:35.284401 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:34:35.340460 kubelet[1440]: E1002 19:34:35.340420 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8" Oct 2 19:34:35.342751 env[1138]: time="2023-10-02T19:34:35.342697251Z" level=info msg="CreateContainer within sandbox \"0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:34:35.351139 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount751498208.mount: Deactivated successfully. 
Oct 2 19:34:35.352526 env[1138]: time="2023-10-02T19:34:35.352467767Z" level=info msg="CreateContainer within sandbox \"0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8\"" Oct 2 19:34:35.353408 env[1138]: time="2023-10-02T19:34:35.353112612Z" level=info msg="StartContainer for \"fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8\"" Oct 2 19:34:35.370152 systemd[1]: Started cri-containerd-fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8.scope. Oct 2 19:34:35.387969 systemd[1]: cri-containerd-fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8.scope: Deactivated successfully. Oct 2 19:34:35.391627 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8-rootfs.mount: Deactivated successfully. Oct 2 19:34:35.546374 env[1138]: time="2023-10-02T19:34:35.546322306Z" level=info msg="shim disconnected" id=fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8 Oct 2 19:34:35.546374 env[1138]: time="2023-10-02T19:34:35.546375266Z" level=warning msg="cleaning up after shim disconnected" id=fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8 namespace=k8s.io Oct 2 19:34:35.546374 env[1138]: time="2023-10-02T19:34:35.546403106Z" level=info msg="cleaning up dead shim" Oct 2 19:34:35.555210 env[1138]: time="2023-10-02T19:34:35.555160214Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:34:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2235 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:34:35Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 
19:34:35.555508 env[1138]: time="2023-10-02T19:34:35.555449776Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:34:35.555669 env[1138]: time="2023-10-02T19:34:35.555622498Z" level=error msg="Failed to pipe stdout of container \"fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8\"" error="reading from a closed fifo" Oct 2 19:34:35.555718 env[1138]: time="2023-10-02T19:34:35.555639858Z" level=error msg="Failed to pipe stderr of container \"fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8\"" error="reading from a closed fifo" Oct 2 19:34:35.558505 env[1138]: time="2023-10-02T19:34:35.558450720Z" level=error msg="StartContainer for \"fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:34:35.558851 kubelet[1440]: E1002 19:34:35.558824 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8" Oct 2 19:34:35.558959 kubelet[1440]: E1002 19:34:35.558943 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:34:35.558959 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:34:35.558959 kubelet[1440]: 
rm /hostbin/cilium-mount Oct 2 19:34:35.558959 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l6qpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:34:35.559096 kubelet[1440]: E1002 19:34:35.558984 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc 
create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-54h4h" podUID=dd616647-db4c-41b2-b917-d5695fc46f4e Oct 2 19:34:35.701099 kubelet[1440]: I1002 19:34:35.700239 1440 scope.go:115] "RemoveContainer" containerID="85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145" Oct 2 19:34:35.701353 kubelet[1440]: I1002 19:34:35.701338 1440 scope.go:115] "RemoveContainer" containerID="85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145" Oct 2 19:34:35.703326 env[1138]: time="2023-10-02T19:34:35.703279680Z" level=info msg="RemoveContainer for \"85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145\"" Oct 2 19:34:35.705636 env[1138]: time="2023-10-02T19:34:35.705486297Z" level=info msg="RemoveContainer for \"85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145\"" Oct 2 19:34:35.707348 env[1138]: time="2023-10-02T19:34:35.707288151Z" level=error msg="RemoveContainer for \"85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145\" failed" error="failed to set removing state for container \"85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145\": container is already in removing state" Oct 2 19:34:35.707793 kubelet[1440]: E1002 19:34:35.707771 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145\": container is already in removing state" containerID="85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145" Oct 2 19:34:35.707864 kubelet[1440]: E1002 19:34:35.707815 1440 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145": container is already in removing state; 
Skipping pod "cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e)"
Oct 2 19:34:35.707903 kubelet[1440]: E1002 19:34:35.707880 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:34:35.708154 kubelet[1440]: E1002 19:34:35.708136 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e)\"" pod="kube-system/cilium-54h4h" podUID=dd616647-db4c-41b2-b917-d5695fc46f4e
Oct 2 19:34:35.710222 env[1138]: time="2023-10-02T19:34:35.710190733Z" level=info msg="RemoveContainer for \"85cc428cee412d1e05498f86de402958b7c9b38b2de0dd04bc74b3b2a9f6a145\" returns successfully"
Oct 2 19:34:36.285482 kubelet[1440]: E1002 19:34:36.285447 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:37.286785 kubelet[1440]: E1002 19:34:37.286734 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:38.287161 kubelet[1440]: E1002 19:34:38.287120 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:38.652858 kubelet[1440]: W1002 19:34:38.652808 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd616647_db4c_41b2_b917_d5695fc46f4e.slice/cri-containerd-fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8.scope WatchSource:0}: task fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8 not found: not found
Oct 2 19:34:39.287283 kubelet[1440]: E1002 19:34:39.287250 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:40.240610 kubelet[1440]: E1002 19:34:40.240581 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:34:40.288405 kubelet[1440]: E1002 19:34:40.288368 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:41.289320 kubelet[1440]: E1002 19:34:41.289283 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:42.290237 kubelet[1440]: E1002 19:34:42.290178 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:43.290696 kubelet[1440]: E1002 19:34:43.290654 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:44.291793 kubelet[1440]: E1002 19:34:44.291737 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:45.241787 kubelet[1440]: E1002 19:34:45.241722 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:34:45.292271 kubelet[1440]: E1002 19:34:45.292235 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:46.293307 kubelet[1440]: E1002 19:34:46.293260 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:47.293657 kubelet[1440]: E1002 19:34:47.293624 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:48.294788 kubelet[1440]: E1002 19:34:48.294741 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:49.295232 kubelet[1440]: E1002 19:34:49.295159 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:49.340299 kubelet[1440]: E1002 19:34:49.340231 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:34:49.340493 kubelet[1440]: E1002 19:34:49.340469 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e)\"" pod="kube-system/cilium-54h4h" podUID=dd616647-db4c-41b2-b917-d5695fc46f4e
Oct 2 19:34:50.242761 kubelet[1440]: E1002 19:34:50.242729 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:34:50.296165 kubelet[1440]: E1002 19:34:50.296140 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:51.297586 kubelet[1440]: E1002 19:34:51.297556 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:52.298984 kubelet[1440]: E1002 19:34:52.298943 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:53.300204 kubelet[1440]: E1002 19:34:53.300162 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:54.300818 kubelet[1440]: E1002 19:34:54.300737 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:55.118589 kubelet[1440]: E1002 19:34:55.117801 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:55.142889 env[1138]: time="2023-10-02T19:34:55.142837653Z" level=info msg="StopPodSandbox for \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\""
Oct 2 19:34:55.143189 env[1138]: time="2023-10-02T19:34:55.142937534Z" level=info msg="TearDown network for sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" successfully"
Oct 2 19:34:55.143189 env[1138]: time="2023-10-02T19:34:55.142971934Z" level=info msg="StopPodSandbox for \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" returns successfully"
Oct 2 19:34:55.143437 env[1138]: time="2023-10-02T19:34:55.143407257Z" level=info msg="RemovePodSandbox for \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\""
Oct 2 19:34:55.143489 env[1138]: time="2023-10-02T19:34:55.143441417Z" level=info msg="Forcibly stopping sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\""
Oct 2 19:34:55.143532 env[1138]: time="2023-10-02T19:34:55.143520097Z" level=info msg="TearDown network for sandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" successfully"
Oct 2 19:34:55.146441 env[1138]: time="2023-10-02T19:34:55.146397835Z" level=info msg="RemovePodSandbox \"f7b3ca3d648f07f4f479c9c4c10630f524f3f372da53e7cc5affd8d2f562732e\" returns successfully"
Oct 2 19:34:55.243810 kubelet[1440]: E1002 19:34:55.243716 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:34:55.301223 kubelet[1440]: E1002 19:34:55.301179 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:56.302133 kubelet[1440]: E1002 19:34:56.302065 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:57.303054 kubelet[1440]: E1002 19:34:57.302980 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:58.304071 kubelet[1440]: E1002 19:34:58.304017 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:34:59.304293 kubelet[1440]: E1002 19:34:59.304255 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:00.244710 kubelet[1440]: E1002 19:35:00.244666 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:35:00.305803 kubelet[1440]: E1002 19:35:00.305770 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:01.307014 kubelet[1440]: E1002 19:35:01.306979 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:02.307996 kubelet[1440]: E1002 19:35:02.307942 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:02.340873 kubelet[1440]: E1002 19:35:02.340841 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:35:02.343120 env[1138]: time="2023-10-02T19:35:02.343080449Z" level=info msg="CreateContainer within sandbox \"0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}"
Oct 2 19:35:02.352216 env[1138]: time="2023-10-02T19:35:02.352156861Z" level=info msg="CreateContainer within sandbox \"0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390\""
Oct 2 19:35:02.352739 env[1138]: time="2023-10-02T19:35:02.352698304Z" level=info msg="StartContainer for \"5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390\""
Oct 2 19:35:02.368220 systemd[1]: Started cri-containerd-5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390.scope.
Oct 2 19:35:02.371014 systemd[1]: run-containerd-runc-k8s.io-5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390-runc.2zKmQU.mount: Deactivated successfully.
Oct 2 19:35:02.392084 systemd[1]: cri-containerd-5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390.scope: Deactivated successfully.
Oct 2 19:35:02.403455 env[1138]: time="2023-10-02T19:35:02.403396752Z" level=info msg="shim disconnected" id=5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390
Oct 2 19:35:02.403654 env[1138]: time="2023-10-02T19:35:02.403463192Z" level=warning msg="cleaning up after shim disconnected" id=5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390 namespace=k8s.io
Oct 2 19:35:02.403654 env[1138]: time="2023-10-02T19:35:02.403474672Z" level=info msg="cleaning up dead shim"
Oct 2 19:35:02.411915 env[1138]: time="2023-10-02T19:35:02.411867440Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2278 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:35:02Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n"
Oct 2 19:35:02.412178 env[1138]: time="2023-10-02T19:35:02.412126081Z" level=error msg="copy shim log" error="read /proc/self/fd/46: file already closed"
Oct 2 19:35:02.412357 env[1138]: time="2023-10-02T19:35:02.412311643Z" level=error msg="Failed to pipe stdout of container \"5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390\"" error="reading from a closed fifo"
Oct 2 19:35:02.412809 env[1138]: time="2023-10-02T19:35:02.412758805Z" level=error msg="Failed to pipe stderr of container \"5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390\"" error="reading from a closed fifo"
Oct 2 19:35:02.414488 env[1138]: time="2023-10-02T19:35:02.414434855Z" level=error msg="StartContainer for \"5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown"
Oct 2 19:35:02.414710 kubelet[1440]: E1002 19:35:02.414688 1440 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390"
Oct 2 19:35:02.414820 kubelet[1440]: E1002 19:35:02.414805 1440 kuberuntime_manager.go:872] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount;
Oct 2 19:35:02.414820 kubelet[1440]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
Oct 2 19:35:02.414820 kubelet[1440]: rm /hostbin/cilium-mount
Oct 2 19:35:02.414820 kubelet[1440]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-l6qpt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown
Oct 2 19:35:02.414959 kubelet[1440]: E1002 19:35:02.414850 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-54h4h" podUID=dd616647-db4c-41b2-b917-d5695fc46f4e
Oct 2 19:35:02.744994 kubelet[1440]: I1002 19:35:02.744939 1440 scope.go:115] "RemoveContainer" containerID="fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8"
Oct 2 19:35:02.745508 kubelet[1440]: I1002 19:35:02.745489 1440 scope.go:115] "RemoveContainer" containerID="fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8"
Oct 2 19:35:02.745749 env[1138]: time="2023-10-02T19:35:02.745719017Z" level=info msg="RemoveContainer for \"fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8\""
Oct 2 19:35:02.746512 env[1138]: time="2023-10-02T19:35:02.746474582Z" level=info msg="RemoveContainer for \"fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8\""
Oct 2 19:35:02.746588 env[1138]: time="2023-10-02T19:35:02.746564662Z" level=error msg="RemoveContainer for \"fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8\" failed" error="failed to set removing state for container \"fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8\": container is already in removing state"
Oct 2 19:35:02.747235 kubelet[1440]: E1002 19:35:02.746701 1440 remote_runtime.go:368] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8\": container is already in removing state" containerID="fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8"
Oct 2 19:35:02.747235 kubelet[1440]: E1002 19:35:02.746735 1440 kuberuntime_container.go:784] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8": container is already in removing state; Skipping pod "cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e)"
Oct 2 19:35:02.747235 kubelet[1440]: E1002 19:35:02.746799 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:35:02.747235 kubelet[1440]: E1002 19:35:02.747010 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e)\"" pod="kube-system/cilium-54h4h" podUID=dd616647-db4c-41b2-b917-d5695fc46f4e
Oct 2 19:35:02.749300 env[1138]: time="2023-10-02T19:35:02.749255557Z" level=info msg="RemoveContainer for \"fd60990cf230c52e51727b5e774f12d2a9a1026c5bbdd336551e536b8acdaae8\" returns successfully"
Oct 2 19:35:03.308162 kubelet[1440]: E1002 19:35:03.308107 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:03.349652 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390-rootfs.mount: Deactivated successfully.
Oct 2 19:35:04.308574 kubelet[1440]: E1002 19:35:04.308524 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:05.245828 kubelet[1440]: E1002 19:35:05.245783 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:35:05.309091 kubelet[1440]: E1002 19:35:05.309049 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:05.508821 kubelet[1440]: W1002 19:35:05.508699 1440 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddd616647_db4c_41b2_b917_d5695fc46f4e.slice/cri-containerd-5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390.scope WatchSource:0}: task 5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390 not found: not found
Oct 2 19:35:06.309965 kubelet[1440]: E1002 19:35:06.309899 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:07.310729 kubelet[1440]: E1002 19:35:07.310680 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:08.311488 kubelet[1440]: E1002 19:35:08.311426 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:09.311799 kubelet[1440]: E1002 19:35:09.311744 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:10.246675 kubelet[1440]: E1002 19:35:10.246637 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:35:10.312193 kubelet[1440]: E1002 19:35:10.312141 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:11.312504 kubelet[1440]: E1002 19:35:11.312459 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:12.313516 kubelet[1440]: E1002 19:35:12.313454 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:13.314364 kubelet[1440]: E1002 19:35:13.314306 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:14.315319 kubelet[1440]: E1002 19:35:14.315269 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:14.339964 kubelet[1440]: E1002 19:35:14.339923 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:35:14.340159 kubelet[1440]: E1002 19:35:14.340144 1440 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-54h4h_kube-system(dd616647-db4c-41b2-b917-d5695fc46f4e)\"" pod="kube-system/cilium-54h4h" podUID=dd616647-db4c-41b2-b917-d5695fc46f4e
Oct 2 19:35:14.340240 kubelet[1440]: E1002 19:35:14.340224 1440 dns.go:156] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 1.1.1.1 1.0.0.1 8.8.8.8"
Oct 2 19:35:15.116369 kubelet[1440]: E1002 19:35:15.116336 1440 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:15.247947 kubelet[1440]: E1002 19:35:15.247900 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:35:15.315478 kubelet[1440]: E1002 19:35:15.315444 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:16.316282 kubelet[1440]: E1002 19:35:16.316233 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:17.317191 kubelet[1440]: E1002 19:35:17.317148 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:18.317764 kubelet[1440]: E1002 19:35:18.317721 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:19.318274 kubelet[1440]: E1002 19:35:19.318163 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:20.248822 kubelet[1440]: E1002 19:35:20.248797 1440 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Oct 2 19:35:20.319336 kubelet[1440]: E1002 19:35:20.319283 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Oct 2 19:35:20.559371 env[1138]: time="2023-10-02T19:35:20.558743364Z" level=info msg="StopPodSandbox for \"0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6\""
Oct 2 19:35:20.559371 env[1138]: time="2023-10-02T19:35:20.558808524Z" level=info msg="Container to stop \"5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 19:35:20.560052 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6-shm.mount: Deactivated successfully.
Oct 2 19:35:20.565000 audit: BPF prog-id=78 op=UNLOAD
Oct 2 19:35:20.565233 systemd[1]: cri-containerd-0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6.scope: Deactivated successfully.
Oct 2 19:35:20.566861 kernel: kauditd_printk_skb: 50 callbacks suppressed
Oct 2 19:35:20.566932 kernel: audit: type=1334 audit(1696275320.565:721): prog-id=78 op=UNLOAD
Oct 2 19:35:20.570999 env[1138]: time="2023-10-02T19:35:20.570960342Z" level=info msg="StopContainer for \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\" with timeout 30 (s)"
Oct 2 19:35:20.571278 env[1138]: time="2023-10-02T19:35:20.571248663Z" level=info msg="Stop container \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\" with signal terminated"
Oct 2 19:35:20.572000 audit: BPF prog-id=85 op=UNLOAD
Oct 2 19:35:20.573406 kernel: audit: type=1334 audit(1696275320.572:722): prog-id=85 op=UNLOAD
Oct 2 19:35:20.584490 systemd[1]: cri-containerd-b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6.scope: Deactivated successfully.
Oct 2 19:35:20.584000 audit: BPF prog-id=86 op=UNLOAD
Oct 2 19:35:20.586419 kernel: audit: type=1334 audit(1696275320.584:723): prog-id=86 op=UNLOAD
Oct 2 19:35:20.588000 audit: BPF prog-id=89 op=UNLOAD
Oct 2 19:35:20.589407 kernel: audit: type=1334 audit(1696275320.588:724): prog-id=89 op=UNLOAD
Oct 2 19:35:20.591910 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6-rootfs.mount: Deactivated successfully.
Oct 2 19:35:20.596117 env[1138]: time="2023-10-02T19:35:20.596063421Z" level=info msg="shim disconnected" id=0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6
Oct 2 19:35:20.596117 env[1138]: time="2023-10-02T19:35:20.596111461Z" level=warning msg="cleaning up after shim disconnected" id=0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6 namespace=k8s.io
Oct 2 19:35:20.596117 env[1138]: time="2023-10-02T19:35:20.596121181Z" level=info msg="cleaning up dead shim"
Oct 2 19:35:20.605127 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6-rootfs.mount: Deactivated successfully.
Oct 2 19:35:20.607458 env[1138]: time="2023-10-02T19:35:20.607413514Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2322 runtime=io.containerd.runc.v2\n"
Oct 2 19:35:20.607809 env[1138]: time="2023-10-02T19:35:20.607770436Z" level=info msg="TearDown network for sandbox \"0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6\" successfully"
Oct 2 19:35:20.607809 env[1138]: time="2023-10-02T19:35:20.607798556Z" level=info msg="StopPodSandbox for \"0a1af1d4a93a5ffb2a59266072dbdda6245139c964de89fee0bc2c6ec3e70da6\" returns successfully"
Oct 2 19:35:20.608804 env[1138]: time="2023-10-02T19:35:20.608633280Z" level=info msg="shim disconnected" id=b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6
Oct 2 19:35:20.609256 env[1138]: time="2023-10-02T19:35:20.608794961Z" level=warning msg="cleaning up after shim disconnected" id=b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6 namespace=k8s.io
Oct 2 19:35:20.609256 env[1138]: time="2023-10-02T19:35:20.609235163Z" level=info msg="cleaning up dead shim"
Oct 2 19:35:20.619009 env[1138]: time="2023-10-02T19:35:20.618970249Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2340 runtime=io.containerd.runc.v2\n"
Oct 2 19:35:20.620717 env[1138]: time="2023-10-02T19:35:20.620681097Z" level=info msg="StopContainer for \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\" returns successfully"
Oct 2 19:35:20.621209 env[1138]: time="2023-10-02T19:35:20.621159819Z" level=info msg="StopPodSandbox for \"ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d\""
Oct 2 19:35:20.621263 env[1138]: time="2023-10-02T19:35:20.621236500Z" level=info msg="Container to stop \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Oct 2 19:35:20.622309 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d-shm.mount: Deactivated successfully.
Oct 2 19:35:20.629971 systemd[1]: cri-containerd-ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d.scope: Deactivated successfully.
Oct 2 19:35:20.629000 audit: BPF prog-id=79 op=UNLOAD
Oct 2 19:35:20.631412 kernel: audit: type=1334 audit(1696275320.629:725): prog-id=79 op=UNLOAD
Oct 2 19:35:20.636000 audit: BPF prog-id=83 op=UNLOAD
Oct 2 19:35:20.637413 kernel: audit: type=1334 audit(1696275320.636:726): prog-id=83 op=UNLOAD
Oct 2 19:35:20.653040 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d-rootfs.mount: Deactivated successfully.
Oct 2 19:35:20.655888 env[1138]: time="2023-10-02T19:35:20.655847463Z" level=info msg="shim disconnected" id=ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d
Oct 2 19:35:20.656028 env[1138]: time="2023-10-02T19:35:20.656009024Z" level=warning msg="cleaning up after shim disconnected" id=ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d namespace=k8s.io
Oct 2 19:35:20.656093 env[1138]: time="2023-10-02T19:35:20.656080344Z" level=info msg="cleaning up dead shim"
Oct 2 19:35:20.664713 env[1138]: time="2023-10-02T19:35:20.664678985Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:35:20Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2371 runtime=io.containerd.runc.v2\n"
Oct 2 19:35:20.665124 env[1138]: time="2023-10-02T19:35:20.665097027Z" level=info msg="TearDown network for sandbox \"ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d\" successfully"
Oct 2 19:35:20.665224 env[1138]: time="2023-10-02T19:35:20.665206307Z" level=info msg="StopPodSandbox for \"ddc487467fb83928eb2ab6ad479ab49e6525ab67aec9a43f722aa26ca8c7981d\" returns successfully"
Oct 2 19:35:20.717573 kubelet[1440]: I1002 19:35:20.717511 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-lib-modules\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.717573 kubelet[1440]: I1002 19:35:20.717559 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-psvrc\" (UniqueName: \"kubernetes.io/projected/543d79be-ad1f-439b-a3c6-e043a0b4846e-kube-api-access-psvrc\") pod \"543d79be-ad1f-439b-a3c6-e043a0b4846e\" (UID: \"543d79be-ad1f-439b-a3c6-e043a0b4846e\") "
Oct 2 19:35:20.717573 kubelet[1440]: I1002 19:35:20.717581 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd616647-db4c-41b2-b917-d5695fc46f4e-clustermesh-secrets\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.717801 kubelet[1440]: I1002 19:35:20.717579 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:35:20.717801 kubelet[1440]: I1002 19:35:20.717607 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/543d79be-ad1f-439b-a3c6-e043a0b4846e-cilium-config-path\") pod \"543d79be-ad1f-439b-a3c6-e043a0b4846e\" (UID: \"543d79be-ad1f-439b-a3c6-e043a0b4846e\") "
Oct 2 19:35:20.717801 kubelet[1440]: I1002 19:35:20.717630 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-host-proc-sys-net\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.717801 kubelet[1440]: I1002 19:35:20.717650 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6qpt\" (UniqueName: \"kubernetes.io/projected/dd616647-db4c-41b2-b917-d5695fc46f4e-kube-api-access-l6qpt\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.717801 kubelet[1440]: I1002 19:35:20.717668 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-xtables-lock\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.717801 kubelet[1440]: I1002 19:35:20.717685 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-cni-path\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.717942 kubelet[1440]: I1002 19:35:20.717700 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-hostproc\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.717942 kubelet[1440]: I1002 19:35:20.717715 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-etc-cni-netd\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.717942 kubelet[1440]: I1002 19:35:20.717736 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-cgroup\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.717942 kubelet[1440]: I1002 19:35:20.717756 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-ipsec-secrets\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.717942 kubelet[1440]: I1002 19:35:20.717773 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-bpf-maps\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.717942 kubelet[1440]: I1002 19:35:20.717792 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-config-path\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.718077 kubelet[1440]: I1002 19:35:20.717809 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd616647-db4c-41b2-b917-d5695fc46f4e-hubble-tls\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.718077 kubelet[1440]: I1002 19:35:20.717825 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-run\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.718077 kubelet[1440]: I1002 19:35:20.717841 1440 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-host-proc-sys-kernel\") pod \"dd616647-db4c-41b2-b917-d5695fc46f4e\" (UID: \"dd616647-db4c-41b2-b917-d5695fc46f4e\") "
Oct 2 19:35:20.718077 kubelet[1440]: I1002 19:35:20.717879 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:35:20.718077 kubelet[1440]: W1002 19:35:20.718061 1440 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/543d79be-ad1f-439b-a3c6-e043a0b4846e/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled
Oct 2 19:35:20.718194 kubelet[1440]: I1002 19:35:20.718092 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:35:20.718194 kubelet[1440]: I1002 19:35:20.718124 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:35:20.718417 kubelet[1440]: I1002 19:35:20.718267 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Oct 2 19:35:20.718417 kubelet[1440]: I1002 19:35:20.718273 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "bpf-maps".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:20.718417 kubelet[1440]: I1002 19:35:20.718306 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-hostproc" (OuterVolumeSpecName: "hostproc") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:20.718417 kubelet[1440]: I1002 19:35:20.718295 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-cni-path" (OuterVolumeSpecName: "cni-path") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:20.718417 kubelet[1440]: I1002 19:35:20.718327 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:20.718927 kubelet[1440]: W1002 19:35:20.718712 1440 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/dd616647-db4c-41b2-b917-d5695fc46f4e/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:35:20.720962 kubelet[1440]: I1002 19:35:20.719093 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:35:20.720962 kubelet[1440]: I1002 19:35:20.719961 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/543d79be-ad1f-439b-a3c6-e043a0b4846e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "543d79be-ad1f-439b-a3c6-e043a0b4846e" (UID: "543d79be-ad1f-439b-a3c6-e043a0b4846e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:35:20.720962 kubelet[1440]: I1002 19:35:20.720741 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:35:20.721661 kubelet[1440]: I1002 19:35:20.721636 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/543d79be-ad1f-439b-a3c6-e043a0b4846e-kube-api-access-psvrc" (OuterVolumeSpecName: "kube-api-access-psvrc") pod "543d79be-ad1f-439b-a3c6-e043a0b4846e" (UID: "543d79be-ad1f-439b-a3c6-e043a0b4846e"). InnerVolumeSpecName "kube-api-access-psvrc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:35:20.721882 kubelet[1440]: I1002 19:35:20.721836 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "cilium-ipsec-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:35:20.721936 kubelet[1440]: I1002 19:35:20.721927 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dd616647-db4c-41b2-b917-d5695fc46f4e-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:35:20.723426 kubelet[1440]: I1002 19:35:20.723359 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd616647-db4c-41b2-b917-d5695fc46f4e-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:35:20.724229 kubelet[1440]: I1002 19:35:20.724202 1440 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dd616647-db4c-41b2-b917-d5695fc46f4e-kube-api-access-l6qpt" (OuterVolumeSpecName: "kube-api-access-l6qpt") pod "dd616647-db4c-41b2-b917-d5695fc46f4e" (UID: "dd616647-db4c-41b2-b917-d5695fc46f4e"). InnerVolumeSpecName "kube-api-access-l6qpt". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:35:20.776410 kubelet[1440]: I1002 19:35:20.776367 1440 scope.go:115] "RemoveContainer" containerID="5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390" Oct 2 19:35:20.778435 env[1138]: time="2023-10-02T19:35:20.778373963Z" level=info msg="RemoveContainer for \"5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390\"" Oct 2 19:35:20.780292 systemd[1]: Removed slice kubepods-burstable-poddd616647_db4c_41b2_b917_d5695fc46f4e.slice. Oct 2 19:35:20.782619 systemd[1]: Removed slice kubepods-besteffort-pod543d79be_ad1f_439b_a3c6_e043a0b4846e.slice. 
Oct 2 19:35:20.783369 env[1138]: time="2023-10-02T19:35:20.783317346Z" level=info msg="RemoveContainer for \"5d8f5838b683979b1e57c43513122d7b2db302f4f90788dc6f4c02f4be5ce390\" returns successfully" Oct 2 19:35:20.783611 kubelet[1440]: I1002 19:35:20.783590 1440 scope.go:115] "RemoveContainer" containerID="b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6" Oct 2 19:35:20.784624 env[1138]: time="2023-10-02T19:35:20.784584032Z" level=info msg="RemoveContainer for \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\"" Oct 2 19:35:20.794835 env[1138]: time="2023-10-02T19:35:20.794787320Z" level=info msg="RemoveContainer for \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\" returns successfully" Oct 2 19:35:20.795063 kubelet[1440]: I1002 19:35:20.795011 1440 scope.go:115] "RemoveContainer" containerID="b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6" Oct 2 19:35:20.795269 env[1138]: time="2023-10-02T19:35:20.795202602Z" level=error msg="ContainerStatus for \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\": not found" Oct 2 19:35:20.795406 kubelet[1440]: E1002 19:35:20.795372 1440 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\": not found" containerID="b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6" Oct 2 19:35:20.795456 kubelet[1440]: I1002 19:35:20.795440 1440 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6} err="failed to get container status \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\": rpc error: code = 
NotFound desc = an error occurred when try to find container \"b9039f2539e4176dbcfa67b8c42e1f7266311fb48252a5498f33096b88ebbbd6\": not found" Oct 2 19:35:20.819049 kubelet[1440]: I1002 19:35:20.818942 1440 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-cni-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819049 kubelet[1440]: I1002 19:35:20.818979 1440 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-hostproc\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819049 kubelet[1440]: I1002 19:35:20.818991 1440 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-etc-cni-netd\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819049 kubelet[1440]: I1002 19:35:20.819001 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-run\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819049 kubelet[1440]: I1002 19:35:20.819012 1440 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-host-proc-sys-kernel\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819049 kubelet[1440]: I1002 19:35:20.819021 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-cgroup\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819049 kubelet[1440]: I1002 19:35:20.819037 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-ipsec-secrets\") on node \"10.0.0.12\" 
DevicePath \"\"" Oct 2 19:35:20.819049 kubelet[1440]: I1002 19:35:20.819046 1440 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-bpf-maps\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819316 kubelet[1440]: I1002 19:35:20.819055 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dd616647-db4c-41b2-b917-d5695fc46f4e-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819316 kubelet[1440]: I1002 19:35:20.819063 1440 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dd616647-db4c-41b2-b917-d5695fc46f4e-hubble-tls\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819316 kubelet[1440]: I1002 19:35:20.819072 1440 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-lib-modules\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819316 kubelet[1440]: I1002 19:35:20.819081 1440 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-psvrc\" (UniqueName: \"kubernetes.io/projected/543d79be-ad1f-439b-a3c6-e043a0b4846e-kube-api-access-psvrc\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819316 kubelet[1440]: I1002 19:35:20.819090 1440 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dd616647-db4c-41b2-b917-d5695fc46f4e-clustermesh-secrets\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819316 kubelet[1440]: I1002 19:35:20.819098 1440 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/543d79be-ad1f-439b-a3c6-e043a0b4846e-cilium-config-path\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819316 kubelet[1440]: I1002 19:35:20.819113 1440 
reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-host-proc-sys-net\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819316 kubelet[1440]: I1002 19:35:20.819122 1440 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-l6qpt\" (UniqueName: \"kubernetes.io/projected/dd616647-db4c-41b2-b917-d5695fc46f4e-kube-api-access-l6qpt\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:20.819538 kubelet[1440]: I1002 19:35:20.819131 1440 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dd616647-db4c-41b2-b917-d5695fc46f4e-xtables-lock\") on node \"10.0.0.12\" DevicePath \"\"" Oct 2 19:35:21.320015 kubelet[1440]: E1002 19:35:21.319968 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:35:21.342861 kubelet[1440]: I1002 19:35:21.342829 1440 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=543d79be-ad1f-439b-a3c6-e043a0b4846e path="/var/lib/kubelet/pods/543d79be-ad1f-439b-a3c6-e043a0b4846e/volumes" Oct 2 19:35:21.343224 kubelet[1440]: I1002 19:35:21.343198 1440 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=dd616647-db4c-41b2-b917-d5695fc46f4e path="/var/lib/kubelet/pods/dd616647-db4c-41b2-b917-d5695fc46f4e/volumes" Oct 2 19:35:21.560027 systemd[1]: var-lib-kubelet-pods-543d79be\x2dad1f\x2d439b\x2da3c6\x2de043a0b4846e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dpsvrc.mount: Deactivated successfully. Oct 2 19:35:21.560128 systemd[1]: var-lib-kubelet-pods-dd616647\x2ddb4c\x2d41b2\x2db917\x2dd5695fc46f4e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dl6qpt.mount: Deactivated successfully. 
Oct 2 19:35:21.560190 systemd[1]: var-lib-kubelet-pods-dd616647\x2ddb4c\x2d41b2\x2db917\x2dd5695fc46f4e-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:35:21.560245 systemd[1]: var-lib-kubelet-pods-dd616647\x2ddb4c\x2d41b2\x2db917\x2dd5695fc46f4e-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:35:21.560299 systemd[1]: var-lib-kubelet-pods-dd616647\x2ddb4c\x2d41b2\x2db917\x2dd5695fc46f4e-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:35:22.320435 kubelet[1440]: E1002 19:35:22.320369 1440 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"