Oct 2 20:45:00.109902 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023 Oct 2 20:45:00.109940 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 20:45:00.109958 kernel: BIOS-provided physical RAM map: Oct 2 20:45:00.109970 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved Oct 2 20:45:00.109991 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable Oct 2 20:45:00.110004 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved Oct 2 20:45:00.110024 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable Oct 2 20:45:00.110044 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved Oct 2 20:45:00.110057 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable Oct 2 20:45:00.110070 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved Oct 2 20:45:00.110083 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data Oct 2 20:45:00.110096 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS Oct 2 20:45:00.110110 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable Oct 2 20:45:00.110123 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved Oct 2 20:45:00.110144 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable Oct 2 20:45:00.110159 kernel: NX (Execute Disable) protection: active Oct 2 20:45:00.110173 kernel: efi: EFI v2.70 by EDK II Oct 2 20:45:00.110189 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbe386218 RNG=0xbfb73018 TPMEventLog=0xbe2c8018 Oct 2 20:45:00.110203 kernel: random: crng init done Oct 2 20:45:00.110217 kernel: SMBIOS 2.4 present. 
Oct 2 20:45:00.110250 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/04/2023 Oct 2 20:45:00.110265 kernel: Hypervisor detected: KVM Oct 2 20:45:00.110283 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Oct 2 20:45:00.110298 kernel: kvm-clock: cpu 0, msr 180f8a001, primary cpu clock Oct 2 20:45:00.110312 kernel: kvm-clock: using sched offset of 13040060683 cycles Oct 2 20:45:00.110328 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Oct 2 20:45:00.110343 kernel: tsc: Detected 2299.998 MHz processor Oct 2 20:45:00.110365 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Oct 2 20:45:00.110380 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Oct 2 20:45:00.110395 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000 Oct 2 20:45:00.110410 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Oct 2 20:45:00.110425 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000 Oct 2 20:45:00.110443 kernel: Using GB pages for direct mapping Oct 2 20:45:00.110465 kernel: Secure boot disabled Oct 2 20:45:00.110480 kernel: ACPI: Early table checksum verification disabled Oct 2 20:45:00.110494 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google) Oct 2 20:45:00.110509 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013) Oct 2 20:45:00.110524 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001) Oct 2 20:45:00.110539 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001) Oct 2 20:45:00.110554 kernel: ACPI: FACS 0x00000000BFBF2000 000040 Oct 2 20:45:00.110585 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217) Oct 2 20:45:00.110601 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001) Oct 2 20:45:00.110616 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001) Oct 2 20:45:00.110632 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001) Oct 2 20:45:00.110648 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001) Oct 2 20:45:00.110663 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001) Oct 2 20:45:00.110683 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3] Oct 2 20:45:00.110699 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63] Oct 2 20:45:00.110715 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f] Oct 2 20:45:00.110746 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315] Oct 2 20:45:00.110759 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033] Oct 2 20:45:00.110772 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7] Oct 2 20:45:00.110784 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075] Oct 2 20:45:00.110798 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f] Oct 2 20:45:00.110811 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027] Oct 2 20:45:00.110829 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Oct 2 20:45:00.110981 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Oct 2 20:45:00.110996 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff] Oct 2 20:45:00.111013 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff] Oct 2 20:45:00.111027 kernel: ACPI: SRAT: Node 0 PXM 0 
[mem 0x100000000-0x21fffffff] Oct 2 20:45:00.111047 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff] Oct 2 20:45:00.111061 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff] Oct 2 20:45:00.111077 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff] Oct 2 20:45:00.111092 kernel: Zone ranges: Oct 2 20:45:00.111245 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Oct 2 20:45:00.111260 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Oct 2 20:45:00.111275 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff] Oct 2 20:45:00.111289 kernel: Movable zone start for each node Oct 2 20:45:00.111303 kernel: Early memory node ranges Oct 2 20:45:00.111318 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff] Oct 2 20:45:00.111465 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff] Oct 2 20:45:00.111481 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff] Oct 2 20:45:00.111495 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff] Oct 2 20:45:00.111515 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff] Oct 2 20:45:00.111530 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff] Oct 2 20:45:00.111545 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Oct 2 20:45:00.111561 kernel: On node 0, zone DMA: 11 pages in unavailable ranges Oct 2 20:45:00.111694 kernel: On node 0, zone DMA: 104 pages in unavailable ranges Oct 2 20:45:00.111712 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges Oct 2 20:45:00.111759 kernel: On node 0, zone Normal: 32 pages in unavailable ranges Oct 2 20:45:00.111774 kernel: ACPI: PM-Timer IO Port: 0xb008 Oct 2 20:45:00.111790 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Oct 2 20:45:00.111810 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Oct 2 20:45:00.111825 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Oct 2 20:45:00.111839 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Oct 2 20:45:00.111855 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Oct 2 20:45:00.111871 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Oct 2 20:45:00.111887 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Oct 2 20:45:00.111904 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Oct 2 20:45:00.111920 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices Oct 2 20:45:00.111937 kernel: Booting paravirtualized kernel on KVM Oct 2 20:45:00.111957 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Oct 2 20:45:00.111974 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Oct 2 20:45:00.111990 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Oct 2 20:45:00.112006 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Oct 2 20:45:00.112022 kernel: pcpu-alloc: [0] 0 1 Oct 2 20:45:00.112038 kernel: kvm-guest: PV spinlocks enabled Oct 2 20:45:00.112055 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Oct 2 20:45:00.112071 kernel: Built 1 zonelists, mobility grouping on. 
Total pages: 1931256 Oct 2 20:45:00.112087 kernel: Policy zone: Normal Oct 2 20:45:00.112110 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 20:45:00.112125 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Oct 2 20:45:00.112139 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Oct 2 20:45:00.112154 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Oct 2 20:45:00.112170 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Oct 2 20:45:00.112187 kernel: Memory: 7536584K/7860584K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 323740K reserved, 0K cma-reserved) Oct 2 20:45:00.112202 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Oct 2 20:45:00.112217 kernel: Kernel/User page tables isolation: enabled Oct 2 20:45:00.112236 kernel: ftrace: allocating 34453 entries in 135 pages Oct 2 20:45:00.112250 kernel: ftrace: allocated 135 pages with 4 groups Oct 2 20:45:00.112266 kernel: rcu: Hierarchical RCU implementation. Oct 2 20:45:00.112282 kernel: rcu: RCU event tracing is enabled. Oct 2 20:45:00.112297 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Oct 2 20:45:00.112312 kernel: Rude variant of Tasks RCU enabled. Oct 2 20:45:00.112326 kernel: Tracing variant of Tasks RCU enabled. Oct 2 20:45:00.112342 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Oct 2 20:45:00.112366 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Oct 2 20:45:00.112388 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Oct 2 20:45:00.112419 kernel: Console: colour dummy device 80x25 Oct 2 20:45:00.112436 kernel: printk: console [ttyS0] enabled Oct 2 20:45:00.112457 kernel: ACPI: Core revision 20210730 Oct 2 20:45:00.112474 kernel: APIC: Switch to symmetric I/O mode setup Oct 2 20:45:00.112490 kernel: x2apic enabled Oct 2 20:45:00.112507 kernel: Switched APIC routing to physical x2apic. Oct 2 20:45:00.112525 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1 Oct 2 20:45:00.112542 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Oct 2 20:45:00.112560 kernel: Calibrating delay loop (skipped) preset value.. 
4599.99 BogoMIPS (lpj=2299998) Oct 2 20:45:00.112581 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024 Oct 2 20:45:00.112597 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4 Oct 2 20:45:00.112613 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Oct 2 20:45:00.112629 kernel: Spectre V2 : Mitigation: IBRS Oct 2 20:45:00.112645 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Oct 2 20:45:00.112661 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Oct 2 20:45:00.112681 kernel: RETBleed: Mitigation: IBRS Oct 2 20:45:00.112698 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Oct 2 20:45:00.112715 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl Oct 2 20:45:00.112750 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp Oct 2 20:45:00.112766 kernel: MDS: Mitigation: Clear CPU buffers Oct 2 20:45:00.112783 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Oct 2 20:45:00.112801 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Oct 2 20:45:00.112817 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Oct 2 20:45:00.112833 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Oct 2 20:45:00.112855 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Oct 2 20:45:00.112872 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format. Oct 2 20:45:00.112888 kernel: Freeing SMP alternatives memory: 32K Oct 2 20:45:00.112903 kernel: pid_max: default: 32768 minimum: 301 Oct 2 20:45:00.112939 kernel: LSM: Security Framework initializing Oct 2 20:45:00.112957 kernel: SELinux: Initializing. Oct 2 20:45:00.112974 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 20:45:00.112992 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Oct 2 20:45:00.113010 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0) Oct 2 20:45:00.113032 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only. Oct 2 20:45:00.113050 kernel: signal: max sigframe size: 1776 Oct 2 20:45:00.113068 kernel: rcu: Hierarchical SRCU implementation. Oct 2 20:45:00.113085 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Oct 2 20:45:00.113106 kernel: smp: Bringing up secondary CPUs ... Oct 2 20:45:00.113122 kernel: x86: Booting SMP configuration: Oct 2 20:45:00.113139 kernel: .... node #0, CPUs: #1 Oct 2 20:45:00.113157 kernel: kvm-clock: cpu 1, msr 180f8a041, secondary cpu clock Oct 2 20:45:00.113175 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details. Oct 2 20:45:00.113198 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Oct 2 20:45:00.113216 kernel: smp: Brought up 1 node, 2 CPUs Oct 2 20:45:00.113233 kernel: smpboot: Max logical packages: 1 Oct 2 20:45:00.113251 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS) Oct 2 20:45:00.113269 kernel: devtmpfs: initialized Oct 2 20:45:00.113295 kernel: x86/mm: Memory block size: 128MB Oct 2 20:45:00.113312 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes) Oct 2 20:45:00.113330 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Oct 2 20:45:00.113354 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Oct 2 20:45:00.113376 kernel: pinctrl core: initialized pinctrl subsystem Oct 2 20:45:00.113394 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Oct 2 20:45:00.113412 kernel: audit: initializing netlink subsys (disabled) Oct 2 20:45:00.113429 kernel: audit: type=2000 audit(1696279499.142:1): state=initialized audit_enabled=0 res=1 Oct 2 20:45:00.113446 kernel: thermal_sys: Registered thermal governor 'step_wise' Oct 2 20:45:00.113461 kernel: thermal_sys: Registered thermal governor 'user_space' Oct 2 20:45:00.113477 kernel: cpuidle: using governor menu Oct 2 20:45:00.113493 kernel: ACPI: bus type PCI registered Oct 2 20:45:00.113510 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Oct 2 20:45:00.113532 kernel: dca service started, version 1.12.1 Oct 2 20:45:00.113550 kernel: PCI: Using configuration type 1 for base access Oct 2 20:45:00.113568 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. Oct 2 20:45:00.113586 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Oct 2 20:45:00.113604 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Oct 2 20:45:00.113621 kernel: ACPI: Added _OSI(Module Device) Oct 2 20:45:00.113638 kernel: ACPI: Added _OSI(Processor Device) Oct 2 20:45:00.113655 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Oct 2 20:45:00.113672 kernel: ACPI: Added _OSI(Processor Aggregator Device) Oct 2 20:45:00.113693 kernel: ACPI: Added _OSI(Linux-Dell-Video) Oct 2 20:45:00.113711 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Oct 2 20:45:00.113741 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Oct 2 20:45:00.113759 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded Oct 2 20:45:00.113776 kernel: ACPI: Interpreter enabled Oct 2 20:45:00.113794 kernel: ACPI: PM: (supports S0 S3 S5) Oct 2 20:45:00.113811 kernel: ACPI: Using IOAPIC for interrupt routing Oct 2 20:45:00.113827 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Oct 2 20:45:00.113844 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F Oct 2 20:45:00.113865 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Oct 2 20:45:00.114111 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3] Oct 2 20:45:00.114287 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge. 
Oct 2 20:45:00.114310 kernel: PCI host bridge to bus 0000:00 Oct 2 20:45:00.114487 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Oct 2 20:45:00.114635 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Oct 2 20:45:00.114818 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Oct 2 20:45:00.114966 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window] Oct 2 20:45:00.115110 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Oct 2 20:45:00.115297 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 Oct 2 20:45:00.115478 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100 Oct 2 20:45:00.115671 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 Oct 2 20:45:00.115851 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI Oct 2 20:45:00.116039 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000 Oct 2 20:45:00.116201 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f] Oct 2 20:45:00.116381 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f] Oct 2 20:45:00.116566 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000 Oct 2 20:45:00.116773 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f] Oct 2 20:45:00.116950 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f] Oct 2 20:45:00.117119 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00 Oct 2 20:45:00.117278 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f] Oct 2 20:45:00.117466 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f] Oct 2 20:45:00.117488 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Oct 2 20:45:00.117504 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Oct 2 20:45:00.117519 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Oct 2 20:45:00.117536 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Oct 2 20:45:00.117550 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9 Oct 2 20:45:00.117571 kernel: iommu: Default domain type: Translated Oct 2 20:45:00.117586 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Oct 2 20:45:00.117608 kernel: vgaarb: loaded Oct 2 20:45:00.117623 kernel: pps_core: LinuxPPS API ver. 1 registered Oct 2 20:45:00.117644 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Oct 2 20:45:00.117665 kernel: PTP clock support registered Oct 2 20:45:00.117680 kernel: Registered efivars operations Oct 2 20:45:00.117696 kernel: PCI: Using ACPI for IRQ routing Oct 2 20:45:00.117711 kernel: PCI: pci_cache_line_size set to 64 bytes Oct 2 20:45:00.117747 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff] Oct 2 20:45:00.117764 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff] Oct 2 20:45:00.117779 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff] Oct 2 20:45:00.117793 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff] Oct 2 20:45:00.125773 kernel: clocksource: Switched to clocksource kvm-clock Oct 2 20:45:00.125814 kernel: VFS: Disk quotas dquot_6.6.0 Oct 2 20:45:00.125833 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Oct 2 20:45:00.125851 kernel: pnp: PnP ACPI init Oct 2 20:45:00.125868 kernel: pnp: PnP ACPI: found 7 devices Oct 2 20:45:00.125892 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Oct 2 20:45:00.125910 kernel: NET: Registered PF_INET protocol family Oct 2 20:45:00.125928 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Oct 2 20:45:00.125945 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Oct 2 20:45:00.125962 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Oct 2 20:45:00.125979 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Oct 2 20:45:00.125996 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Oct 2 20:45:00.126013 kernel: TCP: Hash tables configured (established 65536 bind 65536) Oct 2 20:45:00.126030 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Oct 2 20:45:00.126051 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Oct 2 20:45:00.126068 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Oct 2 20:45:00.126086 kernel: NET: Registered PF_XDP protocol family Oct 2 20:45:00.126274 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Oct 2 20:45:00.126420 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Oct 2 20:45:00.126559 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Oct 2 20:45:00.126695 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window] Oct 2 20:45:00.126900 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers Oct 2 20:45:00.126929 kernel: PCI: CLS 0 bytes, default 64 Oct 2 20:45:00.126947 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Oct 2 20:45:00.126964 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB) Oct 2 20:45:00.126981 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Oct 2 20:45:00.126999 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns Oct 2 20:45:00.127016 kernel: clocksource: Switched to clocksource tsc Oct 2 20:45:00.127033 kernel: Initialise system trusted keyrings Oct 2 20:45:00.127050 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Oct 2 20:45:00.127071 kernel: Key type asymmetric registered Oct 2 20:45:00.127087 kernel: Asymmetric key parser 'x509' registered Oct 2 20:45:00.127104 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Oct 2 20:45:00.127121 kernel: io scheduler mq-deadline registered Oct 2 
20:45:00.127139 kernel: io scheduler kyber registered Oct 2 20:45:00.127155 kernel: io scheduler bfq registered Oct 2 20:45:00.127172 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Oct 2 20:45:00.127190 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11 Oct 2 20:45:00.127364 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver Oct 2 20:45:00.127390 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10 Oct 2 20:45:00.127546 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver Oct 2 20:45:00.127567 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10 Oct 2 20:45:00.127735 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver Oct 2 20:45:00.127757 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Oct 2 20:45:00.127774 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Oct 2 20:45:00.127798 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Oct 2 20:45:00.127815 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A Oct 2 20:45:00.127832 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A Oct 2 20:45:00.128005 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0) Oct 2 20:45:00.128030 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Oct 2 20:45:00.128045 kernel: i8042: Warning: Keylock active Oct 2 20:45:00.128061 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Oct 2 20:45:00.128077 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Oct 2 20:45:00.128235 kernel: rtc_cmos 00:00: RTC can wake from S4 Oct 2 20:45:00.128378 kernel: rtc_cmos 00:00: registered as rtc0 Oct 2 20:45:00.128523 kernel: rtc_cmos 00:00: setting system clock to 2023-10-02T20:44:59 UTC (1696279499) Oct 2 20:45:00.128656 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram Oct 2 20:45:00.128675 kernel: intel_pstate: CPU model not supported Oct 2 20:45:00.128692 kernel: pstore: Registered efi as persistent store backend Oct 2 20:45:00.128708 kernel: NET: Registered PF_INET6 protocol family Oct 2 20:45:00.129109 kernel: Segment Routing with IPv6 Oct 2 20:45:00.129136 kernel: In-situ OAM (IOAM) with IPv6 Oct 2 20:45:00.129153 kernel: NET: Registered PF_PACKET protocol family Oct 2 20:45:00.129170 kernel: Key type dns_resolver registered Oct 2 20:45:00.129193 kernel: IPI shorthand broadcast: enabled Oct 2 20:45:00.129211 kernel: sched_clock: Marking stable (732818969, 139940586)->(903870213, -31110658) Oct 2 20:45:00.129229 kernel: registered taskstats version 1 Oct 2 20:45:00.129247 kernel: Loading compiled-in X.509 certificates Oct 2 20:45:00.129265 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Oct 2 20:45:00.129282 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861' Oct 2 20:45:00.129298 kernel: Key type .fscrypt registered Oct 2 20:45:00.129316 kernel: Key type fscrypt-provisioning registered Oct 2 20:45:00.129333 kernel: pstore: Using crash dump compression: deflate Oct 2 20:45:00.129354 kernel: ima: Allocated hash algorithm: sha1 Oct 2 20:45:00.129371 kernel: ima: No architecture policies found Oct 2 20:45:00.129389 kernel: Freeing unused kernel image (initmem) memory: 45372K Oct 2 20:45:00.129407 kernel: Write protecting the kernel read-only data: 28672k Oct 2 20:45:00.129424 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Oct 2 20:45:00.129441 kernel: Freeing unused kernel image 
(rodata/data gap) memory: 644K Oct 2 20:45:00.129458 kernel: Run /init as init process Oct 2 20:45:00.129475 kernel: with arguments: Oct 2 20:45:00.129495 kernel: /init Oct 2 20:45:00.129511 kernel: with environment: Oct 2 20:45:00.129529 kernel: HOME=/ Oct 2 20:45:00.129546 kernel: TERM=linux Oct 2 20:45:00.129572 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Oct 2 20:45:00.129595 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 20:45:00.129617 systemd[1]: Detected virtualization kvm. Oct 2 20:45:00.129635 systemd[1]: Detected architecture x86-64. Oct 2 20:45:00.129656 systemd[1]: Running in initrd. Oct 2 20:45:00.129673 systemd[1]: No hostname configured, using default hostname. Oct 2 20:45:00.129691 systemd[1]: Hostname set to . Oct 2 20:45:00.129710 systemd[1]: Initializing machine ID from VM UUID. Oct 2 20:45:00.129745 systemd[1]: Queued start job for default target initrd.target. Oct 2 20:45:00.129764 systemd[1]: Started systemd-ask-password-console.path. Oct 2 20:45:00.129790 systemd[1]: Reached target cryptsetup.target. Oct 2 20:45:00.129807 systemd[1]: Reached target paths.target. Oct 2 20:45:00.129828 systemd[1]: Reached target slices.target. Oct 2 20:45:00.129846 systemd[1]: Reached target swap.target. Oct 2 20:45:00.129865 systemd[1]: Reached target timers.target. Oct 2 20:45:00.129884 systemd[1]: Listening on iscsid.socket. Oct 2 20:45:00.129903 systemd[1]: Listening on iscsiuio.socket. Oct 2 20:45:00.129921 systemd[1]: Listening on systemd-journald-audit.socket. Oct 2 20:45:00.129938 systemd[1]: Listening on systemd-journald-dev-log.socket. Oct 2 20:45:00.129960 systemd[1]: Listening on systemd-journald.socket. Oct 2 20:45:00.129978 systemd[1]: Listening on systemd-networkd.socket. Oct 2 20:45:00.129996 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 20:45:00.130014 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 20:45:00.130032 systemd[1]: Reached target sockets.target. Oct 2 20:45:00.130050 systemd[1]: Starting kmod-static-nodes.service... Oct 2 20:45:00.130068 systemd[1]: Finished network-cleanup.service. Oct 2 20:45:00.130086 systemd[1]: Starting systemd-fsck-usr.service... Oct 2 20:45:00.130103 systemd[1]: Starting systemd-journald.service... Oct 2 20:45:00.130123 systemd[1]: Starting systemd-modules-load.service... Oct 2 20:45:00.130141 systemd[1]: Starting systemd-resolved.service... Oct 2 20:45:00.130159 systemd[1]: Starting systemd-vconsole-setup.service... Oct 2 20:45:00.130209 systemd-journald[189]: Journal started Oct 2 20:45:00.130312 systemd-journald[189]: Runtime Journal (/run/log/journal/e9c9bbac379ff0533953ac7b36188581) is 8.0M, max 148.8M, 140.8M free. Oct 2 20:45:00.133755 systemd[1]: Started systemd-journald.service. Oct 2 20:45:00.136234 systemd-modules-load[190]: Inserted module 'overlay' Oct 2 20:45:00.144945 kernel: audit: type=1130 audit(1696279500.138:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:45:00.145095 systemd[1]: Finished kmod-static-nodes.service. Oct 2 20:45:00.154890 kernel: audit: type=1130 audit(1696279500.145:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.151436 systemd[1]: Finished systemd-fsck-usr.service. Oct 2 20:45:00.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.163121 systemd[1]: Finished systemd-vconsole-setup.service. Oct 2 20:45:00.203239 kernel: audit: type=1130 audit(1696279500.161:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.203284 kernel: audit: type=1130 audit(1696279500.170:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.203309 kernel: audit: type=1130 audit(1696279500.189:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.203332 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Oct 2 20:45:00.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.173267 systemd[1]: Starting dracut-cmdline-ask.service... Oct 2 20:45:00.176929 systemd-resolved[191]: Positive Trust Anchors: Oct 2 20:45:00.176941 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 20:45:00.177002 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 20:45:00.182019 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Oct 2 20:45:00.183531 systemd-resolved[191]: Defaulting to hostname 'linux'. Oct 2 20:45:00.187101 systemd[1]: Started systemd-resolved.service. Oct 2 20:45:00.190985 systemd[1]: Reached target nss-lookup.target. Oct 2 20:45:00.199150 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. 
Oct 2 20:45:00.201000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.208749 kernel: audit: type=1130 audit(1696279500.201:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.208790 kernel: Bridge firewalling registered Oct 2 20:45:00.209594 systemd-modules-load[190]: Inserted module 'br_netfilter' Oct 2 20:45:00.227681 systemd[1]: Finished dracut-cmdline-ask.service. Oct 2 20:45:00.248890 kernel: audit: type=1130 audit(1696279500.230:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.248931 kernel: SCSI subsystem initialized Oct 2 20:45:00.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.233244 systemd[1]: Starting dracut-cmdline.service... Oct 2 20:45:00.257615 dracut-cmdline[206]: dracut-dracut-053 Oct 2 20:45:00.263130 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Oct 2 20:45:00.263185 kernel: device-mapper: uevent: version 1.0.3 Oct 2 20:45:00.265654 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Oct 2 20:45:00.265707 dracut-cmdline[206]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1 Oct 2 20:45:00.270909 systemd-modules-load[190]: Inserted module 'dm_multipath' Oct 2 20:45:00.271881 systemd[1]: Finished systemd-modules-load.service. Oct 2 20:45:00.287841 kernel: audit: type=1130 audit(1696279500.281:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.286712 systemd[1]: Starting systemd-sysctl.service... Oct 2 20:45:00.298600 systemd[1]: Finished systemd-sysctl.service. Oct 2 20:45:00.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.308785 kernel: audit: type=1130 audit(1696279500.300:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.355758 kernel: Loading iSCSI transport class v2.0-870. 
Oct 2 20:45:00.369767 kernel: iscsi: registered transport (tcp) Oct 2 20:45:00.393768 kernel: iscsi: registered transport (qla4xxx) Oct 2 20:45:00.393865 kernel: QLogic iSCSI HBA Driver Oct 2 20:45:00.438599 systemd[1]: Finished dracut-cmdline.service. Oct 2 20:45:00.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.440224 systemd[1]: Starting dracut-pre-udev.service... Oct 2 20:45:00.497808 kernel: raid6: avx2x4 gen() 18147 MB/s Oct 2 20:45:00.514774 kernel: raid6: avx2x4 xor() 8225 MB/s Oct 2 20:45:00.531766 kernel: raid6: avx2x2 gen() 18112 MB/s Oct 2 20:45:00.548757 kernel: raid6: avx2x2 xor() 18588 MB/s Oct 2 20:45:00.565767 kernel: raid6: avx2x1 gen() 13962 MB/s Oct 2 20:45:00.582766 kernel: raid6: avx2x1 xor() 16152 MB/s Oct 2 20:45:00.599767 kernel: raid6: sse2x4 gen() 11040 MB/s Oct 2 20:45:00.616766 kernel: raid6: sse2x4 xor() 6695 MB/s Oct 2 20:45:00.633767 kernel: raid6: sse2x2 gen() 12013 MB/s Oct 2 20:45:00.650764 kernel: raid6: sse2x2 xor() 7456 MB/s Oct 2 20:45:00.667766 kernel: raid6: sse2x1 gen() 10507 MB/s Oct 2 20:45:00.685363 kernel: raid6: sse2x1 xor() 5172 MB/s Oct 2 20:45:00.685406 kernel: raid6: using algorithm avx2x4 gen() 18147 MB/s Oct 2 20:45:00.685427 kernel: raid6: .... xor() 8225 MB/s, rmw enabled Oct 2 20:45:00.686165 kernel: raid6: using avx2x2 recovery algorithm Oct 2 20:45:00.701766 kernel: xor: automatically using best checksumming function avx Oct 2 20:45:00.805763 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Oct 2 20:45:00.817571 systemd[1]: Finished dracut-pre-udev.service. Oct 2 20:45:00.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.816000 audit: BPF prog-id=7 op=LOAD Oct 2 20:45:00.816000 audit: BPF prog-id=8 op=LOAD Oct 2 20:45:00.819290 systemd[1]: Starting systemd-udevd.service... Oct 2 20:45:00.836563 systemd-udevd[388]: Using default interface naming scheme 'v252'. Oct 2 20:45:00.843955 systemd[1]: Started systemd-udevd.service. Oct 2 20:45:00.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.847749 systemd[1]: Starting dracut-pre-trigger.service... Oct 2 20:45:00.868197 dracut-pre-trigger[393]: rd.md=0: removing MD RAID activation Oct 2 20:45:00.906689 systemd[1]: Finished dracut-pre-trigger.service. Oct 2 20:45:00.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:00.908012 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 20:45:00.972508 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 20:45:00.974000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:01.043753 kernel: cryptd: max_cpu_qlen set to 1000 Oct 2 20:45:01.095465 kernel: AVX2 version of gcm_enc/dec engaged. 
Oct 2 20:45:01.095598 kernel: AES CTR mode by8 optimization enabled Oct 2 20:45:01.095922 kernel: scsi host0: Virtio SCSI HBA Oct 2 20:45:01.127786 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Oct 2 20:45:01.181467 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Oct 2 20:45:01.181691 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Oct 2 20:45:01.184092 kernel: sd 0:0:1:0: [sda] Write Protect is off Oct 2 20:45:01.184288 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Oct 2 20:45:01.184424 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Oct 2 20:45:01.190918 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Oct 2 20:45:01.190978 kernel: GPT:17805311 != 25165823 Oct 2 20:45:01.191001 kernel: GPT:Alternate GPT header not at the end of the disk. Oct 2 20:45:01.192033 kernel: GPT:17805311 != 25165823 Oct 2 20:45:01.192769 kernel: GPT: Use GNU Parted to correct GPT errors. Oct 2 20:45:01.194205 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 20:45:01.196747 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Oct 2 20:45:01.242751 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (437) Oct 2 20:45:01.264312 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Oct 2 20:45:01.286839 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Oct 2 20:45:01.287074 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Oct 2 20:45:01.329557 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Oct 2 20:45:01.334666 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 20:45:01.347100 systemd[1]: Starting disk-uuid.service... Oct 2 20:45:01.378882 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 20:45:01.379057 disk-uuid[504]: Primary Header is updated. Oct 2 20:45:01.379057 disk-uuid[504]: Secondary Entries is updated. Oct 2 20:45:01.379057 disk-uuid[504]: Secondary Header is updated. Oct 2 20:45:01.409847 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 20:45:01.409885 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 20:45:02.404076 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Oct 2 20:45:02.404155 disk-uuid[505]: The operation has completed successfully. Oct 2 20:45:02.477353 systemd[1]: disk-uuid.service: Deactivated successfully. Oct 2 20:45:02.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:02.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:02.477503 systemd[1]: Finished disk-uuid.service. Oct 2 20:45:02.488649 systemd[1]: Starting verity-setup.service... Oct 2 20:45:02.515773 kernel: device-mapper: verity: sha256 using implementation "sha256-generic" Oct 2 20:45:02.596121 systemd[1]: Found device dev-mapper-usr.device. Oct 2 20:45:02.605201 systemd[1]: Finished verity-setup.service. Oct 2 20:45:02.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:02.621163 systemd[1]: Mounting sysusr-usr.mount... 
Oct 2 20:45:02.724770 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 20:45:02.725314 systemd[1]: Mounted sysusr-usr.mount. Oct 2 20:45:02.733094 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Oct 2 20:45:02.734082 systemd[1]: Starting ignition-setup.service... Oct 2 20:45:02.787921 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 20:45:02.787966 kernel: BTRFS info (device sda6): using free space tree Oct 2 20:45:02.787990 kernel: BTRFS info (device sda6): has skinny extents Oct 2 20:45:02.788011 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 2 20:45:02.770645 systemd[1]: Starting parse-ip-for-networkd.service... Oct 2 20:45:02.807093 systemd[1]: mnt-oem.mount: Deactivated successfully. Oct 2 20:45:02.822751 systemd[1]: Finished ignition-setup.service. Oct 2 20:45:02.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:02.833208 systemd[1]: Starting ignition-fetch-offline.service... Oct 2 20:45:02.897188 systemd[1]: Finished parse-ip-for-networkd.service. Oct 2 20:45:02.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:02.896000 audit: BPF prog-id=9 op=LOAD Oct 2 20:45:02.899738 systemd[1]: Starting systemd-networkd.service... Oct 2 20:45:02.933491 systemd-networkd[679]: lo: Link UP Oct 2 20:45:02.933506 systemd-networkd[679]: lo: Gained carrier Oct 2 20:45:02.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:02.934327 systemd-networkd[679]: Enumeration completed Oct 2 20:45:02.934686 systemd-networkd[679]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 20:45:02.936976 systemd[1]: Started systemd-networkd.service. Oct 2 20:45:02.937259 systemd-networkd[679]: eth0: Link UP Oct 2 20:45:02.937268 systemd-networkd[679]: eth0: Gained carrier Oct 2 20:45:03.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:02.949223 systemd[1]: Reached target network.target. Oct 2 20:45:03.031045 iscsid[688]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Oct 2 20:45:03.031045 iscsid[688]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Oct 2 20:45:03.031045 iscsid[688]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Oct 2 20:45:03.031045 iscsid[688]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Oct 2 20:45:03.031045 iscsid[688]: If using hardware iscsi like qla4xxx this message can be ignored. 
Oct 2 20:45:03.031045 iscsid[688]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Oct 2 20:45:03.031045 iscsid[688]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Oct 2 20:45:03.036000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:03.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:02.949887 systemd-networkd[679]: eth0: DHCPv4 address 10.128.0.25/32, gateway 10.128.0.1 acquired from 169.254.169.254 Oct 2 20:45:03.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:03.178915 ignition[617]: Ignition 2.14.0 Oct 2 20:45:03.212000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:02.965033 systemd[1]: Starting iscsiuio.service... Oct 2 20:45:03.178929 ignition[617]: Stage: fetch-offline Oct 2 20:45:02.996187 systemd[1]: Started iscsiuio.service. Oct 2 20:45:03.179008 ignition[617]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:45:03.004668 systemd[1]: Starting iscsid.service... Oct 2 20:45:03.179048 ignition[617]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 20:45:03.024186 systemd[1]: Started iscsid.service. Oct 2 20:45:03.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:03.202510 ignition[617]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 20:45:03.039419 systemd[1]: Starting dracut-initqueue.service... Oct 2 20:45:03.202779 ignition[617]: parsed url from cmdline: "" Oct 2 20:45:03.058547 systemd[1]: Finished dracut-initqueue.service. Oct 2 20:45:03.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:03.202784 ignition[617]: no config URL provided Oct 2 20:45:03.122063 systemd[1]: Reached target remote-fs-pre.target. Oct 2 20:45:03.202791 ignition[617]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 20:45:03.130909 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 20:45:03.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:03.202803 ignition[617]: no config at "/usr/lib/ignition/user.ign" Oct 2 20:45:03.148942 systemd[1]: Reached target remote-fs.target. Oct 2 20:45:03.202812 ignition[617]: failed to fetch config: resource requires networking Oct 2 20:45:03.150098 systemd[1]: Starting dracut-pre-mount.service... 
Oct 2 20:45:03.202973 ignition[617]: Ignition finished successfully Oct 2 20:45:03.167396 systemd[1]: Finished dracut-pre-mount.service. Oct 2 20:45:03.227379 ignition[704]: Ignition 2.14.0 Oct 2 20:45:03.204178 systemd[1]: Finished ignition-fetch-offline.service. Oct 2 20:45:03.227388 ignition[704]: Stage: fetch Oct 2 20:45:03.215272 systemd[1]: Starting ignition-fetch.service... Oct 2 20:45:03.228003 ignition[704]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:45:03.258346 unknown[704]: fetched base config from "system" Oct 2 20:45:03.228034 ignition[704]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 20:45:03.258359 unknown[704]: fetched base config from "system" Oct 2 20:45:03.235051 ignition[704]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 20:45:03.258369 unknown[704]: fetched user config from "gcp" Oct 2 20:45:03.235241 ignition[704]: parsed url from cmdline: "" Oct 2 20:45:03.260927 systemd[1]: Finished ignition-fetch.service. Oct 2 20:45:03.235247 ignition[704]: no config URL provided Oct 2 20:45:03.274409 systemd[1]: Starting ignition-kargs.service... Oct 2 20:45:03.235254 ignition[704]: reading system config file "/usr/lib/ignition/user.ign" Oct 2 20:45:03.297055 systemd[1]: Finished ignition-kargs.service. Oct 2 20:45:03.235265 ignition[704]: no config at "/usr/lib/ignition/user.ign" Oct 2 20:45:03.314455 systemd[1]: Starting ignition-disks.service... Oct 2 20:45:03.235305 ignition[704]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Oct 2 20:45:03.336368 systemd[1]: Finished ignition-disks.service. Oct 2 20:45:03.239633 ignition[704]: GET result: OK Oct 2 20:45:03.359344 systemd[1]: Reached target initrd-root-device.target. Oct 2 20:45:03.239768 ignition[704]: parsing config with SHA512: 74727a4c82315f6e7d3b3b708a3d53817316d2b80f6c8eb9e832946b7a6c20904195de324bed22db64b3ba05f8e3ca9f5847889d61b7b458dbc603b385efbeda Oct 2 20:45:03.374929 systemd[1]: Reached target local-fs-pre.target. Oct 2 20:45:03.259037 ignition[704]: fetch: fetch complete Oct 2 20:45:03.388953 systemd[1]: Reached target local-fs.target. Oct 2 20:45:03.259044 ignition[704]: fetch: fetch passed Oct 2 20:45:03.401926 systemd[1]: Reached target sysinit.target. Oct 2 20:45:03.259092 ignition[704]: Ignition finished successfully Oct 2 20:45:03.416918 systemd[1]: Reached target basic.target. Oct 2 20:45:03.286931 ignition[710]: Ignition 2.14.0 Oct 2 20:45:03.431305 systemd[1]: Starting systemd-fsck-root.service... 
Oct 2 20:45:03.286941 ignition[710]: Stage: kargs Oct 2 20:45:03.287070 ignition[710]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:45:03.287106 ignition[710]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 20:45:03.294460 ignition[710]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 20:45:03.295756 ignition[710]: kargs: kargs passed Oct 2 20:45:03.295820 ignition[710]: Ignition finished successfully Oct 2 20:45:03.325219 ignition[716]: Ignition 2.14.0 Oct 2 20:45:03.325229 ignition[716]: Stage: disks Oct 2 20:45:03.325366 ignition[716]: reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:45:03.325411 ignition[716]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 20:45:03.333003 ignition[716]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 20:45:03.334416 ignition[716]: disks: disks passed Oct 2 20:45:03.334475 ignition[716]: Ignition finished successfully Oct 2 20:45:03.466787 systemd-fsck[724]: ROOT: clean, 603/1628000 files, 124049/1617920 blocks Oct 2 20:45:03.685706 systemd[1]: Finished systemd-fsck-root.service. Oct 2 20:45:03.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:03.694953 systemd[1]: Mounting sysroot.mount... Oct 2 20:45:03.726904 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 20:45:03.723236 systemd[1]: Mounted sysroot.mount. Oct 2 20:45:03.734202 systemd[1]: Reached target initrd-root-fs.target. Oct 2 20:45:03.752582 systemd[1]: Mounting sysroot-usr.mount... Oct 2 20:45:03.764522 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Oct 2 20:45:03.764578 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Oct 2 20:45:03.764617 systemd[1]: Reached target ignition-diskful.target. Oct 2 20:45:03.850937 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (730) Oct 2 20:45:03.851002 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 20:45:03.851025 kernel: BTRFS info (device sda6): using free space tree Oct 2 20:45:03.851048 kernel: BTRFS info (device sda6): has skinny extents Oct 2 20:45:03.780289 systemd[1]: Mounted sysroot-usr.mount. Oct 2 20:45:03.805108 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 20:45:03.882889 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 2 20:45:03.859105 systemd[1]: Starting initrd-setup-root.service... Oct 2 20:45:03.874233 systemd[1]: Mounted sysroot-usr-share-oem.mount. Oct 2 20:45:03.905651 initrd-setup-root[753]: cut: /sysroot/etc/passwd: No such file or directory Oct 2 20:45:03.915861 initrd-setup-root[761]: cut: /sysroot/etc/group: No such file or directory Oct 2 20:45:03.925872 initrd-setup-root[769]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 20:45:03.935912 initrd-setup-root[777]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 20:45:03.962007 systemd[1]: Finished initrd-setup-root.service. 
Oct 2 20:45:03.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:03.963276 systemd[1]: Starting ignition-mount.service... Oct 2 20:45:03.990841 systemd[1]: Starting sysroot-boot.service... Oct 2 20:45:04.002155 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 20:45:04.002433 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 20:45:04.027869 ignition[795]: INFO : Ignition 2.14.0 Oct 2 20:45:04.027869 ignition[795]: INFO : Stage: mount Oct 2 20:45:04.027869 ignition[795]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:45:04.027869 ignition[795]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 20:45:04.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:04.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:04.034830 systemd[1]: Finished sysroot-boot.service. Oct 2 20:45:04.096939 ignition[795]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 20:45:04.096939 ignition[795]: INFO : mount: mount passed Oct 2 20:45:04.096939 ignition[795]: INFO : Ignition finished successfully Oct 2 20:45:04.162920 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (806) Oct 2 20:45:04.162966 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 20:45:04.162990 kernel: BTRFS info (device sda6): using free space tree Oct 2 20:45:04.163010 kernel: BTRFS info (device sda6): has skinny extents Oct 2 20:45:04.163031 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 2 20:45:04.041304 systemd[1]: Finished ignition-mount.service. Oct 2 20:45:04.059212 systemd[1]: Starting ignition-files.service... Oct 2 20:45:04.093975 systemd[1]: Mounting sysroot-usr-share-oem.mount... Oct 2 20:45:04.193933 ignition[825]: INFO : Ignition 2.14.0 Oct 2 20:45:04.193933 ignition[825]: INFO : Stage: files Oct 2 20:45:04.193933 ignition[825]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:45:04.193933 ignition[825]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 20:45:04.249854 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (825) Oct 2 20:45:04.156566 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Oct 2 20:45:04.258882 ignition[825]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 20:45:04.258882 ignition[825]: DEBUG : files: compiled without relabeling support, skipping Oct 2 20:45:04.258882 ignition[825]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 20:45:04.258882 ignition[825]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 20:45:04.258882 ignition[825]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 20:45:04.258882 ignition[825]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 20:45:04.258882 ignition[825]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 20:45:04.258882 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Oct 2 20:45:04.258882 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 20:45:04.258882 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem792612016" Oct 2 20:45:04.258882 ignition[825]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem792612016": device or resource busy Oct 2 20:45:04.258882 ignition[825]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem792612016", trying btrfs: device or resource busy Oct 2 20:45:04.258882 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem792612016" Oct 2 20:45:04.258882 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem792612016" Oct 2 20:45:04.258882 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem792612016" Oct 2 20:45:04.258882 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem792612016" Oct 2 20:45:04.258882 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Oct 2 20:45:04.258882 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 20:45:04.175068 systemd-networkd[679]: eth0: Gained IPv6LL Oct 2 20:45:04.527844 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Oct 2 20:45:04.216648 unknown[825]: wrote ssh authorized keys file for user: core Oct 2 20:45:04.557851 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Oct 2 20:45:04.761958 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Oct 2 20:45:04.785880 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 20:45:04.785880 ignition[825]: INFO : files: 
createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 20:45:04.785880 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz: attempt #1 Oct 2 20:45:04.909003 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Oct 2 20:45:04.973980 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 961188117863ca9af5b084e84691e372efee93ad09daf6a0422e8d75a5803f394d8968064f7ca89f14e8973766201e731241f32538cf2c8d91f0233e786302df Oct 2 20:45:04.998864 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 20:45:04.998864 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Oct 2 20:45:04.998864 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 20:45:04.998864 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2300160994" Oct 2 20:45:04.998864 ignition[825]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2300160994": device or resource busy Oct 2 20:45:04.998864 ignition[825]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2300160994", trying btrfs: device or resource busy Oct 2 20:45:04.998864 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2300160994" Oct 2 20:45:04.998864 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2300160994" Oct 2 20:45:04.998864 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem2300160994" Oct 2 20:45:04.998864 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem2300160994" Oct 2 20:45:04.998864 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Oct 2 20:45:04.998864 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 20:45:04.998864 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(d): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubeadm: attempt #1 Oct 2 20:45:04.988869 systemd[1]: mnt-oem2300160994.mount: Deactivated successfully. 
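Each remote file written above is checked against the SHA-512 digest carried in the config ("file matches expected sum of: ..."). A minimal sketch of that comparison, using the crictl path and digest shown in the log; this is a simplified after-the-fact check, whereas Ignition verifies the content it downloads:

    package main

    import (
        "crypto/sha512"
        "encoding/hex"
        "fmt"
        "io"
        "os"
    )

    func main() {
        // Path and expected digest taken from the log entries above.
        path := "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz"
        expected := "961188117863ca9af5b084e84691e372efee93ad09daf6a0422e8d75a5803f394d8968064f7ca89f14e8973766201e731241f32538cf2c8d91f0233e786302df"

        f, err := os.Open(path)
        if err != nil {
            panic(err)
        }
        defer f.Close()

        h := sha512.New()
        if _, err := io.Copy(h, f); err != nil {
            panic(err)
        }
        got := hex.EncodeToString(h.Sum(nil))
        if got == expected {
            fmt.Println("file matches expected sum")
        } else {
            fmt.Printf("sum mismatch: got %s\n", got)
        }
    }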
Oct 2 20:45:05.233957 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(d): GET result: OK Oct 2 20:45:05.297219 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(d): file matches expected sum of: 43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0dc11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5 Oct 2 20:45:05.320894 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 20:45:05.320894 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 20:45:05.320894 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubelet: attempt #1 Oct 2 20:45:05.369857 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Oct 2 20:45:05.964607 ignition[825]: DEBUG : files: createFilesystemsFiles: createFiles: op(e): file matches expected sum of: 82b36a0b83a1d48ef1f70e3ed2a263b3ce935304cdc0606d194b290217fb04f98628b0d82e200b51ccf5c05c718b2476274ae710bb143fffe28dc6bbf8407d54 Oct 2 20:45:05.988892 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 20:45:05.988892 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh" Oct 2 20:45:05.988892 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 20:45:05.988892 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 20:45:05.988892 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 20:45:05.988892 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Oct 2 20:45:05.988892 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(11): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 20:45:05.988892 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1310400002" Oct 2 20:45:05.988892 ignition[825]: CRITICAL : files: createFilesystemsFiles: createFiles: op(11): op(12): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1310400002": device or resource busy Oct 2 20:45:05.988892 ignition[825]: ERROR : files: createFilesystemsFiles: createFiles: op(11): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1310400002", trying btrfs: device or resource busy Oct 2 20:45:05.988892 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1310400002" Oct 2 20:45:05.988892 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1310400002" Oct 2 20:45:05.988892 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [started] unmounting "/mnt/oem1310400002" Oct 2 20:45:05.988892 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [finished] unmounting "/mnt/oem1310400002" Oct 2 20:45:05.988892 
ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Oct 2 20:45:06.422904 kernel: kauditd_printk_skb: 26 callbacks suppressed Oct 2 20:45:06.422953 kernel: audit: type=1130 audit(1696279506.031:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.422980 kernel: audit: type=1130 audit(1696279506.130:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.423003 kernel: audit: type=1130 audit(1696279506.168:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.423023 kernel: audit: type=1131 audit(1696279506.168:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.423038 kernel: audit: type=1130 audit(1696279506.300:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.423059 kernel: audit: type=1131 audit(1696279506.300:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.130000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:05.983579 systemd[1]: mnt-oem1310400002.mount: Deactivated successfully. 
Oct 2 20:45:06.437955 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Oct 2 20:45:06.437955 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(15): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 20:45:06.437955 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2294315572" Oct 2 20:45:06.437955 ignition[825]: CRITICAL : files: createFilesystemsFiles: createFiles: op(15): op(16): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2294315572": device or resource busy Oct 2 20:45:06.437955 ignition[825]: ERROR : files: createFilesystemsFiles: createFiles: op(15): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2294315572", trying btrfs: device or resource busy Oct 2 20:45:06.437955 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2294315572" Oct 2 20:45:06.437955 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2294315572" Oct 2 20:45:06.437955 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [started] unmounting "/mnt/oem2294315572" Oct 2 20:45:06.437955 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [finished] unmounting "/mnt/oem2294315572" Oct 2 20:45:06.437955 ignition[825]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Oct 2 20:45:06.437955 ignition[825]: INFO : files: op(19): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 20:45:06.437955 ignition[825]: INFO : files: op(19): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 20:45:06.437955 ignition[825]: INFO : files: op(1a): [started] processing unit "oem-gce.service" Oct 2 20:45:06.437955 ignition[825]: INFO : files: op(1a): [finished] processing unit "oem-gce.service" Oct 2 20:45:06.437955 ignition[825]: INFO : files: op(1b): [started] processing unit "oem-gce-enable-oslogin.service" Oct 2 20:45:06.437955 ignition[825]: INFO : files: op(1b): [finished] processing unit "oem-gce-enable-oslogin.service" Oct 2 20:45:06.437955 ignition[825]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service" Oct 2 20:45:06.806908 kernel: audit: type=1130 audit(1696279506.445:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.806984 kernel: audit: type=1131 audit(1696279506.599:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.445000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.011737 systemd[1]: mnt-oem2294315572.mount: Deactivated successfully. 
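The repeated op(..)/op(..) pairs above show the same fallback each time the OEM partition is needed: mounting /dev/disk/by-label/OEM as ext4 fails with "device or resource busy", so the mount is retried as btrfs and later unmounted. A minimal sketch of that fallback, assuming a direct mount(2) call and a temporary mount point like the /mnt/oem... directories in the log (Ignition's own mount handling is more involved, and this needs root to run):

    package main

    import (
        "fmt"
        "os"
        "syscall"
    )

    func main() {
        // Hypothetical temporary mount point mirroring the /mnt/oem... dirs above.
        target, err := os.MkdirTemp("/mnt", "oem")
        if err != nil {
            panic(err)
        }
        defer os.Remove(target)

        device := "/dev/disk/by-label/OEM"

        // Try ext4 first, then fall back to btrfs, as the log shows.
        if err := syscall.Mount(device, target, "ext4", 0, ""); err != nil {
            fmt.Printf("ext4 mount failed (%v), trying btrfs\n", err)
            if err := syscall.Mount(device, target, "btrfs", 0, ""); err != nil {
                panic(err)
            }
        }
        defer syscall.Unmount(target, 0)

        fmt.Println("mounted", device, "at", target)
    }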
Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(1e): [started] processing unit "prepare-critools.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(1e): [finished] processing unit "prepare-critools.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(20): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(21): [started] setting preset to enabled for "prepare-critools.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(22): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(22): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(23): [started] setting preset to enabled for "oem-gce.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(23): [finished] setting preset to enabled for "oem-gce.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(24): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: op(24): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Oct 2 20:45:06.823086 ignition[825]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 20:45:06.823086 ignition[825]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 20:45:06.823086 ignition[825]: INFO : files: files passed Oct 2 20:45:07.225042 kernel: audit: type=1131 audit(1696279506.952:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.225081 kernel: audit: type=1131 audit(1696279507.012:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:45:07.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.051000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.165000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.202000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.026404 systemd[1]: Finished ignition-files.service. Oct 2 20:45:07.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.246019 initrd-setup-root-after-ignition[848]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 20:45:07.251000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.267193 iscsid[688]: iscsid shutting down. Oct 2 20:45:07.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.282044 ignition[825]: INFO : Ignition finished successfully Oct 2 20:45:07.287000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.043228 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 20:45:07.304000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.079098 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 20:45:07.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.080160 systemd[1]: Starting ignition-quench.service... Oct 2 20:45:07.347000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.101410 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 20:45:07.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:45:06.132322 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 20:45:06.132458 systemd[1]: Finished ignition-quench.service. Oct 2 20:45:06.170365 systemd[1]: Reached target ignition-complete.target. Oct 2 20:45:07.417911 ignition[863]: INFO : Ignition 2.14.0 Oct 2 20:45:07.417911 ignition[863]: INFO : Stage: umount Oct 2 20:45:07.417911 ignition[863]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 20:45:07.417911 ignition[863]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 20:45:07.417911 ignition[863]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 20:45:07.417911 ignition[863]: INFO : umount: umount passed Oct 2 20:45:07.417911 ignition[863]: INFO : Ignition finished successfully Oct 2 20:45:07.442000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.509000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.258100 systemd[1]: Starting initrd-parse-etc.service... Oct 2 20:45:06.298445 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 20:45:06.298560 systemd[1]: Finished initrd-parse-etc.service. Oct 2 20:45:06.302270 systemd[1]: Reached target initrd-fs.target. Oct 2 20:45:06.384955 systemd[1]: Reached target initrd.target. Oct 2 20:45:06.405998 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 20:45:07.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.407179 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 20:45:07.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.624000 audit: BPF prog-id=6 op=UNLOAD Oct 2 20:45:06.430275 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 20:45:06.448395 systemd[1]: Starting initrd-cleanup.service... Oct 2 20:45:06.494014 systemd[1]: Stopped target nss-lookup.target. Oct 2 20:45:07.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.506214 systemd[1]: Stopped target remote-cryptsetup.target. 
Oct 2 20:45:07.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.528266 systemd[1]: Stopped target timers.target. Oct 2 20:45:07.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.553240 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Oct 2 20:45:06.553440 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 20:45:06.601343 systemd[1]: Stopped target initrd.target. Oct 2 20:45:07.733000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.659347 systemd[1]: Stopped target basic.target. Oct 2 20:45:06.698215 systemd[1]: Stopped target ignition-complete.target. Oct 2 20:45:06.711214 systemd[1]: Stopped target ignition-diskful.target. Oct 2 20:45:07.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.747215 systemd[1]: Stopped target initrd-root-device.target. Oct 2 20:45:07.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.758209 systemd[1]: Stopped target remote-fs.target. Oct 2 20:45:07.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.795195 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 20:45:06.815200 systemd[1]: Stopped target sysinit.target. Oct 2 20:45:06.831230 systemd[1]: Stopped target local-fs.target. Oct 2 20:45:07.852000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.873196 systemd[1]: Stopped target local-fs-pre.target. Oct 2 20:45:07.867000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.904206 systemd[1]: Stopped target swap.target. Oct 2 20:45:07.883000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:07.883000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:06.943127 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 20:45:06.943322 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 20:45:06.954422 systemd[1]: Stopped target cryptsetup.target. Oct 2 20:45:07.001088 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 20:45:07.001279 systemd[1]: Stopped dracut-initqueue.service. 
Oct 2 20:45:07.951871 systemd-journald[189]: Received SIGTERM from PID 1 (systemd). Oct 2 20:45:07.014357 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 20:45:07.014622 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 20:45:07.053244 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 20:45:07.053413 systemd[1]: Stopped ignition-files.service. Oct 2 20:45:07.075766 systemd[1]: Stopping ignition-mount.service... Oct 2 20:45:07.125386 systemd[1]: Stopping iscsid.service... Oct 2 20:45:07.144876 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 20:45:07.145155 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 20:45:07.168393 systemd[1]: Stopping sysroot-boot.service... Oct 2 20:45:07.189003 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 20:45:07.189247 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 20:45:07.204321 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 20:45:07.204501 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 20:45:07.243495 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 20:45:07.244323 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 20:45:07.244445 systemd[1]: Stopped iscsid.service. Oct 2 20:45:07.253531 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 20:45:07.253635 systemd[1]: Stopped ignition-mount.service. Oct 2 20:45:07.275498 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 20:45:07.275604 systemd[1]: Stopped sysroot-boot.service. Oct 2 20:45:07.289453 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 20:45:07.289633 systemd[1]: Stopped ignition-disks.service. Oct 2 20:45:07.306100 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 20:45:07.306172 systemd[1]: Stopped ignition-kargs.service. Oct 2 20:45:07.324055 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 20:45:07.324129 systemd[1]: Stopped ignition-fetch.service. Oct 2 20:45:07.349099 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 20:45:07.349169 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 20:45:07.364049 systemd[1]: Stopped target paths.target. Oct 2 20:45:07.378976 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 20:45:07.382851 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 20:45:07.395965 systemd[1]: Stopped target slices.target. Oct 2 20:45:07.409932 systemd[1]: Stopped target sockets.target. Oct 2 20:45:07.426010 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 20:45:07.426064 systemd[1]: Closed iscsid.socket. Oct 2 20:45:07.433053 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 20:45:07.433118 systemd[1]: Stopped ignition-setup.service. Oct 2 20:45:07.444114 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 20:45:07.444187 systemd[1]: Stopped initrd-setup-root.service. Oct 2 20:45:07.461242 systemd[1]: Stopping iscsiuio.service... Oct 2 20:45:07.493508 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 20:45:07.493625 systemd[1]: Stopped iscsiuio.service. Oct 2 20:45:07.511320 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 20:45:07.511432 systemd[1]: Finished initrd-cleanup.service. Oct 2 20:45:07.527905 systemd[1]: Stopped target network.target. Oct 2 20:45:07.544010 systemd[1]: iscsiuio.socket: Deactivated successfully. 
Oct 2 20:45:07.544067 systemd[1]: Closed iscsiuio.socket. Oct 2 20:45:07.559202 systemd[1]: Stopping systemd-networkd.service... Oct 2 20:45:07.562796 systemd-networkd[679]: eth0: DHCPv6 lease lost Oct 2 20:45:07.574122 systemd[1]: Stopping systemd-resolved.service... Oct 2 20:45:07.589356 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 20:45:07.589475 systemd[1]: Stopped systemd-resolved.service. Oct 2 20:45:07.601707 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 20:45:07.601857 systemd[1]: Stopped systemd-networkd.service. Oct 2 20:45:07.626577 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 20:45:07.626618 systemd[1]: Closed systemd-networkd.socket. Oct 2 20:45:07.642913 systemd[1]: Stopping network-cleanup.service... Oct 2 20:45:07.655868 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 20:45:07.655976 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 20:45:07.962000 audit: BPF prog-id=9 op=UNLOAD Oct 2 20:45:07.670040 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 20:45:07.670115 systemd[1]: Stopped systemd-sysctl.service. Oct 2 20:45:07.685073 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 20:45:07.685135 systemd[1]: Stopped systemd-modules-load.service. Oct 2 20:45:07.701066 systemd[1]: Stopping systemd-udevd.service... Oct 2 20:45:07.716393 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 20:45:07.717038 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 20:45:07.717191 systemd[1]: Stopped systemd-udevd.service. Oct 2 20:45:07.736550 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 20:45:07.736637 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 20:45:07.750029 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 20:45:07.750086 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 20:45:07.766011 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 20:45:07.766078 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 20:45:07.782078 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 20:45:07.782143 systemd[1]: Stopped dracut-cmdline.service. Oct 2 20:45:07.798078 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 20:45:07.798141 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 20:45:07.815012 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 20:45:07.838857 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 20:45:07.838987 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 20:45:07.854424 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 20:45:07.854554 systemd[1]: Stopped network-cleanup.service. Oct 2 20:45:07.869220 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 20:45:07.869329 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 20:45:07.885259 systemd[1]: Reached target initrd-switch-root.target. Oct 2 20:45:07.900892 systemd[1]: Starting initrd-switch-root.service... Oct 2 20:45:07.917378 systemd[1]: Switching root. Oct 2 20:45:07.966278 systemd-journald[189]: Journal stopped Oct 2 20:45:12.681691 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 20:45:12.681813 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 2 20:45:12.681839 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 20:45:12.681868 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 20:45:12.681891 kernel: SELinux: policy capability open_perms=1 Oct 2 20:45:12.681919 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 20:45:12.681953 kernel: SELinux: policy capability always_check_network=0 Oct 2 20:45:12.681984 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 20:45:12.682008 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 20:45:12.682037 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 20:45:12.682060 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 20:45:12.682086 systemd[1]: Successfully loaded SELinux policy in 115.640ms. Oct 2 20:45:12.682134 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.436ms. Oct 2 20:45:12.682160 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 20:45:12.682189 systemd[1]: Detected virtualization kvm. Oct 2 20:45:12.682213 systemd[1]: Detected architecture x86-64. Oct 2 20:45:12.682237 systemd[1]: Detected first boot. Oct 2 20:45:12.682262 systemd[1]: Initializing machine ID from VM UUID. Oct 2 20:45:12.682284 systemd[1]: Populated /etc with preset unit settings. Oct 2 20:45:12.682308 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:45:12.682349 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:45:12.682382 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:45:12.682412 kernel: kauditd_printk_skb: 39 callbacks suppressed Oct 2 20:45:12.682435 kernel: audit: type=1334 audit(1696279511.795:86): prog-id=12 op=LOAD Oct 2 20:45:12.682457 kernel: audit: type=1334 audit(1696279511.795:87): prog-id=3 op=UNLOAD Oct 2 20:45:12.682480 kernel: audit: type=1334 audit(1696279511.807:88): prog-id=13 op=LOAD Oct 2 20:45:12.682502 kernel: audit: type=1334 audit(1696279511.821:89): prog-id=14 op=LOAD Oct 2 20:45:12.682523 kernel: audit: type=1334 audit(1696279511.821:90): prog-id=4 op=UNLOAD Oct 2 20:45:12.682548 kernel: audit: type=1334 audit(1696279511.821:91): prog-id=5 op=UNLOAD Oct 2 20:45:12.682574 kernel: audit: type=1334 audit(1696279511.828:92): prog-id=15 op=LOAD Oct 2 20:45:12.682597 kernel: audit: type=1334 audit(1696279511.828:93): prog-id=12 op=UNLOAD Oct 2 20:45:12.682619 kernel: audit: type=1334 audit(1696279511.835:94): prog-id=16 op=LOAD Oct 2 20:45:12.682641 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 20:45:12.682665 kernel: audit: type=1334 audit(1696279511.842:95): prog-id=17 op=LOAD Oct 2 20:45:12.682688 systemd[1]: Stopped initrd-switch-root.service. Oct 2 20:45:12.682711 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 20:45:12.682750 systemd[1]: Created slice system-addon\x2dconfig.slice. 
Oct 2 20:45:12.682773 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 20:45:12.682801 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 20:45:12.682825 systemd[1]: Created slice system-getty.slice. Oct 2 20:45:12.682848 systemd[1]: Created slice system-modprobe.slice. Oct 2 20:45:12.682873 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 20:45:12.682896 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 20:45:12.682920 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 20:45:12.682944 systemd[1]: Created slice user.slice. Oct 2 20:45:12.682973 systemd[1]: Started systemd-ask-password-console.path. Oct 2 20:45:12.682997 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 20:45:12.683021 systemd[1]: Set up automount boot.automount. Oct 2 20:45:12.683047 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 20:45:12.683074 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 20:45:12.683103 systemd[1]: Stopped target initrd-fs.target. Oct 2 20:45:12.683126 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 20:45:12.683147 systemd[1]: Reached target integritysetup.target. Oct 2 20:45:12.683171 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 20:45:12.683200 systemd[1]: Reached target remote-fs.target. Oct 2 20:45:12.683223 systemd[1]: Reached target slices.target. Oct 2 20:45:12.683247 systemd[1]: Reached target swap.target. Oct 2 20:45:12.683271 systemd[1]: Reached target torcx.target. Oct 2 20:45:12.683294 systemd[1]: Reached target veritysetup.target. Oct 2 20:45:12.683319 systemd[1]: Listening on systemd-coredump.socket. Oct 2 20:45:12.683350 systemd[1]: Listening on systemd-initctl.socket. Oct 2 20:45:12.683373 systemd[1]: Listening on systemd-networkd.socket. Oct 2 20:45:12.683397 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 20:45:12.683420 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 20:45:12.683448 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 20:45:12.683473 systemd[1]: Mounting dev-hugepages.mount... Oct 2 20:45:12.683497 systemd[1]: Mounting dev-mqueue.mount... Oct 2 20:45:12.683521 systemd[1]: Mounting media.mount... Oct 2 20:45:12.683542 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 20:45:12.683563 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 20:45:12.683590 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 20:45:12.683614 systemd[1]: Mounting tmp.mount... Oct 2 20:45:12.683630 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 20:45:12.683648 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 20:45:12.683663 systemd[1]: Starting kmod-static-nodes.service... Oct 2 20:45:12.683678 systemd[1]: Starting modprobe@configfs.service... Oct 2 20:45:12.683692 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 20:45:12.683706 systemd[1]: Starting modprobe@drm.service... Oct 2 20:45:12.683767 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 20:45:12.683792 systemd[1]: Starting modprobe@fuse.service... Oct 2 20:45:12.683813 systemd[1]: Starting modprobe@loop.service... Oct 2 20:45:12.683828 kernel: fuse: init (API version 7.34) Oct 2 20:45:12.683847 kernel: loop: module loaded Oct 2 20:45:12.683862 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Oct 2 20:45:12.683877 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Oct 2 20:45:12.683891 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 20:45:12.683906 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 20:45:12.683921 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 20:45:12.683935 systemd[1]: Stopped systemd-journald.service. Oct 2 20:45:12.683950 systemd[1]: Starting systemd-journald.service... Oct 2 20:45:12.683965 systemd[1]: Starting systemd-modules-load.service... Oct 2 20:45:12.683983 systemd[1]: Starting systemd-network-generator.service... Oct 2 20:45:12.684013 systemd-journald[987]: Journal started Oct 2 20:45:12.684082 systemd-journald[987]: Runtime Journal (/run/log/journal/e9c9bbac379ff0533953ac7b36188581) is 8.0M, max 148.8M, 140.8M free. Oct 2 20:45:08.273000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 20:45:08.420000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:45:08.420000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 20:45:08.420000 audit: BPF prog-id=10 op=LOAD Oct 2 20:45:08.420000 audit: BPF prog-id=10 op=UNLOAD Oct 2 20:45:08.420000 audit: BPF prog-id=11 op=LOAD Oct 2 20:45:08.420000 audit: BPF prog-id=11 op=UNLOAD Oct 2 20:45:11.795000 audit: BPF prog-id=12 op=LOAD Oct 2 20:45:11.795000 audit: BPF prog-id=3 op=UNLOAD Oct 2 20:45:11.807000 audit: BPF prog-id=13 op=LOAD Oct 2 20:45:11.821000 audit: BPF prog-id=14 op=LOAD Oct 2 20:45:11.821000 audit: BPF prog-id=4 op=UNLOAD Oct 2 20:45:11.821000 audit: BPF prog-id=5 op=UNLOAD Oct 2 20:45:11.828000 audit: BPF prog-id=15 op=LOAD Oct 2 20:45:11.828000 audit: BPF prog-id=12 op=UNLOAD Oct 2 20:45:11.835000 audit: BPF prog-id=16 op=LOAD Oct 2 20:45:11.842000 audit: BPF prog-id=17 op=LOAD Oct 2 20:45:11.842000 audit: BPF prog-id=13 op=UNLOAD Oct 2 20:45:11.842000 audit: BPF prog-id=14 op=UNLOAD Oct 2 20:45:11.844000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:11.884000 audit: BPF prog-id=15 op=UNLOAD Oct 2 20:45:11.893000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:11.893000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:45:12.632000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.633000 audit: BPF prog-id=18 op=LOAD Oct 2 20:45:12.633000 audit: BPF prog-id=19 op=LOAD Oct 2 20:45:12.633000 audit: BPF prog-id=20 op=LOAD Oct 2 20:45:12.633000 audit: BPF prog-id=16 op=UNLOAD Oct 2 20:45:12.633000 audit: BPF prog-id=17 op=UNLOAD Oct 2 20:45:12.677000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 20:45:12.677000 audit[987]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff98159bd0 a2=4000 a3=7fff98159c6c items=0 ppid=1 pid=987 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:12.677000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 20:45:11.794746 systemd[1]: Queued start job for default target multi-user.target. Oct 2 20:45:08.604921 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:45:11.845328 systemd[1]: systemd-journald.service: Deactivated successfully. 
Oct 2 20:45:08.606067 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 20:45:08.606092 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 20:45:08.606153 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 20:45:08.606167 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 20:45:08.606213 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 20:45:08.606229 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 20:45:08.606465 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 20:45:08.606520 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 20:45:08.606537 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 20:45:08.607556 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 20:45:08.607599 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 20:45:08.607621 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 20:45:08.607639 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 20:45:08.607659 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 20:45:08.607675 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:08Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 20:45:11.203888 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:11Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:45:11.204213 /usr/lib/systemd/system-generators/torcx-generator[896]: 
time="2023-10-02T20:45:11Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:45:11.204345 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:11Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:45:11.204570 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:11Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 20:45:11.204628 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:11Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 20:45:11.204699 /usr/lib/systemd/system-generators/torcx-generator[896]: time="2023-10-02T20:45:11Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 20:45:12.692934 systemd[1]: Starting systemd-remount-fs.service... Oct 2 20:45:12.707764 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 20:45:12.721749 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 20:45:12.727769 systemd[1]: Stopped verity-setup.service. Oct 2 20:45:12.732000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.746798 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 20:45:12.755784 systemd[1]: Started systemd-journald.service. Oct 2 20:45:12.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.765340 systemd[1]: Mounted dev-hugepages.mount. Oct 2 20:45:12.773091 systemd[1]: Mounted dev-mqueue.mount. Oct 2 20:45:12.780092 systemd[1]: Mounted media.mount. Oct 2 20:45:12.787041 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 20:45:12.796008 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 20:45:12.805067 systemd[1]: Mounted tmp.mount. Oct 2 20:45:12.812189 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 20:45:12.819000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.821239 systemd[1]: Finished kmod-static-nodes.service. Oct 2 20:45:12.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Oct 2 20:45:12.830279 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 20:45:12.830492 systemd[1]: Finished modprobe@configfs.service. Oct 2 20:45:12.837000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.837000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.839487 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 20:45:12.839708 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 20:45:12.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.848310 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 20:45:12.848535 systemd[1]: Finished modprobe@drm.service. Oct 2 20:45:12.855000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.855000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.857295 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 20:45:12.857507 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 20:45:12.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.866282 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 20:45:12.866479 systemd[1]: Finished modprobe@fuse.service. Oct 2 20:45:12.873000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.873000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.875284 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 20:45:12.875482 systemd[1]: Finished modprobe@loop.service. Oct 2 20:45:12.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:45:12.882000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.884311 systemd[1]: Finished systemd-modules-load.service. Oct 2 20:45:12.891000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.893351 systemd[1]: Finished systemd-network-generator.service. Oct 2 20:45:12.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.903354 systemd[1]: Finished systemd-remount-fs.service. Oct 2 20:45:12.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.912350 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 20:45:12.919000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.921694 systemd[1]: Reached target network-pre.target. Oct 2 20:45:12.931385 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 20:45:12.942376 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 20:45:12.950901 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 20:45:12.953764 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 20:45:12.962712 systemd[1]: Starting systemd-journal-flush.service... Oct 2 20:45:12.972747 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 20:45:12.973193 systemd-journald[987]: Time spent on flushing to /var/log/journal/e9c9bbac379ff0533953ac7b36188581 is 90.099ms for 1148 entries. Oct 2 20:45:12.973193 systemd-journald[987]: System Journal (/var/log/journal/e9c9bbac379ff0533953ac7b36188581) is 8.0M, max 584.8M, 576.8M free. Oct 2 20:45:13.102710 systemd-journald[987]: Received client request to flush runtime journal. Oct 2 20:45:13.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:13.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:13.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:12.974683 systemd[1]: Starting systemd-random-seed.service... Oct 2 20:45:12.988358 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 20:45:12.990255 systemd[1]: Starting systemd-sysctl.service... 
Oct 2 20:45:12.999745 systemd[1]: Starting systemd-sysusers.service... Oct 2 20:45:13.104217 udevadm[1001]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 20:45:13.008538 systemd[1]: Starting systemd-udev-settle.service... Oct 2 20:45:13.019503 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 20:45:13.028033 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 20:45:13.037291 systemd[1]: Finished systemd-random-seed.service. Oct 2 20:45:13.046294 systemd[1]: Finished systemd-sysctl.service. Oct 2 20:45:13.058591 systemd[1]: Reached target first-boot-complete.target. Oct 2 20:45:13.095884 systemd[1]: Finished systemd-sysusers.service. Oct 2 20:45:13.104151 systemd[1]: Finished systemd-journal-flush.service. Oct 2 20:45:13.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:13.694545 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 20:45:13.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:13.701000 audit: BPF prog-id=21 op=LOAD Oct 2 20:45:13.702000 audit: BPF prog-id=22 op=LOAD Oct 2 20:45:13.702000 audit: BPF prog-id=7 op=UNLOAD Oct 2 20:45:13.702000 audit: BPF prog-id=8 op=UNLOAD Oct 2 20:45:13.704764 systemd[1]: Starting systemd-udevd.service... Oct 2 20:45:13.727564 systemd-udevd[1004]: Using default interface naming scheme 'v252'. Oct 2 20:45:13.779837 systemd[1]: Started systemd-udevd.service. Oct 2 20:45:13.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:13.790000 audit: BPF prog-id=23 op=LOAD Oct 2 20:45:13.793563 systemd[1]: Starting systemd-networkd.service... Oct 2 20:45:13.804000 audit: BPF prog-id=24 op=LOAD Oct 2 20:45:13.804000 audit: BPF prog-id=25 op=LOAD Oct 2 20:45:13.804000 audit: BPF prog-id=26 op=LOAD Oct 2 20:45:13.807666 systemd[1]: Starting systemd-userdbd.service... Oct 2 20:45:13.853435 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 20:45:13.890555 systemd[1]: Started systemd-userdbd.service. Oct 2 20:45:13.897000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:45:13.951843 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 20:45:13.981763 kernel: ACPI: button: Power Button [PWRF] Oct 2 20:45:13.996487 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Oct 2 20:45:14.019000 audit[1013]: AVC avc: denied { confidentiality } for pid=1013 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 20:45:14.049816 kernel: ACPI: button: Sleep Button [SLPF] Oct 2 20:45:14.019000 audit[1013]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5598b63a5b70 a1=32194 a2=7fb484a76bc5 a3=5 items=106 ppid=1004 pid=1013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:14.057497 systemd-networkd[1018]: lo: Link UP Oct 2 20:45:14.057506 systemd-networkd[1018]: lo: Gained carrier Oct 2 20:45:14.058229 systemd-networkd[1018]: Enumeration completed Oct 2 20:45:14.058384 systemd[1]: Started systemd-networkd.service. Oct 2 20:45:14.058665 systemd-networkd[1018]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Oct 2 20:45:14.019000 audit: CWD cwd="/" Oct 2 20:45:14.019000 audit: PATH item=0 name=(null) inode=14619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=1 name=(null) inode=14620 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.060797 systemd-networkd[1018]: eth0: Link UP Oct 2 20:45:14.060805 systemd-networkd[1018]: eth0: Gained carrier Oct 2 20:45:14.019000 audit: PATH item=2 name=(null) inode=14619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=3 name=(null) inode=14621 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=4 name=(null) inode=14619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=5 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=6 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=7 name=(null) inode=14623 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=8 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=9 name=(null) inode=14624 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=10 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=11 name=(null) inode=14625 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=12 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=13 name=(null) inode=14626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=14 name=(null) inode=14622 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=15 name=(null) inode=14627 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=16 name=(null) inode=14619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=17 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=18 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=19 name=(null) inode=14629 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=20 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=21 name=(null) inode=14630 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=22 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=23 name=(null) inode=14631 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=24 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=25 name=(null) inode=14632 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=26 name=(null) inode=14628 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=27 name=(null) inode=14633 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=28 name=(null) inode=14619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=29 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=30 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=31 name=(null) inode=14635 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=32 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=33 name=(null) inode=14636 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=34 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:45:14.019000 audit: PATH item=35 name=(null) inode=14637 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=36 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=37 name=(null) inode=14638 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=38 name=(null) inode=14634 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=39 name=(null) inode=14639 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=40 name=(null) inode=14619 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=41 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=42 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=43 name=(null) inode=14641 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=44 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=45 name=(null) inode=14642 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=46 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=47 name=(null) inode=14643 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=48 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=49 name=(null) inode=14644 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=50 name=(null) inode=14640 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=51 name=(null) inode=14645 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=52 name=(null) inode=40 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=53 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=54 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=55 name=(null) inode=14647 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=56 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=57 name=(null) inode=14648 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.071909 systemd-networkd[1018]: eth0: DHCPv4 address 10.128.0.25/32, gateway 10.128.0.1 acquired from 169.254.169.254 Oct 2 20:45:14.019000 audit: PATH item=58 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=59 name=(null) inode=14649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=60 name=(null) inode=14649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=61 name=(null) inode=14650 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=62 name=(null) inode=14649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=63 name=(null) inode=14651 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=64 name=(null) inode=14649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=65 name=(null) inode=14652 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=66 name=(null) inode=14649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH 
item=67 name=(null) inode=14653 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=68 name=(null) inode=14649 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=69 name=(null) inode=14654 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=70 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=71 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=72 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=73 name=(null) inode=14656 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=74 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=75 name=(null) inode=14657 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=76 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=77 name=(null) inode=14658 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=78 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=79 name=(null) inode=14659 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=80 name=(null) inode=14655 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=81 name=(null) inode=14660 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=82 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=83 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=84 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=85 name=(null) inode=14662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=86 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=87 name=(null) inode=14663 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=88 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=89 name=(null) inode=14664 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=90 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=91 name=(null) inode=14665 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=92 name=(null) inode=14661 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=93 name=(null) inode=14666 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=94 name=(null) inode=14646 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=95 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=96 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=97 name=(null) inode=14668 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=98 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=99 name=(null) inode=14669 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=100 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=101 name=(null) inode=14670 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=102 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=103 name=(null) inode=14671 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=104 name=(null) inode=14667 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PATH item=105 name=(null) inode=14672 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 20:45:14.019000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 20:45:14.109758 kernel: EDAC MC: Ver: 3.0.0 Oct 2 20:45:14.133014 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1007) Oct 2 20:45:14.150760 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Oct 2 20:45:14.171795 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input4 Oct 2 20:45:14.190769 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 20:45:14.192428 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 20:45:14.207249 systemd[1]: Finished systemd-udev-settle.service. Oct 2 20:45:14.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:14.217512 systemd[1]: Starting lvm2-activation-early.service... Oct 2 20:45:14.250481 lvm[1041]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 20:45:14.283108 systemd[1]: Finished lvm2-activation-early.service. Oct 2 20:45:14.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:14.292112 systemd[1]: Reached target cryptsetup.target. Oct 2 20:45:14.302378 systemd[1]: Starting lvm2-activation.service... Oct 2 20:45:14.308661 lvm[1042]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 20:45:14.337061 systemd[1]: Finished lvm2-activation.service. Oct 2 20:45:14.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:14.346052 systemd[1]: Reached target local-fs-pre.target. 
Oct 2 20:45:14.354912 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 20:45:14.354967 systemd[1]: Reached target local-fs.target. Oct 2 20:45:14.363921 systemd[1]: Reached target machines.target. Oct 2 20:45:14.373504 systemd[1]: Starting ldconfig.service... Oct 2 20:45:14.382255 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 20:45:14.382352 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:45:14.384193 systemd[1]: Starting systemd-boot-update.service... Oct 2 20:45:14.392540 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 20:45:14.403707 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 20:45:14.404175 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 20:45:14.404284 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 20:45:14.406120 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 20:45:14.406910 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1044 (bootctl) Oct 2 20:45:14.410447 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 20:45:14.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:14.427811 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 20:45:14.460232 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Oct 2 20:45:14.471769 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 20:45:14.489946 systemd-tmpfiles[1048]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 20:45:14.570089 systemd-fsck[1053]: fsck.fat 4.2 (2021-01-31) Oct 2 20:45:14.570089 systemd-fsck[1053]: /dev/sda1: 789 files, 115069/258078 clusters Oct 2 20:45:14.574019 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 20:45:14.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:14.589341 systemd[1]: Mounting boot.mount... Oct 2 20:45:14.654363 systemd[1]: Mounted boot.mount. Oct 2 20:45:14.678428 systemd[1]: Finished systemd-boot-update.service. Oct 2 20:45:14.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:14.829334 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 20:45:14.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:14.840560 systemd[1]: Starting audit-rules.service... 
Oct 2 20:45:14.849339 systemd[1]: Starting clean-ca-certificates.service... Oct 2 20:45:14.860874 systemd[1]: Starting oem-gce-enable-oslogin.service... Oct 2 20:45:14.869400 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 20:45:14.877000 audit: BPF prog-id=27 op=LOAD Oct 2 20:45:14.881323 systemd[1]: Starting systemd-resolved.service... Oct 2 20:45:14.887000 audit: BPF prog-id=28 op=LOAD Oct 2 20:45:14.890440 systemd[1]: Starting systemd-timesyncd.service... Oct 2 20:45:14.898817 systemd[1]: Starting systemd-update-utmp.service... Oct 2 20:45:14.906937 systemd[1]: Finished clean-ca-certificates.service. Oct 2 20:45:14.905000 audit[1074]: SYSTEM_BOOT pid=1074 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 20:45:14.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:14.918061 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 20:45:14.922585 systemd[1]: Finished systemd-update-utmp.service. Oct 2 20:45:14.929000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:14.945590 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Oct 2 20:45:14.945889 systemd[1]: Finished oem-gce-enable-oslogin.service. Oct 2 20:45:14.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:14.952000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:15.037867 systemd[1]: Started systemd-timesyncd.service. Oct 2 20:45:15.039662 systemd-timesyncd[1072]: Contacted time server 169.254.169.254:123 (169.254.169.254). Oct 2 20:45:15.040205 systemd-timesyncd[1072]: Initial clock synchronization to Mon 2023-10-02 20:45:15.277527 UTC. Oct 2 20:45:15.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:15.046191 systemd[1]: Reached target time-set.target. Oct 2 20:45:15.049998 systemd-resolved[1070]: Positive Trust Anchors: Oct 2 20:45:15.050382 systemd-resolved[1070]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 20:45:15.050666 systemd-resolved[1070]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 20:45:15.055296 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 20:45:15.056886 augenrules[1089]: No rules Oct 2 20:45:15.055000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 20:45:15.055000 audit[1089]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffc313fe2a0 a2=420 a3=0 items=0 ppid=1058 pid=1089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:15.055000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 20:45:15.065345 systemd[1]: Finished audit-rules.service. Oct 2 20:45:15.094428 systemd-resolved[1070]: Defaulting to hostname 'linux'. Oct 2 20:45:15.097374 systemd[1]: Started systemd-resolved.service. Oct 2 20:45:15.105990 systemd[1]: Reached target network.target. Oct 2 20:45:15.114893 systemd[1]: Reached target nss-lookup.target. Oct 2 20:45:15.199549 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 20:45:15.200370 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 20:45:15.278746 ldconfig[1043]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 20:45:15.284112 systemd[1]: Finished ldconfig.service. Oct 2 20:45:15.292738 systemd[1]: Starting systemd-update-done.service... Oct 2 20:45:15.302267 systemd[1]: Finished systemd-update-done.service. Oct 2 20:45:15.311077 systemd[1]: Reached target sysinit.target. Oct 2 20:45:15.320035 systemd[1]: Started motdgen.path. Oct 2 20:45:15.326961 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 20:45:15.337149 systemd[1]: Started logrotate.timer. Oct 2 20:45:15.344098 systemd[1]: Started mdadm.timer. Oct 2 20:45:15.350933 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 20:45:15.358912 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 20:45:15.358979 systemd[1]: Reached target paths.target. Oct 2 20:45:15.365879 systemd[1]: Reached target timers.target. Oct 2 20:45:15.373318 systemd[1]: Listening on dbus.socket. Oct 2 20:45:15.373873 systemd-networkd[1018]: eth0: Gained IPv6LL Oct 2 20:45:15.382293 systemd[1]: Starting docker.socket... Oct 2 20:45:15.392977 systemd[1]: Listening on sshd.socket. Oct 2 20:45:15.400015 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:45:15.400758 systemd[1]: Listening on docker.socket. Oct 2 20:45:15.408034 systemd[1]: Reached target sockets.target. Oct 2 20:45:15.416881 systemd[1]: Reached target basic.target. 
Oct 2 20:45:15.423955 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 20:45:15.424001 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 20:45:15.425568 systemd[1]: Starting containerd.service... Oct 2 20:45:15.434235 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 20:45:15.444636 systemd[1]: Starting dbus.service... Oct 2 20:45:15.454226 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 20:45:15.462595 systemd[1]: Starting extend-filesystems.service... Oct 2 20:45:15.468799 jq[1100]: false Oct 2 20:45:15.470932 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 20:45:15.472860 systemd[1]: Starting motdgen.service... Oct 2 20:45:15.479854 systemd[1]: Starting oem-gce.service... Oct 2 20:45:15.490008 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 20:45:15.498604 systemd[1]: Starting prepare-critools.service... Oct 2 20:45:15.507616 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 20:45:15.518780 systemd[1]: Starting sshd-keygen.service... Oct 2 20:45:15.533182 systemd[1]: Starting systemd-logind.service... Oct 2 20:45:15.539170 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 20:45:15.539277 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Oct 2 20:45:15.540121 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 20:45:15.541388 systemd[1]: Starting update-engine.service... Oct 2 20:45:15.551776 systemd[1]: Starting update-ssh-keys-after-ignition.service... Oct 2 20:45:15.564112 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 20:45:15.570190 jq[1123]: true Oct 2 20:45:15.564418 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 20:45:15.571297 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 20:45:15.571585 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Oct 2 20:45:15.597047 tar[1126]: ./ Oct 2 20:45:15.597047 tar[1126]: ./macvlan Oct 2 20:45:15.599795 mkfs.ext4[1131]: mke2fs 1.46.5 (30-Dec-2021) Oct 2 20:45:15.606412 mkfs.ext4[1131]: Discarding device blocks: done Oct 2 20:45:15.606412 mkfs.ext4[1131]: Creating filesystem with 262144 4k blocks and 65536 inodes Oct 2 20:45:15.606412 mkfs.ext4[1131]: Filesystem UUID: a22f4bca-a847-47db-b889-e5b675703f4f Oct 2 20:45:15.606412 mkfs.ext4[1131]: Superblock backups stored on blocks: Oct 2 20:45:15.606412 mkfs.ext4[1131]: 32768, 98304, 163840, 229376 Oct 2 20:45:15.606412 mkfs.ext4[1131]: Allocating group tables: done Oct 2 20:45:15.606412 mkfs.ext4[1131]: Writing inode tables: done Oct 2 20:45:15.606412 mkfs.ext4[1131]: Creating journal (8192 blocks): done Oct 2 20:45:15.619429 extend-filesystems[1101]: Found sda Oct 2 20:45:15.627028 mkfs.ext4[1131]: Writing superblocks and filesystem accounting information: done Oct 2 20:45:15.631818 jq[1129]: true Oct 2 20:45:15.641337 extend-filesystems[1101]: Found sda1 Oct 2 20:45:15.648918 extend-filesystems[1101]: Found sda2 Oct 2 20:45:15.648918 extend-filesystems[1101]: Found sda3 Oct 2 20:45:15.648918 extend-filesystems[1101]: Found usr Oct 2 20:45:15.648918 extend-filesystems[1101]: Found sda4 Oct 2 20:45:15.648918 extend-filesystems[1101]: Found sda6 Oct 2 20:45:15.648918 extend-filesystems[1101]: Found sda7 Oct 2 20:45:15.648918 extend-filesystems[1101]: Found sda9 Oct 2 20:45:15.648918 extend-filesystems[1101]: Checking size of /dev/sda9 Oct 2 20:45:15.768835 kernel: loop0: detected capacity change from 0 to 2097152 Oct 2 20:45:15.768923 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Oct 2 20:45:15.769069 umount[1137]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Oct 2 20:45:15.769234 extend-filesystems[1101]: Resized partition /dev/sda9 Oct 2 20:45:15.685203 dbus-daemon[1099]: [system] SELinux support is enabled Oct 2 20:45:15.659916 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 20:45:15.780664 tar[1127]: crictl Oct 2 20:45:15.781345 extend-filesystems[1151]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 20:45:15.698886 dbus-daemon[1099]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1018 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 2 20:45:15.660168 systemd[1]: Finished motdgen.service. Oct 2 20:45:15.718622 dbus-daemon[1099]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 2 20:45:15.685452 systemd[1]: Started dbus.service. Oct 2 20:45:15.703591 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 20:45:15.703630 systemd[1]: Reached target system-config.target. Oct 2 20:45:15.712526 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Oct 2 20:45:15.712561 systemd[1]: Reached target user-config.target. Oct 2 20:45:15.738293 systemd[1]: Starting systemd-hostnamed.service... 
Oct 2 20:45:15.798957 update_engine[1122]: I1002 20:45:15.798895 1122 main.cc:92] Flatcar Update Engine starting Oct 2 20:45:15.806695 systemd[1]: Started update-engine.service. Oct 2 20:45:15.807473 update_engine[1122]: I1002 20:45:15.807111 1122 update_check_scheduler.cc:74] Next update check in 4m39s Oct 2 20:45:15.817584 systemd[1]: Started locksmithd.service. Oct 2 20:45:15.832771 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Oct 2 20:45:15.848755 kernel: EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 20:45:15.886364 extend-filesystems[1151]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 2 20:45:15.886364 extend-filesystems[1151]: old_desc_blocks = 1, new_desc_blocks = 2 Oct 2 20:45:15.886364 extend-filesystems[1151]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Oct 2 20:45:15.948998 tar[1126]: ./static Oct 2 20:45:15.949059 extend-filesystems[1101]: Resized filesystem in /dev/sda9 Oct 2 20:45:15.957941 bash[1167]: Updated "/home/core/.ssh/authorized_keys" Oct 2 20:45:15.887501 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 20:45:15.887794 systemd[1]: Finished extend-filesystems.service. Oct 2 20:45:15.917490 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 20:45:15.979478 env[1130]: time="2023-10-02T20:45:15.979396065Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 20:45:15.986882 systemd-logind[1121]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 20:45:15.986929 systemd-logind[1121]: Watching system buttons on /dev/input/event2 (Sleep Button) Oct 2 20:45:15.986963 systemd-logind[1121]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 20:45:15.991068 systemd-logind[1121]: New seat seat0. Oct 2 20:45:15.995356 systemd[1]: Started systemd-logind.service. Oct 2 20:45:16.005084 tar[1126]: ./vlan Oct 2 20:45:16.086740 dbus-daemon[1099]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 2 20:45:16.092899 systemd[1]: Started systemd-hostnamed.service. Oct 2 20:45:16.101090 dbus-daemon[1099]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1153 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 2 20:45:16.106904 systemd[1]: Starting polkit.service... 
Oct 2 20:45:16.147099 coreos-metadata[1098]: Oct 02 20:45:16.146 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Oct 2 20:45:16.151073 coreos-metadata[1098]: Oct 02 20:45:16.150 INFO Fetch failed with 404: resource not found Oct 2 20:45:16.151073 coreos-metadata[1098]: Oct 02 20:45:16.150 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Oct 2 20:45:16.152027 coreos-metadata[1098]: Oct 02 20:45:16.151 INFO Fetch successful Oct 2 20:45:16.152027 coreos-metadata[1098]: Oct 02 20:45:16.151 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Oct 2 20:45:16.152797 coreos-metadata[1098]: Oct 02 20:45:16.152 INFO Fetch failed with 404: resource not found Oct 2 20:45:16.153012 coreos-metadata[1098]: Oct 02 20:45:16.152 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Oct 2 20:45:16.153795 coreos-metadata[1098]: Oct 02 20:45:16.153 INFO Fetch failed with 404: resource not found Oct 2 20:45:16.153795 coreos-metadata[1098]: Oct 02 20:45:16.153 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Oct 2 20:45:16.155225 coreos-metadata[1098]: Oct 02 20:45:16.154 INFO Fetch successful Oct 2 20:45:16.157375 unknown[1098]: wrote ssh authorized keys file for user: core Oct 2 20:45:16.176508 env[1130]: time="2023-10-02T20:45:16.176454494Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 20:45:16.176716 env[1130]: time="2023-10-02T20:45:16.176613850Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:45:16.178983 update-ssh-keys[1178]: Updated "/home/core/.ssh/authorized_keys" Oct 2 20:45:16.180036 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 20:45:16.187540 env[1130]: time="2023-10-02T20:45:16.187483105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:45:16.187666 env[1130]: time="2023-10-02T20:45:16.187553963Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:45:16.188085 env[1130]: time="2023-10-02T20:45:16.188020318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:45:16.188085 env[1130]: time="2023-10-02T20:45:16.188059093Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 20:45:16.188254 env[1130]: time="2023-10-02T20:45:16.188102711Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 20:45:16.188254 env[1130]: time="2023-10-02T20:45:16.188121532Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 20:45:16.188356 env[1130]: time="2023-10-02T20:45:16.188287761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." 
type=io.containerd.snapshotter.v1 Oct 2 20:45:16.188786 env[1130]: time="2023-10-02T20:45:16.188753450Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 20:45:16.189094 env[1130]: time="2023-10-02T20:45:16.189056447Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 20:45:16.189185 env[1130]: time="2023-10-02T20:45:16.189094806Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 20:45:16.189250 env[1130]: time="2023-10-02T20:45:16.189215352Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 20:45:16.189250 env[1130]: time="2023-10-02T20:45:16.189237835Z" level=info msg="metadata content store policy set" policy=shared Oct 2 20:45:16.196785 env[1130]: time="2023-10-02T20:45:16.196729675Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 20:45:16.196959 env[1130]: time="2023-10-02T20:45:16.196935068Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 20:45:16.197061 env[1130]: time="2023-10-02T20:45:16.197041498Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 20:45:16.197207 env[1130]: time="2023-10-02T20:45:16.197186887Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 20:45:16.197399 env[1130]: time="2023-10-02T20:45:16.197376487Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 20:45:16.197507 env[1130]: time="2023-10-02T20:45:16.197486525Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 20:45:16.197614 env[1130]: time="2023-10-02T20:45:16.197594419Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 20:45:16.197711 env[1130]: time="2023-10-02T20:45:16.197691942Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 20:45:16.197872 env[1130]: time="2023-10-02T20:45:16.197848997Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 20:45:16.197979 env[1130]: time="2023-10-02T20:45:16.197957556Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 20:45:16.198077 env[1130]: time="2023-10-02T20:45:16.198057379Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 20:45:16.198187 env[1130]: time="2023-10-02T20:45:16.198167135Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 20:45:16.198437 env[1130]: time="2023-10-02T20:45:16.198414523Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Oct 2 20:45:16.198690 env[1130]: time="2023-10-02T20:45:16.198666224Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." 
type=io.containerd.monitor.v1 Oct 2 20:45:16.199364 env[1130]: time="2023-10-02T20:45:16.199336622Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 20:45:16.199509 env[1130]: time="2023-10-02T20:45:16.199486941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 20:45:16.199642 env[1130]: time="2023-10-02T20:45:16.199621596Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 20:45:16.199835 env[1130]: time="2023-10-02T20:45:16.199812840Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 20:45:16.200024 env[1130]: time="2023-10-02T20:45:16.200001703Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 20:45:16.200132 env[1130]: time="2023-10-02T20:45:16.200110778Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 20:45:16.200237 env[1130]: time="2023-10-02T20:45:16.200216456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 20:45:16.200348 env[1130]: time="2023-10-02T20:45:16.200327505Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 20:45:16.200454 env[1130]: time="2023-10-02T20:45:16.200432543Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 20:45:16.200562 env[1130]: time="2023-10-02T20:45:16.200542036Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 20:45:16.200660 env[1130]: time="2023-10-02T20:45:16.200640574Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 20:45:16.200832 env[1130]: time="2023-10-02T20:45:16.200811952Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 20:45:16.201098 env[1130]: time="2023-10-02T20:45:16.201074330Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 20:45:16.201225 env[1130]: time="2023-10-02T20:45:16.201204402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 20:45:16.201325 env[1130]: time="2023-10-02T20:45:16.201305665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 20:45:16.201438 env[1130]: time="2023-10-02T20:45:16.201417469Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 20:45:16.201545 env[1130]: time="2023-10-02T20:45:16.201520727Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 20:45:16.201635 env[1130]: time="2023-10-02T20:45:16.201615942Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 20:45:16.201763 env[1130]: time="2023-10-02T20:45:16.201729060Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 20:45:16.201911 env[1130]: time="2023-10-02T20:45:16.201890587Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Oct 2 20:45:16.202397 env[1130]: time="2023-10-02T20:45:16.202298801Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 20:45:16.207585 env[1130]: time="2023-10-02T20:45:16.203250861Z" level=info msg="Connect containerd service" Oct 2 20:45:16.207585 env[1130]: time="2023-10-02T20:45:16.203314303Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 20:45:16.207585 env[1130]: time="2023-10-02T20:45:16.204279199Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 20:45:16.207585 env[1130]: time="2023-10-02T20:45:16.204816971Z" level=info msg="Start subscribing containerd event" Oct 2 20:45:16.207585 env[1130]: time="2023-10-02T20:45:16.204917453Z" level=info msg="Start recovering state" Oct 2 20:45:16.207585 env[1130]: time="2023-10-02T20:45:16.205013602Z" level=info msg="Start event monitor" Oct 2 20:45:16.207585 env[1130]: time="2023-10-02T20:45:16.205035155Z" level=info msg="Start snapshots syncer" Oct 2 20:45:16.207585 env[1130]: time="2023-10-02T20:45:16.205049577Z" level=info msg="Start cni network conf syncer for default" Oct 2 20:45:16.207585 env[1130]: time="2023-10-02T20:45:16.205062219Z" level=info msg="Start streaming server" Oct 2 20:45:16.220673 env[1130]: time="2023-10-02T20:45:16.220621325Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Oct 2 20:45:16.221325 env[1130]: time="2023-10-02T20:45:16.221298222Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 20:45:16.222146 systemd[1]: Started containerd.service. Oct 2 20:45:16.222827 env[1130]: time="2023-10-02T20:45:16.222797506Z" level=info msg="containerd successfully booted in 0.293381s" Oct 2 20:45:16.264190 polkitd[1177]: Started polkitd version 121 Oct 2 20:45:16.292522 polkitd[1177]: Loading rules from directory /etc/polkit-1/rules.d Oct 2 20:45:16.292639 polkitd[1177]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 2 20:45:16.301578 tar[1126]: ./portmap Oct 2 20:45:16.304971 polkitd[1177]: Finished loading, compiling and executing 2 rules Oct 2 20:45:16.305781 dbus-daemon[1099]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 2 20:45:16.306022 systemd[1]: Started polkit.service. Oct 2 20:45:16.306776 polkitd[1177]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 2 20:45:16.338862 systemd-hostnamed[1153]: Hostname set to (transient) Oct 2 20:45:16.341933 systemd-resolved[1070]: System hostname changed to 'ci-3510-3-0-9cd53a6b4a1a61641818.c.flatcar-212911.internal'. Oct 2 20:45:16.422260 tar[1126]: ./host-local Oct 2 20:45:16.543217 tar[1126]: ./vrf Oct 2 20:45:16.652870 tar[1126]: ./bridge Oct 2 20:45:16.758829 tar[1126]: ./tuning Oct 2 20:45:16.852677 tar[1126]: ./firewall Oct 2 20:45:16.976720 tar[1126]: ./host-device Oct 2 20:45:17.054403 systemd[1]: Finished prepare-critools.service. Oct 2 20:45:17.071920 tar[1126]: ./sbr Oct 2 20:45:17.119548 tar[1126]: ./loopback Oct 2 20:45:17.180702 tar[1126]: ./dhcp Oct 2 20:45:17.444010 tar[1126]: ./ptp Oct 2 20:45:17.556577 tar[1126]: ./ipvlan Oct 2 20:45:17.625169 tar[1126]: ./bandwidth Oct 2 20:45:17.711701 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 20:45:20.094266 sshd_keygen[1133]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 20:45:20.149220 systemd[1]: Finished sshd-keygen.service. Oct 2 20:45:20.154089 locksmithd[1166]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 20:45:20.160289 systemd[1]: Starting issuegen.service... Oct 2 20:45:20.171013 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 20:45:20.171279 systemd[1]: Finished issuegen.service. Oct 2 20:45:20.181288 systemd[1]: Starting systemd-user-sessions.service... Oct 2 20:45:20.191482 systemd[1]: Finished systemd-user-sessions.service. Oct 2 20:45:20.201869 systemd[1]: Started getty@tty1.service. Oct 2 20:45:20.211343 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 20:45:20.220275 systemd[1]: Reached target getty.target. Oct 2 20:45:21.656171 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Oct 2 20:45:23.432776 kernel: loop0: detected capacity change from 0 to 2097152 Oct 2 20:45:23.455226 systemd-nspawn[1208]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Oct 2 20:45:23.455226 systemd-nspawn[1208]: Press ^] three times within 1s to kill container. Oct 2 20:45:23.472789 kernel: EXT4-fs (loop0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Oct 2 20:45:23.554170 systemd[1]: Started oem-gce.service. Oct 2 20:45:23.563530 systemd[1]: Reached target multi-user.target. Oct 2 20:45:23.574010 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 20:45:23.587051 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. 
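The tar[1126] entries above (./static, ./vlan, ./bridge, ./host-local, ...) show prepare-cni-plugins.service unpacking the standard containernetworking plugin binaries, which the containerd CRI configuration logged earlier expects under NetworkPluginBinDir:/opt/cni/bin. A rough equivalent of that step; the tarball name here is hypothetical and the target directory is taken from that CRI config:

    # unpack the CNI plugin binaries where the CRI plugin will look for them
    mkdir -p /opt/cni/bin
    tar -C /opt/cni/bin -xf cni-plugins-linux-amd64.tgz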
Oct 2 20:45:23.587308 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 20:45:23.597105 systemd[1]: Startup finished in 1.032s (kernel) + 8.315s (initrd) + 15.457s (userspace) = 24.806s. Oct 2 20:45:23.627877 systemd-nspawn[1208]: + '[' -e /etc/default/instance_configs.cfg.template ']' Oct 2 20:45:23.627877 systemd-nspawn[1208]: + echo -e '[InstanceSetup]\nset_host_keys = false' Oct 2 20:45:23.628151 systemd-nspawn[1208]: + /usr/bin/google_instance_setup Oct 2 20:45:24.222158 instance-setup[1214]: INFO Running google_set_multiqueue. Oct 2 20:45:24.236012 instance-setup[1214]: INFO Set channels for eth0 to 2. Oct 2 20:45:24.239876 instance-setup[1214]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Oct 2 20:45:24.241199 instance-setup[1214]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Oct 2 20:45:24.241672 instance-setup[1214]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Oct 2 20:45:24.243127 instance-setup[1214]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Oct 2 20:45:24.243563 instance-setup[1214]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Oct 2 20:45:24.245068 instance-setup[1214]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Oct 2 20:45:24.245471 instance-setup[1214]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Oct 2 20:45:24.246893 instance-setup[1214]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Oct 2 20:45:24.258113 instance-setup[1214]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Oct 2 20:45:24.258289 instance-setup[1214]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Oct 2 20:45:24.296948 systemd-nspawn[1208]: + /usr/bin/google_metadata_script_runner --script-type startup Oct 2 20:45:24.638068 startup-script[1245]: INFO Starting startup scripts. Oct 2 20:45:24.652148 startup-script[1245]: INFO No startup scripts found in metadata. Oct 2 20:45:24.652311 startup-script[1245]: INFO Finished running startup scripts. Oct 2 20:45:24.688326 systemd-nspawn[1208]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Oct 2 20:45:24.688326 systemd-nspawn[1208]: + daemon_pids=() Oct 2 20:45:24.688992 systemd-nspawn[1208]: + for d in accounts clock_skew network Oct 2 20:45:24.688992 systemd-nspawn[1208]: + daemon_pids+=($!) Oct 2 20:45:24.688992 systemd-nspawn[1208]: + for d in accounts clock_skew network Oct 2 20:45:24.689199 systemd-nspawn[1208]: + daemon_pids+=($!) Oct 2 20:45:24.689199 systemd-nspawn[1208]: + for d in accounts clock_skew network Oct 2 20:45:24.689314 systemd-nspawn[1208]: + daemon_pids+=($!) Oct 2 20:45:24.689399 systemd-nspawn[1208]: + NOTIFY_SOCKET=/run/systemd/notify Oct 2 20:45:24.689399 systemd-nspawn[1208]: + /usr/bin/systemd-notify --ready Oct 2 20:45:24.689885 systemd-nspawn[1208]: + /usr/bin/google_clock_skew_daemon Oct 2 20:45:24.690246 systemd-nspawn[1208]: + /usr/bin/google_network_daemon Oct 2 20:45:24.690659 systemd-nspawn[1208]: + /usr/bin/google_accounts_daemon Oct 2 20:45:24.739830 systemd-nspawn[1208]: + wait -n 36 37 38 Oct 2 20:45:25.045312 systemd[1]: Created slice system-sshd.slice. Oct 2 20:45:25.049589 systemd[1]: Started sshd@0-10.128.0.25:22-147.75.109.163:45134.service. Oct 2 20:45:25.365169 google-networking[1250]: INFO Starting Google Networking daemon. 
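The systemd-nspawn "+ ..." lines above are a bash trace of the oem-gce container's startup script launching the three GCE guest daemons. Reassembled from that trace (the literal "wait -n 36 37 38" is the same wait with the daemons' PIDs expanded), the relevant portion looks roughly like this:

    trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
    daemon_pids=()
    for d in accounts clock_skew network; do
      "/usr/bin/google_${d}_daemon" &       # google_accounts_daemon, google_clock_skew_daemon, google_network_daemon
      daemon_pids+=($!)
    done
    NOTIFY_SOCKET=/run/systemd/notify       # assumed to be exported to the notify call below
    /usr/bin/systemd-notify --ready         # tell systemd the Type=notify service is up
    wait -n "${daemon_pids[@]}"             # return as soon as any one daemon exits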
Oct 2 20:45:25.389320 sshd[1253]: Accepted publickey for core from 147.75.109.163 port 45134 ssh2: RSA SHA256:YZ55TWlzWgADGjAqFmi8snyQcvYt3mTHwCW+5ys0g/Q Oct 2 20:45:25.392868 sshd[1253]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:45:25.412106 systemd[1]: Created slice user-500.slice. Oct 2 20:45:25.416299 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 20:45:25.421848 systemd-logind[1121]: New session 1 of user core. Oct 2 20:45:25.436328 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 20:45:25.439040 systemd[1]: Starting user@500.service... Oct 2 20:45:25.471648 (systemd)[1263]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:45:25.549038 google-clock-skew[1249]: INFO Starting Google Clock Skew daemon. Oct 2 20:45:25.563531 google-clock-skew[1249]: INFO Clock drift token has changed: 0. Oct 2 20:45:25.571902 systemd-nspawn[1208]: hwclock: Cannot access the Hardware Clock via any known method. Oct 2 20:45:25.572270 systemd-nspawn[1208]: hwclock: Use the --verbose option to see the details of our search for an access method. Oct 2 20:45:25.573567 google-clock-skew[1249]: WARNING Failed to sync system time with hardware clock. Oct 2 20:45:25.596160 groupadd[1270]: group added to /etc/group: name=google-sudoers, GID=1000 Oct 2 20:45:25.600786 groupadd[1270]: group added to /etc/gshadow: name=google-sudoers Oct 2 20:45:25.605103 groupadd[1270]: new group: name=google-sudoers, GID=1000 Oct 2 20:45:25.620955 google-accounts[1248]: INFO Starting Google Accounts daemon. Oct 2 20:45:25.644667 systemd[1263]: Queued start job for default target default.target. Oct 2 20:45:25.645651 systemd[1263]: Reached target paths.target. Oct 2 20:45:25.645689 systemd[1263]: Reached target sockets.target. Oct 2 20:45:25.645711 systemd[1263]: Reached target timers.target. Oct 2 20:45:25.645754 systemd[1263]: Reached target basic.target. Oct 2 20:45:25.645839 systemd[1263]: Reached target default.target. Oct 2 20:45:25.645895 systemd[1263]: Startup finished in 157ms. Oct 2 20:45:25.645929 systemd[1]: Started user@500.service. Oct 2 20:45:25.647221 systemd[1]: Started session-1.scope. Oct 2 20:45:25.664107 google-accounts[1248]: WARNING OS Login not installed. Oct 2 20:45:25.665377 google-accounts[1248]: INFO Creating a new user account for 0. Oct 2 20:45:25.672147 systemd-nspawn[1208]: useradd: invalid user name '0': use --badname to ignore Oct 2 20:45:25.672977 google-accounts[1248]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Oct 2 20:45:25.871550 systemd[1]: Started sshd@1-10.128.0.25:22-147.75.109.163:45148.service. Oct 2 20:45:26.157536 sshd[1283]: Accepted publickey for core from 147.75.109.163 port 45148 ssh2: RSA SHA256:YZ55TWlzWgADGjAqFmi8snyQcvYt3mTHwCW+5ys0g/Q Oct 2 20:45:26.159431 sshd[1283]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:45:26.166036 systemd[1]: Started session-2.scope. Oct 2 20:45:26.166820 systemd-logind[1121]: New session 2 of user core. Oct 2 20:45:26.376947 sshd[1283]: pam_unix(sshd:session): session closed for user core Oct 2 20:45:26.381194 systemd[1]: sshd@1-10.128.0.25:22-147.75.109.163:45148.service: Deactivated successfully. Oct 2 20:45:26.382278 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 20:45:26.383251 systemd-logind[1121]: Session 2 logged out. Waiting for processes to exit. Oct 2 20:45:26.384494 systemd-logind[1121]: Removed session 2. 
Oct 2 20:45:26.424458 systemd[1]: Started sshd@2-10.128.0.25:22-147.75.109.163:45154.service. Oct 2 20:45:26.715676 sshd[1289]: Accepted publickey for core from 147.75.109.163 port 45154 ssh2: RSA SHA256:YZ55TWlzWgADGjAqFmi8snyQcvYt3mTHwCW+5ys0g/Q Oct 2 20:45:26.717524 sshd[1289]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:45:26.724115 systemd[1]: Started session-3.scope. Oct 2 20:45:26.724726 systemd-logind[1121]: New session 3 of user core. Oct 2 20:45:26.928091 sshd[1289]: pam_unix(sshd:session): session closed for user core Oct 2 20:45:26.932367 systemd[1]: sshd@2-10.128.0.25:22-147.75.109.163:45154.service: Deactivated successfully. Oct 2 20:45:26.933409 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 20:45:26.934281 systemd-logind[1121]: Session 3 logged out. Waiting for processes to exit. Oct 2 20:45:26.935490 systemd-logind[1121]: Removed session 3. Oct 2 20:45:26.975085 systemd[1]: Started sshd@3-10.128.0.25:22-147.75.109.163:45166.service. Oct 2 20:45:27.265114 sshd[1295]: Accepted publickey for core from 147.75.109.163 port 45166 ssh2: RSA SHA256:YZ55TWlzWgADGjAqFmi8snyQcvYt3mTHwCW+5ys0g/Q Oct 2 20:45:27.267053 sshd[1295]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:45:27.274222 systemd[1]: Started session-4.scope. Oct 2 20:45:27.275306 systemd-logind[1121]: New session 4 of user core. Oct 2 20:45:27.482927 sshd[1295]: pam_unix(sshd:session): session closed for user core Oct 2 20:45:27.486912 systemd[1]: sshd@3-10.128.0.25:22-147.75.109.163:45166.service: Deactivated successfully. Oct 2 20:45:27.487950 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 20:45:27.488810 systemd-logind[1121]: Session 4 logged out. Waiting for processes to exit. Oct 2 20:45:27.490024 systemd-logind[1121]: Removed session 4. Oct 2 20:45:27.528400 systemd[1]: Started sshd@4-10.128.0.25:22-147.75.109.163:45180.service. Oct 2 20:45:27.816069 sshd[1301]: Accepted publickey for core from 147.75.109.163 port 45180 ssh2: RSA SHA256:YZ55TWlzWgADGjAqFmi8snyQcvYt3mTHwCW+5ys0g/Q Oct 2 20:45:27.818017 sshd[1301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:45:27.824594 systemd[1]: Started session-5.scope. Oct 2 20:45:27.825435 systemd-logind[1121]: New session 5 of user core. Oct 2 20:45:28.016104 sudo[1304]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 20:45:28.016522 sudo[1304]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:45:28.026014 dbus-daemon[1099]: \xd0\xfd\xfc\xe1\xa4U: received setenforce notice (enforcing=136036592) Oct 2 20:45:28.028316 sudo[1304]: pam_unix(sudo:session): session closed for user root Oct 2 20:45:28.073594 sshd[1301]: pam_unix(sshd:session): session closed for user core Oct 2 20:45:28.078865 systemd[1]: sshd@4-10.128.0.25:22-147.75.109.163:45180.service: Deactivated successfully. Oct 2 20:45:28.080071 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 20:45:28.081042 systemd-logind[1121]: Session 5 logged out. Waiting for processes to exit. Oct 2 20:45:28.082360 systemd-logind[1121]: Removed session 5. Oct 2 20:45:28.119567 systemd[1]: Started sshd@5-10.128.0.25:22-147.75.109.163:45194.service. 
Oct 2 20:45:28.406631 sshd[1308]: Accepted publickey for core from 147.75.109.163 port 45194 ssh2: RSA SHA256:YZ55TWlzWgADGjAqFmi8snyQcvYt3mTHwCW+5ys0g/Q Oct 2 20:45:28.408927 sshd[1308]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:45:28.415575 systemd[1]: Started session-6.scope. Oct 2 20:45:28.416532 systemd-logind[1121]: New session 6 of user core. Oct 2 20:45:28.584235 sudo[1312]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 20:45:28.584623 sudo[1312]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:45:28.589032 sudo[1312]: pam_unix(sudo:session): session closed for user root Oct 2 20:45:28.601086 sudo[1311]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 20:45:28.601461 sudo[1311]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:45:28.614398 systemd[1]: Stopping audit-rules.service... Oct 2 20:45:28.615000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 20:45:28.621972 kernel: kauditd_printk_skb: 182 callbacks suppressed Oct 2 20:45:28.622110 kernel: audit: type=1305 audit(1696279528.615:165): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 20:45:28.622156 auditctl[1315]: No rules Oct 2 20:45:28.623131 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 20:45:28.623393 systemd[1]: Stopped audit-rules.service. Oct 2 20:45:28.615000 audit[1315]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffe2360af0 a2=420 a3=0 items=0 ppid=1 pid=1315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:28.638465 systemd[1]: Starting audit-rules.service... Oct 2 20:45:28.615000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 20:45:28.677208 kernel: audit: type=1300 audit(1696279528.615:165): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffe2360af0 a2=420 a3=0 items=0 ppid=1 pid=1315 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:28.677382 kernel: audit: type=1327 audit(1696279528.615:165): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 20:45:28.699373 kernel: audit: type=1131 audit(1696279528.622:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:28.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:28.704124 augenrules[1332]: No rules Oct 2 20:45:28.705022 systemd[1]: Finished audit-rules.service. Oct 2 20:45:28.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:45:28.712282 sudo[1311]: pam_unix(sudo:session): session closed for user root Oct 2 20:45:28.711000 audit[1311]: USER_END pid=1311 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:45:28.728815 kernel: audit: type=1130 audit(1696279528.704:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:28.728895 kernel: audit: type=1106 audit(1696279528.711:168): pid=1311 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:45:28.711000 audit[1311]: CRED_DISP pid=1311 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:45:28.776124 kernel: audit: type=1104 audit(1696279528.711:169): pid=1311 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:45:28.776563 sshd[1308]: pam_unix(sshd:session): session closed for user core Oct 2 20:45:28.777000 audit[1308]: USER_END pid=1308 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 20:45:28.781437 systemd-logind[1121]: Session 6 logged out. Waiting for processes to exit. Oct 2 20:45:28.783343 systemd[1]: sshd@5-10.128.0.25:22-147.75.109.163:45194.service: Deactivated successfully. Oct 2 20:45:28.784451 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 20:45:28.786431 systemd-logind[1121]: Removed session 6. Oct 2 20:45:28.777000 audit[1308]: CRED_DISP pid=1308 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 20:45:28.819196 systemd[1]: Started sshd@6-10.128.0.25:22-147.75.109.163:45208.service. Oct 2 20:45:28.835235 kernel: audit: type=1106 audit(1696279528.777:170): pid=1308 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 20:45:28.835399 kernel: audit: type=1104 audit(1696279528.777:171): pid=1308 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 20:45:28.860588 kernel: audit: type=1131 audit(1696279528.782:172): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.25:22-147.75.109.163:45194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 20:45:28.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.25:22-147.75.109.163:45194 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:28.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.25:22-147.75.109.163:45208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:29.110000 audit[1338]: USER_ACCT pid=1338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 20:45:29.113135 sshd[1338]: Accepted publickey for core from 147.75.109.163 port 45208 ssh2: RSA SHA256:YZ55TWlzWgADGjAqFmi8snyQcvYt3mTHwCW+5ys0g/Q Oct 2 20:45:29.113000 audit[1338]: CRED_ACQ pid=1338 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 20:45:29.113000 audit[1338]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffd3d902410 a2=3 a3=0 items=0 ppid=1 pid=1338 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:29.113000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 20:45:29.114416 sshd[1338]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 20:45:29.120854 systemd[1]: Started session-7.scope. Oct 2 20:45:29.121447 systemd-logind[1121]: New session 7 of user core. Oct 2 20:45:29.128000 audit[1338]: USER_START pid=1338 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 20:45:29.130000 audit[1340]: CRED_ACQ pid=1340 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 20:45:29.288000 audit[1341]: USER_ACCT pid=1341 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:45:29.290455 sudo[1341]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 20:45:29.288000 audit[1341]: CRED_REFR pid=1341 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:45:29.290922 sudo[1341]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 20:45:29.291000 audit[1341]: USER_START pid=1341 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:45:29.885521 systemd[1]: Reloading. 
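The audit-rules restart recorded a few entries earlier (the CONFIG_CHANGE record whose PROCTITLE decodes to "/sbin/auditctl -D", followed by augenrules[1332] reporting "No rules") is a flush-and-reload cycle after the two rules files were removed via sudo. Roughly, and assuming the stock unit behaviour rather than its exact ExecStart lines:

    /sbin/auditctl -D     # stop phase: delete every loaded kernel audit rule
    augenrules --load     # start phase: rebuild /etc/audit/audit.rules from /etc/audit/rules.d/ and load it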
Oct 2 20:45:30.008835 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2023-10-02T20:45:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:45:30.009372 /usr/lib/systemd/system-generators/torcx-generator[1370]: time="2023-10-02T20:45:30Z" level=info msg="torcx already run" Oct 2 20:45:30.095510 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 20:45:30.095539 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:45:30.119084 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:45:30.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.228000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.228000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.228000 audit: BPF prog-id=37 op=LOAD Oct 2 20:45:30.228000 audit: BPF prog-id=35 op=UNLOAD Oct 2 20:45:30.230000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.230000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.230000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.230000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.230000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.230000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.230000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.230000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.230000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.230000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.230000 audit: BPF prog-id=38 op=LOAD Oct 2 20:45:30.230000 audit: BPF prog-id=27 op=UNLOAD Oct 2 20:45:30.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.231000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 20:45:30.231000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit: BPF prog-id=39 op=LOAD Oct 2 20:45:30.232000 audit: BPF prog-id=24 op=UNLOAD Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit: BPF prog-id=40 op=LOAD Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.232000 audit: BPF prog-id=41 op=LOAD Oct 2 20:45:30.232000 audit: BPF prog-id=25 op=UNLOAD Oct 2 20:45:30.232000 audit: BPF prog-id=26 op=UNLOAD Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit: BPF prog-id=42 op=LOAD Oct 2 20:45:30.234000 audit: BPF prog-id=32 op=UNLOAD Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { 
perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit: BPF prog-id=43 op=LOAD Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit: BPF prog-id=44 op=LOAD Oct 2 20:45:30.234000 audit: BPF prog-id=33 op=UNLOAD Oct 2 20:45:30.234000 audit: BPF prog-id=34 op=UNLOAD Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 
comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.234000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.235000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.235000 audit: BPF prog-id=45 op=LOAD Oct 2 20:45:30.235000 audit: BPF prog-id=28 op=UNLOAD Oct 2 20:45:30.237000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.237000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.237000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.237000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.237000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.237000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.237000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 
permissive=0 Oct 2 20:45:30.237000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.237000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.237000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.237000 audit: BPF prog-id=46 op=LOAD Oct 2 20:45:30.237000 audit: BPF prog-id=18 op=UNLOAD Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit: BPF prog-id=47 op=LOAD Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.238000 audit: BPF prog-id=48 op=LOAD Oct 2 20:45:30.238000 audit: BPF prog-id=19 op=UNLOAD Oct 2 20:45:30.238000 audit: BPF prog-id=20 op=UNLOAD Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit: BPF prog-id=49 op=LOAD Oct 2 20:45:30.240000 audit: BPF prog-id=29 op=UNLOAD Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.240000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.241000 audit: BPF prog-id=50 op=LOAD Oct 2 20:45:30.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.241000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.241000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
20:45:30.241000 audit: BPF prog-id=51 op=LOAD Oct 2 20:45:30.241000 audit: BPF prog-id=30 op=UNLOAD Oct 2 20:45:30.241000 audit: BPF prog-id=31 op=UNLOAD Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit: BPF prog-id=52 op=LOAD Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.242000 
audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.243000 audit: BPF prog-id=53 op=LOAD Oct 2 20:45:30.243000 audit: BPF prog-id=21 op=UNLOAD Oct 2 20:45:30.243000 audit: BPF prog-id=22 op=UNLOAD Oct 2 20:45:30.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.243000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.243000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.244000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:30.244000 audit: BPF prog-id=54 op=LOAD Oct 2 20:45:30.244000 audit: BPF prog-id=23 op=UNLOAD Oct 2 20:45:30.261371 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 20:45:30.269948 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 20:45:30.270949 systemd[1]: Reached target network-online.target. Oct 2 20:45:30.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:30.271000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:30.273292 systemd[1]: Started kubelet.service. 
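The long runs above of "AVC avc: denied { bpf } ... capability=39" and "denied { perfmon } ... capability=38" are SELinux refusing CAP_BPF and CAP_PERFMON capability checks to PID 1 while systemd swaps out its per-unit BPF programs, which is what the interleaved "BPF prog-id=NN op=LOAD/UNLOAD" records correspond to; permissive=0 means the checks were enforced, not merely logged. Below is a minimal sketch for condensing these runs, assuming the console output has been saved to a plain-text file (the file name is hypothetical), tallying denials per denied permission and per audited comm.

```go
// avc_tally.go: a minimal sketch (not part of this system) that counts
// SELinux AVC denials per denied permission and per comm in a saved copy
// of the console log above. The file name is hypothetical.
package main

import (
	"bufio"
	"fmt"
	"os"
	"regexp"
)

func main() {
	f, err := os.Open("boot-console.log") // hypothetical file name
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Matches fragments like: avc: denied { bpf } for pid=1 comm="systemd"
	re := regexp.MustCompile(`avc:\s+denied\s+\{ (\w+) \}.*?comm="([^"]+)"`)

	counts := map[string]int{}
	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // log lines can be very long
	for sc.Scan() {
		for _, m := range re.FindAllStringSubmatch(sc.Text(), -1) {
			counts[m[2]+" denied "+m[1]]++ // e.g. `systemd denied bpf`
		}
	}
	for k, v := range counts {
		fmt.Printf("%-40s %d\n", k, v)
	}
}
```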
Oct 2 20:45:30.293199 systemd[1]: Starting coreos-metadata.service... Oct 2 20:45:30.375195 coreos-metadata[1423]: Oct 02 20:45:30.375 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1 Oct 2 20:45:30.376988 coreos-metadata[1423]: Oct 02 20:45:30.376 INFO Fetch successful Oct 2 20:45:30.376988 coreos-metadata[1423]: Oct 02 20:45:30.376 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1 Oct 2 20:45:30.377707 coreos-metadata[1423]: Oct 02 20:45:30.377 INFO Fetch successful Oct 2 20:45:30.377707 coreos-metadata[1423]: Oct 02 20:45:30.377 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1 Oct 2 20:45:30.378470 coreos-metadata[1423]: Oct 02 20:45:30.378 INFO Fetch successful Oct 2 20:45:30.378586 coreos-metadata[1423]: Oct 02 20:45:30.378 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1 Oct 2 20:45:30.379267 coreos-metadata[1423]: Oct 02 20:45:30.379 INFO Fetch successful Oct 2 20:45:30.387040 kubelet[1415]: E1002 20:45:30.386990 1415 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Oct 2 20:45:30.390000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Oct 2 20:45:30.390539 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Oct 2 20:45:30.390812 systemd[1]: kubelet.service: Failed with result 'exit-code'. Oct 2 20:45:30.393134 systemd[1]: Finished coreos-metadata.service. Oct 2 20:45:30.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:30.839664 systemd[1]: Stopped kubelet.service. Oct 2 20:45:30.839000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:30.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:30.861525 systemd[1]: Reloading. Oct 2 20:45:30.973450 /usr/lib/systemd/system-generators/torcx-generator[1479]: time="2023-10-02T20:45:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 20:45:30.973502 /usr/lib/systemd/system-generators/torcx-generator[1479]: time="2023-10-02T20:45:30Z" level=info msg="torcx already run" Oct 2 20:45:31.074233 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
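Two things happen in the block above: the kubelet's first start fails simply because /var/lib/kubelet/config.yaml has not been written yet (systemd stops the unit and starts it again below), and coreos-metadata resolves the instance's hostname, external and internal IPs, and machine type from the GCE metadata service at 169.254.169.254 using the endpoint paths shown in the fetch lines. The following is a minimal sketch of those same requests; the Metadata-Flavor: Google header is the usual GCE metadata-server requirement and is an assumption here, since the log does not show request headers.

```go
// gce_metadata.go: a minimal sketch of the fetches coreos-metadata performs
// above. Endpoint paths are taken verbatim from the log; the
// "Metadata-Flavor: Google" header is an assumption (the log shows no headers).
package main

import (
	"fmt"
	"io"
	"net/http"
)

func fetch(path string) (string, error) {
	req, err := http.NewRequest("GET", "http://169.254.169.254"+path, nil)
	if err != nil {
		return "", err
	}
	req.Header.Set("Metadata-Flavor", "Google") // assumed GCE requirement
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	paths := []string{
		"/computeMetadata/v1/instance/hostname",
		"/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip",
		"/computeMetadata/v1/instance/network-interfaces/0/ip",
		"/computeMetadata/v1/instance/machine-type",
	}
	for _, p := range paths {
		v, err := fetch(p)
		if err != nil {
			fmt.Println(p, "error:", err)
			continue
		}
		fmt.Println(p, "=>", v)
	}
}
```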
Oct 2 20:45:31.074260 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 20:45:31.098008 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 20:45:31.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.187000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.187000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.188000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.188000 audit: BPF prog-id=55 op=LOAD Oct 2 20:45:31.188000 audit: BPF prog-id=37 op=UNLOAD Oct 2 20:45:31.189000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.189000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.189000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.189000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.189000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.190000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.190000 audit: BPF prog-id=56 op=LOAD Oct 2 20:45:31.190000 audit: BPF prog-id=38 op=UNLOAD Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit: BPF prog-id=57 op=LOAD Oct 2 20:45:31.191000 audit: BPF prog-id=39 op=UNLOAD Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit: BPF prog-id=58 op=LOAD Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.191000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.192000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.192000 audit: BPF prog-id=59 op=LOAD Oct 2 20:45:31.192000 audit: BPF prog-id=40 op=UNLOAD Oct 2 20:45:31.192000 audit: BPF prog-id=41 op=UNLOAD Oct 2 20:45:31.193000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.193000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.193000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.193000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.193000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.193000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.193000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.193000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.193000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit: BPF prog-id=60 op=LOAD Oct 2 20:45:31.195000 audit: BPF prog-id=42 op=UNLOAD Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit: BPF prog-id=61 op=LOAD Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.195000 audit: BPF prog-id=62 op=LOAD Oct 2 20:45:31.195000 audit: BPF prog-id=43 op=UNLOAD Oct 2 20:45:31.195000 audit: BPF prog-id=44 op=UNLOAD Oct 2 20:45:31.196000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.196000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.196000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.196000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.196000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.196000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.196000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.196000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.196000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.196000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.196000 audit: BPF prog-id=63 op=LOAD Oct 2 20:45:31.196000 audit: BPF prog-id=45 op=UNLOAD Oct 2 20:45:31.198000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.198000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.198000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.198000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.198000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.198000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.198000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.198000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.198000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 
audit: BPF prog-id=64 op=LOAD Oct 2 20:45:31.199000 audit: BPF prog-id=46 op=UNLOAD Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit: BPF prog-id=65 op=LOAD Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.199000 audit: BPF prog-id=66 op=LOAD Oct 2 20:45:31.199000 audit: BPF prog-id=47 op=UNLOAD Oct 2 20:45:31.199000 audit: BPF prog-id=48 op=UNLOAD Oct 2 20:45:31.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.201000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.201000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit: BPF prog-id=67 op=LOAD Oct 2 20:45:31.202000 audit: BPF prog-id=49 op=UNLOAD Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit: BPF prog-id=68 op=LOAD Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.202000 audit: BPF prog-id=69 op=LOAD Oct 2 20:45:31.202000 audit: BPF prog-id=50 op=UNLOAD Oct 2 20:45:31.202000 audit: BPF prog-id=51 op=UNLOAD Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit: BPF prog-id=70 op=LOAD Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.204000 audit: BPF prog-id=71 op=LOAD Oct 2 20:45:31.204000 audit: BPF prog-id=52 op=UNLOAD Oct 2 20:45:31.204000 audit: BPF prog-id=53 op=UNLOAD Oct 2 20:45:31.205000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.205000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.205000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.205000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.205000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.205000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.205000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.205000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.205000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.205000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:31.205000 audit: BPF prog-id=72 op=LOAD Oct 2 20:45:31.205000 audit: BPF prog-id=54 op=UNLOAD Oct 2 20:45:31.234544 systemd[1]: Started kubelet.service. Oct 2 20:45:31.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:31.289229 kubelet[1523]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 20:45:31.289597 kubelet[1523]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 20:45:31.289658 kubelet[1523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 20:45:31.289838 kubelet[1523]: I1002 20:45:31.289793 1523 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 20:45:31.291295 kubelet[1523]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 20:45:31.291402 kubelet[1523]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
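The restarted kubelet (pid 1523) immediately repeats three deprecation warnings: --container-runtime, --pod-infra-container-image, and --volume-plugin-dir are slated for removal and, where applicable, should move into the file passed via --config. Below is a minimal pre-flight sketch that checks an argument file for exactly those flags; the drop-in path is a hypothetical example, since the log does not show where this node's kubelet arguments come from.

```go
// deprecated_flags.go: a minimal sketch that scans a kubelet argument file
// for the flags reported as deprecated in the log above. The drop-in path
// is hypothetical; on a real node the arguments may come from the
// kubelet.service unit or its environment files.
package main

import (
	"fmt"
	"os"
	"strings"
)

var deprecated = []string{
	"--container-runtime",         // only valid value is 'remote'; removed in 1.27
	"--pod-infra-container-image", // sandbox image now comes from the CRI runtime
	"--volume-plugin-dir",         // belongs in the file given by --config
}

func main() {
	data, err := os.ReadFile("/etc/systemd/system/kubelet.service.d/10-kubelet-args.conf") // hypothetical path
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	for _, flag := range deprecated {
		if strings.Contains(string(data), flag) {
			fmt.Printf("%s is still set; see the kubelet config-file docs for the replacement\n", flag)
		}
	}
}
```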
Oct 2 20:45:31.291461 kubelet[1523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 20:45:32.303772 kubelet[1523]: I1002 20:45:32.303697 1523 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 20:45:32.303772 kubelet[1523]: I1002 20:45:32.303751 1523 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 20:45:32.304341 kubelet[1523]: I1002 20:45:32.304090 1523 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 20:45:32.306594 kubelet[1523]: I1002 20:45:32.306569 1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 20:45:32.311134 kubelet[1523]: I1002 20:45:32.311080 1523 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 20:45:32.311410 kubelet[1523]: I1002 20:45:32.311374 1523 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 20:45:32.311497 kubelet[1523]: I1002 20:45:32.311482 1523 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 20:45:32.311683 kubelet[1523]: I1002 20:45:32.311516 1523 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 20:45:32.311683 kubelet[1523]: I1002 20:45:32.311543 1523 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 20:45:32.311683 kubelet[1523]: I1002 20:45:32.311676 1523 state_mem.go:36] "Initialized new in-memory state store" Oct 2 20:45:32.319297 kubelet[1523]: I1002 20:45:32.319267 1523 kubelet.go:381] "Attempting to sync node with API server" Oct 2 20:45:32.319297 kubelet[1523]: I1002 20:45:32.319298 1523 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 20:45:32.319512 kubelet[1523]: I1002 20:45:32.319323 1523 kubelet.go:281] "Adding apiserver pod source" Oct 2 20:45:32.319512 kubelet[1523]: I1002 20:45:32.319340 1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 20:45:32.320209 kubelet[1523]: E1002 
20:45:32.320184 1523 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:32.320338 kubelet[1523]: E1002 20:45:32.320244 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:32.321517 kubelet[1523]: I1002 20:45:32.321482 1523 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 20:45:32.325033 kubelet[1523]: W1002 20:45:32.324992 1523 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 20:45:32.325473 kubelet[1523]: I1002 20:45:32.325433 1523 server.go:1175] "Started kubelet" Oct 2 20:45:32.330000 audit[1523]: AVC avc: denied { mac_admin } for pid=1523 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:32.330000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:45:32.330000 audit[1523]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bc49c0 a1=c000afe8a0 a2=c000bc4990 a3=25 items=0 ppid=1 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.330000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:45:32.332373 kubelet[1523]: I1002 20:45:32.332308 1523 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 20:45:32.332586 kubelet[1523]: I1002 20:45:32.332569 1523 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 20:45:32.331000 audit[1523]: AVC avc: denied { mac_admin } for pid=1523 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:32.331000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:45:32.331000 audit[1523]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000c32980 a1=c000147a10 a2=c000c22690 a3=25 items=0 ppid=1 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.331000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:45:32.333392 kubelet[1523]: I1002 20:45:32.333243 1523 server.go:438] "Adding debug handlers to kubelet server" Oct 2 20:45:32.333920 kubelet[1523]: E1002 20:45:32.333702 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f54059a83", GenerateName:"", Namespace:"default", SelfLink:"", 
UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 325411459, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 325411459, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:45:32.335717 kubelet[1523]: I1002 20:45:32.334180 1523 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 20:45:32.335880 kubelet[1523]: I1002 20:45:32.335861 1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 20:45:32.338181 kubelet[1523]: W1002 20:45:32.338160 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:45:32.338355 kubelet[1523]: E1002 20:45:32.338339 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:45:32.339029 kubelet[1523]: W1002 20:45:32.339006 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.128.0.25" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:45:32.339183 kubelet[1523]: E1002 20:45:32.339168 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.25" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:45:32.339375 kubelet[1523]: E1002 20:45:32.335688 1523 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 20:45:32.339517 kubelet[1523]: E1002 20:45:32.339502 1523 kubelet.go:1317] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 20:45:32.342651 kubelet[1523]: E1002 20:45:32.342625 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:45:32.342838 kubelet[1523]: I1002 20:45:32.342797 1523 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 20:45:32.344057 kubelet[1523]: I1002 20:45:32.344031 1523 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 20:45:32.351973 kubelet[1523]: E1002 20:45:32.351850 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f54dc5554", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 339483988, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 339483988, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
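The hard-eviction thresholds in the NodeConfig dump near the top of this kubelet start-up (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%) mix one absolute quantity with capacity-relative percentages; a percentage floor cannot be evaluated while the image filesystem still reports capacity 0, as in the InvalidDiskCapacity warning above. A purely illustrative Python sketch of that signal/threshold comparison (hypothetical names and figures, not kubelet code):

    # Thresholds as logged above; quantities in bytes, percentages as fractions of capacity.
    THRESHOLDS = {
        "memory.available":  ("quantity", 100 * 1024 ** 2),   # 100Mi
        "nodefs.available":  ("percentage", 0.10),
        "nodefs.inodesFree": ("percentage", 0.05),
        "imagefs.available": ("percentage", 0.15),
    }

    def signal_breached(signal: str, available: float, capacity: float) -> bool:
        kind, value = THRESHOLDS[signal]
        floor = value if kind == "quantity" else value * capacity
        return available < floor

    # 2 GiB free of 8 GiB on nodefs is 25% free, above the 10% floor -> no eviction signal.
    print(signal_breached("nodefs.available", 2 * 1024 ** 3, 8 * 1024 ** 3))  # False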
Oct 2 20:45:32.352339 kubelet[1523]: E1002 20:45:32.352317 1523 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.128.0.25" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:45:32.375278 kubelet[1523]: W1002 20:45:32.375240 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:45:32.375437 kubelet[1523]: E1002 20:45:32.375300 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:45:32.382318 kubelet[1523]: I1002 20:45:32.382291 1523 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 20:45:32.382512 kubelet[1523]: I1002 20:45:32.382497 1523 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 20:45:32.382626 kubelet[1523]: I1002 20:45:32.382613 1523 state_mem.go:36] "Initialized new in-memory state store" Oct 2 20:45:32.383348 kubelet[1523]: E1002 20:45:32.383236 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575316e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.25 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380821223, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380821223, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:45:32.386188 kubelet[1523]: E1002 20:45:32.386083 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f5753331d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.25 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380828445, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380828445, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:45:32.386672 kubelet[1523]: I1002 20:45:32.386653 1523 policy_none.go:49] "None policy: Start" Oct 2 20:45:32.387594 kubelet[1523]: I1002 20:45:32.387567 1523 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 20:45:32.387768 kubelet[1523]: I1002 20:45:32.387754 1523 state_mem.go:35] "Initializing new in-memory state store" Oct 2 20:45:32.388646 kubelet[1523]: E1002 20:45:32.388558 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575344d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.25 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380832984, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380832984, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:45:32.396997 systemd[1]: Created slice kubepods.slice. Oct 2 20:45:32.404814 systemd[1]: Created slice kubepods-burstable.slice. 
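The names of the rejected events above end in a hex-encoded nanosecond timestamp appended to the node name: 0x178a653f575316e7 ns, for example, is 20:45:32.380821223 UTC, exactly the FirstTimestamp of the NodeHasSufficientMemory event. A small Python sketch to recover that timestamp from an event name (the naming scheme is inferred from the values in this log):

    from datetime import datetime, timezone

    def event_first_timestamp(event_name: str) -> datetime:
        # Suffix after the last '.' is the event's first timestamp as hex UnixNano.
        nanos = int(event_name.rsplit(".", 1)[1], 16)
        secs, rem = divmod(nanos, 1_000_000_000)
        return datetime.fromtimestamp(secs, tz=timezone.utc).replace(microsecond=rem // 1000)

    # "10.128.0.25.178a653f54059a83" -> 2023-10-02 20:45:32.325411+00:00,
    # matching the FirstTimestamp of the first rejected "Starting kubelet" event above.
    print(event_first_timestamp("10.128.0.25.178a653f54059a83"))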
Oct 2 20:45:32.404000 audit[1538]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1538 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.404000 audit[1538]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffce0a4ed00 a2=0 a3=7ffce0a4ecec items=0 ppid=1523 pid=1538 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.404000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 20:45:32.409000 audit[1543]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.409000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffe5f6b5960 a2=0 a3=7ffe5f6b594c items=0 ppid=1523 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.409000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 20:45:32.411857 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 20:45:32.417656 kubelet[1523]: I1002 20:45:32.417611 1523 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 20:45:32.416000 audit[1523]: AVC avc: denied { mac_admin } for pid=1523 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:32.416000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 20:45:32.416000 audit[1523]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000ede8d0 a1=c000efe6a8 a2=c000ede8a0 a3=25 items=0 ppid=1 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.416000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 20:45:32.418286 kubelet[1523]: I1002 20:45:32.418116 1523 server.go:86] "Unprivileged containerized plugins might not work. 
Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 20:45:32.418472 kubelet[1523]: I1002 20:45:32.418364 1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 20:45:32.420230 kubelet[1523]: E1002 20:45:32.419323 1523 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.25\" not found" Oct 2 20:45:32.423053 kubelet[1523]: E1002 20:45:32.422953 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f59b28552", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 420629842, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 420629842, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
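The mac_admin denials and SELINUX_ERR records interleaved above correspond to syscall 188 on x86_64, setxattr: the kubelet tries to relabel its plugin and device-plugin directories with security.selinux=system_u:object_r:container_file_t:s0 and the kernel returns EINVAL (exit=-22), which surfaces as the "invalid argument" messages. In Python terms the denied operation looks roughly like this (illustration only; on such a host it fails the same way):

    import os

    # Same relabel the kubelet attempts on /var/lib/kubelet/plugins_registry (and the
    # plugins and device-plugins dirs); denied here because the requested context is
    # not accepted by the loaded policy and the process lacks mac_admin.
    os.setxattr("/var/lib/kubelet/plugins_registry",
                "security.selinux",
                b"system_u:object_r:container_file_t:s0")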
Oct 2 20:45:32.415000 audit[1545]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.415000 audit[1545]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd4f0d9a80 a2=0 a3=7ffd4f0d9a6c items=0 ppid=1523 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.415000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:45:32.433000 audit[1550]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.433000 audit[1550]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffd556807d0 a2=0 a3=7ffd556807bc items=0 ppid=1523 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.433000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 20:45:32.443805 kubelet[1523]: E1002 20:45:32.443757 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:32.445117 kubelet[1523]: I1002 20:45:32.445078 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.25" Oct 2 20:45:32.447345 kubelet[1523]: E1002 20:45:32.446768 1523 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.25" Oct 2 20:45:32.447345 kubelet[1523]: E1002 20:45:32.446906 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575316e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.25 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380821223, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 445040452, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f575316e7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
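The audit PROCTITLE fields in these NETFILTER_CFG records are hex-encoded, NUL-separated argv vectors. A few lines of Python decode them back into the iptables invocations the kubelet is issuing; the hex below is copied from the KUBE-IPTABLES-HINT record above:

    proctitle = ("69707461626C6573002D770035002D5700313030303030"
                 "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65")
    argv = bytes.fromhex(proctitle).split(b"\x00")
    print(" ".join(arg.decode() for arg in argv))
    # -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle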
Oct 2 20:45:32.448327 kubelet[1523]: E1002 20:45:32.448229 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f5753331d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.25 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380828445, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 445047500, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f5753331d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:45:32.449671 kubelet[1523]: E1002 20:45:32.449593 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575344d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.25 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380832984, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 445051591, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f575344d8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:45:32.490000 audit[1555]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.490000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe694f0310 a2=0 a3=7ffe694f02fc items=0 ppid=1523 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.490000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 20:45:32.492000 audit[1556]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.492000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff74bdcc70 a2=0 a3=7fff74bdcc5c items=0 ppid=1523 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.492000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 20:45:32.499000 audit[1559]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.499000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffe092ecfb0 a2=0 a3=7ffe092ecf9c items=0 ppid=1523 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.499000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 20:45:32.505000 audit[1562]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.505000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffe7add50e0 a2=0 a3=7ffe7add50cc items=0 ppid=1523 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.505000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 20:45:32.506000 audit[1563]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.506000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffff11e4910 a2=0 a3=7ffff11e48fc items=0 ppid=1523 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.506000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 20:45:32.509000 audit[1564]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.509000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd528857b0 a2=0 a3=7ffd5288579c items=0 ppid=1523 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.509000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 20:45:32.512000 audit[1566]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.512000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffcb9591110 a2=0 a3=7ffcb95910fc items=0 ppid=1523 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.512000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 20:45:32.515000 audit[1568]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.515000 audit[1568]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffee8581f50 a2=0 a3=7ffee8581f3c items=0 ppid=1523 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.515000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 20:45:32.543000 audit[1571]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.543000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fff8231d670 a2=0 a3=7fff8231d65c items=0 ppid=1523 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.543000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 20:45:32.544411 kubelet[1523]: E1002 20:45:32.544362 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:32.546000 audit[1573]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.546000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffed93c7f20 a2=0 a3=7ffed93c7f0c items=0 ppid=1523 pid=1573 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.546000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 20:45:32.555367 kubelet[1523]: E1002 20:45:32.554986 1523 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.128.0.25" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:45:32.562000 audit[1576]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.562000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffe789ce180 a2=0 a3=7ffe789ce16c items=0 ppid=1523 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.562000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 20:45:32.563572 kubelet[1523]: I1002 20:45:32.563536 1523 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Oct 2 20:45:32.564000 audit[1577]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.564000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe431104f0 a2=0 a3=7ffe431104dc items=0 ppid=1523 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.564000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 20:45:32.565000 audit[1578]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.565000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffff779280 a2=0 a3=7fffff77926c items=0 ppid=1523 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.565000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 20:45:32.566000 audit[1579]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.566000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffe005cda90 a2=0 a3=7ffe005cda7c items=0 ppid=1523 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.566000 
audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 20:45:32.567000 audit[1580]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.567000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffec724ea0 a2=0 a3=7fffec724e8c items=0 ppid=1523 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 20:45:32.569000 audit[1582]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:45:32.569000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdfb161380 a2=0 a3=7ffdfb16136c items=0 ppid=1523 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.569000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 20:45:32.571000 audit[1583]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.571000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fff6da4b0a0 a2=0 a3=7fff6da4b08c items=0 ppid=1523 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.571000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 20:45:32.573000 audit[1584]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1584 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.573000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffe30875400 a2=0 a3=7ffe308753ec items=0 ppid=1523 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.573000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 20:45:32.576000 audit[1586]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.576000 audit[1586]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffd9f3e5d50 a2=0 a3=7ffd9f3e5d3c items=0 ppid=1523 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.576000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 20:45:32.578000 audit[1587]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.578000 audit[1587]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff5b6cff50 a2=0 a3=7fff5b6cff3c items=0 ppid=1523 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.578000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 20:45:32.579000 audit[1588]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1588 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.579000 audit[1588]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc5cb19660 a2=0 a3=7ffc5cb1964c items=0 ppid=1523 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.579000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 20:45:32.582000 audit[1590]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1590 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.582000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7fffb2d47f30 a2=0 a3=7fffb2d47f1c items=0 ppid=1523 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.582000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 20:45:32.585000 audit[1592]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1592 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.585000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7fff11245570 a2=0 a3=7fff1124555c items=0 ppid=1523 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.585000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 20:45:32.589000 audit[1594]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1594 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.589000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fffd78fa540 a2=0 a3=7fffd78fa52c items=0 ppid=1523 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.589000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 20:45:32.592000 audit[1596]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.592000 audit[1596]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fffe6551160 a2=0 a3=7fffe655114c items=0 ppid=1523 pid=1596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.592000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 20:45:32.597000 audit[1598]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1598 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.597000 audit[1598]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffdf75c2900 a2=0 a3=7ffdf75c28ec items=0 ppid=1523 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.597000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 20:45:32.598822 kubelet[1523]: I1002 20:45:32.598793 1523 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Oct 2 20:45:32.598936 kubelet[1523]: I1002 20:45:32.598830 1523 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 20:45:32.598936 kubelet[1523]: I1002 20:45:32.598861 1523 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 20:45:32.598936 kubelet[1523]: E1002 20:45:32.598926 1523 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 20:45:32.600000 audit[1599]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1599 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.600000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe3c8e2160 a2=0 a3=7ffe3c8e214c items=0 ppid=1523 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.600000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 20:45:32.601450 kubelet[1523]: W1002 20:45:32.601420 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:45:32.601586 kubelet[1523]: E1002 20:45:32.601568 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:45:32.601000 audit[1600]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.601000 audit[1600]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff86845f80 a2=0 a3=7fff86845f6c items=0 ppid=1523 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.601000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 20:45:32.603000 audit[1601]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1601 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:45:32.603000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd6364c4f0 a2=0 a3=7ffd6364c4dc items=0 ppid=1523 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:32.603000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 20:45:32.644579 kubelet[1523]: E1002 20:45:32.644528 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:32.648672 kubelet[1523]: I1002 20:45:32.648618 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.25" Oct 2 20:45:32.650311 kubelet[1523]: E1002 20:45:32.650279 1523 kubelet_node_status.go:92] "Unable to register node with API server" 
err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.25" Oct 2 20:45:32.650311 kubelet[1523]: E1002 20:45:32.650209 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575316e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.25 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380821223, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 648564793, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f575316e7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:45:32.651518 kubelet[1523]: E1002 20:45:32.651423 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f5753331d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.25 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380828445, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 648580007, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f5753331d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:45:32.731357 kubelet[1523]: E1002 20:45:32.731241 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575344d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.25 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380832984, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 648585331, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f575344d8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:45:32.745589 kubelet[1523]: E1002 20:45:32.745532 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:32.845873 kubelet[1523]: E1002 20:45:32.845697 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:32.946542 kubelet[1523]: E1002 20:45:32.946476 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:32.957550 kubelet[1523]: E1002 20:45:32.957482 1523 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.128.0.25" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:45:33.047286 kubelet[1523]: E1002 20:45:33.047225 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:33.051468 kubelet[1523]: I1002 20:45:33.051439 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.25" Oct 2 20:45:33.053289 kubelet[1523]: E1002 20:45:33.053253 1523 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.25" Oct 2 20:45:33.053456 kubelet[1523]: E1002 20:45:33.053236 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575316e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.25 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380821223, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 33, 51379942, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f575316e7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:45:33.131393 kubelet[1523]: E1002 20:45:33.131175 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f5753331d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.25 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380828445, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 33, 51394019, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f5753331d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:45:33.147705 kubelet[1523]: E1002 20:45:33.147648 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:33.189349 kubelet[1523]: W1002 20:45:33.189305 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.128.0.25" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:45:33.189349 kubelet[1523]: E1002 20:45:33.189348 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.25" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:45:33.248254 kubelet[1523]: E1002 20:45:33.248191 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:33.320972 kubelet[1523]: E1002 20:45:33.320901 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:33.331698 kubelet[1523]: E1002 20:45:33.331591 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575344d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.25 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380832984, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 33, 51404770, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f575344d8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:45:33.348681 kubelet[1523]: E1002 20:45:33.348635 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:33.449073 kubelet[1523]: E1002 20:45:33.448923 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:33.549832 kubelet[1523]: E1002 20:45:33.549771 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:33.552604 kubelet[1523]: W1002 20:45:33.552567 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:45:33.552604 kubelet[1523]: E1002 20:45:33.552613 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:45:33.650660 kubelet[1523]: E1002 20:45:33.650587 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:33.751538 kubelet[1523]: E1002 20:45:33.751382 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:33.759431 kubelet[1523]: E1002 20:45:33.759392 1523 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.128.0.25" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:45:33.852258 kubelet[1523]: E1002 20:45:33.852181 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:33.855245 kubelet[1523]: I1002 20:45:33.855210 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.25" Oct 2 20:45:33.857126 kubelet[1523]: E1002 20:45:33.857094 1523 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.25" Oct 2 20:45:33.857296 kubelet[1523]: E1002 20:45:33.857100 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575316e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.25 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380821223, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 33, 855163157, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"10.128.0.25.178a653f575316e7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:45:33.859066 kubelet[1523]: E1002 20:45:33.858960 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f5753331d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.25 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380828445, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 33, 855171107, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f5753331d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:45:33.894888 kubelet[1523]: W1002 20:45:33.894824 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:45:33.894888 kubelet[1523]: E1002 20:45:33.894869 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:45:33.931256 kubelet[1523]: E1002 20:45:33.931133 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575344d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.25 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380832984, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 33, 855175530, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events 
"10.128.0.25.178a653f575344d8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:45:33.952658 kubelet[1523]: E1002 20:45:33.952563 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:34.001841 kubelet[1523]: W1002 20:45:34.001708 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:45:34.001841 kubelet[1523]: E1002 20:45:34.001769 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:45:34.053272 kubelet[1523]: E1002 20:45:34.053196 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:34.154263 kubelet[1523]: E1002 20:45:34.154188 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:34.255230 kubelet[1523]: E1002 20:45:34.255094 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:34.322034 kubelet[1523]: E1002 20:45:34.321959 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:34.356085 kubelet[1523]: E1002 20:45:34.356024 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:34.456295 kubelet[1523]: E1002 20:45:34.456194 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:34.557381 kubelet[1523]: E1002 20:45:34.557236 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:34.657708 kubelet[1523]: E1002 20:45:34.657645 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:34.758558 kubelet[1523]: E1002 20:45:34.758494 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:34.858765 kubelet[1523]: E1002 20:45:34.858612 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:34.959650 kubelet[1523]: E1002 20:45:34.959583 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:35.060430 kubelet[1523]: E1002 20:45:35.060367 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:35.161355 kubelet[1523]: E1002 20:45:35.161201 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:35.262153 kubelet[1523]: E1002 20:45:35.262081 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:35.323002 kubelet[1523]: E1002 20:45:35.322942 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:35.361707 kubelet[1523]: E1002 20:45:35.361651 1523 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.128.0.25" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:45:35.362800 kubelet[1523]: E1002 20:45:35.362759 1523 kubelet.go:2448] "Error getting node" 
err="node \"10.128.0.25\" not found" Oct 2 20:45:35.373544 kubelet[1523]: W1002 20:45:35.373499 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.128.0.25" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:45:35.373544 kubelet[1523]: E1002 20:45:35.373545 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.25" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:45:35.459701 kubelet[1523]: I1002 20:45:35.459176 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.25" Oct 2 20:45:35.460647 kubelet[1523]: E1002 20:45:35.460616 1523 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.25" Oct 2 20:45:35.460900 kubelet[1523]: E1002 20:45:35.460768 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575316e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.25 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380821223, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 35, 459119172, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f575316e7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:45:35.462005 kubelet[1523]: E1002 20:45:35.461920 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f5753331d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.25 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380828445, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 35, 459135775, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f5753331d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:45:35.462996 kubelet[1523]: E1002 20:45:35.462968 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:35.463196 kubelet[1523]: E1002 20:45:35.463114 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575344d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.25 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380832984, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 35, 459140738, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f575344d8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:45:35.563868 kubelet[1523]: E1002 20:45:35.563802 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:35.664339 kubelet[1523]: E1002 20:45:35.664277 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:35.765265 kubelet[1523]: E1002 20:45:35.765117 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:35.866379 kubelet[1523]: E1002 20:45:35.866307 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:35.967151 kubelet[1523]: E1002 20:45:35.967086 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:36.068013 kubelet[1523]: E1002 20:45:36.067869 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:36.168663 kubelet[1523]: E1002 20:45:36.168595 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:36.269226 kubelet[1523]: E1002 20:45:36.269172 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:36.324231 kubelet[1523]: E1002 20:45:36.324080 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:36.370278 kubelet[1523]: E1002 20:45:36.370230 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:36.414320 kubelet[1523]: W1002 20:45:36.414278 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:45:36.414320 kubelet[1523]: E1002 20:45:36.414319 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:45:36.471247 kubelet[1523]: E1002 20:45:36.471166 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:36.571925 kubelet[1523]: E1002 20:45:36.571872 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:36.628317 kubelet[1523]: W1002 20:45:36.628184 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:45:36.628317 kubelet[1523]: E1002 20:45:36.628229 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:45:36.672716 kubelet[1523]: E1002 20:45:36.672654 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:36.773338 kubelet[1523]: E1002 20:45:36.773280 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:36.805438 kubelet[1523]: W1002 20:45:36.805386 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API 
group "" at the cluster scope Oct 2 20:45:36.805438 kubelet[1523]: E1002 20:45:36.805435 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:45:36.874257 kubelet[1523]: E1002 20:45:36.874187 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:36.975253 kubelet[1523]: E1002 20:45:36.975106 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:37.075878 kubelet[1523]: E1002 20:45:37.075816 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:37.176660 kubelet[1523]: E1002 20:45:37.176590 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:37.277357 kubelet[1523]: E1002 20:45:37.277211 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:37.324929 kubelet[1523]: E1002 20:45:37.324846 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:37.378253 kubelet[1523]: E1002 20:45:37.378192 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:37.419163 kubelet[1523]: E1002 20:45:37.419125 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:45:37.479225 kubelet[1523]: E1002 20:45:37.479162 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:37.580034 kubelet[1523]: E1002 20:45:37.579867 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:37.680273 kubelet[1523]: E1002 20:45:37.680204 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:37.780871 kubelet[1523]: E1002 20:45:37.780809 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:37.882054 kubelet[1523]: E1002 20:45:37.881904 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:37.982613 kubelet[1523]: E1002 20:45:37.982536 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:38.083345 kubelet[1523]: E1002 20:45:38.083296 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:38.184262 kubelet[1523]: E1002 20:45:38.184124 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:38.284836 kubelet[1523]: E1002 20:45:38.284780 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:38.325433 kubelet[1523]: E1002 20:45:38.325348 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:38.385622 kubelet[1523]: E1002 20:45:38.385550 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:38.486646 kubelet[1523]: E1002 20:45:38.486486 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:38.563870 kubelet[1523]: E1002 20:45:38.563814 1523 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.128.0.25" is forbidden: User "system:anonymous" cannot get resource "leases" 
in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 20:45:38.587008 kubelet[1523]: E1002 20:45:38.586949 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:38.661901 kubelet[1523]: I1002 20:45:38.661833 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.25" Oct 2 20:45:38.663101 kubelet[1523]: E1002 20:45:38.663069 1523 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.25" Oct 2 20:45:38.663274 kubelet[1523]: E1002 20:45:38.663147 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575316e7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.25 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380821223, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 38, 661777219, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f575316e7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
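Note: the "failed to ensure lease exists, will retry in ..." messages above back off by doubling on each failure (800ms, 1.6s, 3.2s, 6.4s). The short Go sketch below only reproduces that doubling pattern for illustration; it is not the kubelet's actual lease-controller code, and the attempt count is chosen just to match the retries visible in this log.

    // Sketch only: reproduces the doubling retry delays seen in the
    // "failed to ensure lease exists, will retry in ..." messages above
    // (800ms -> 1.6s -> 3.2s -> 6.4s). Not the kubelet's real implementation.
    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	delay := 800 * time.Millisecond
    	for attempt := 1; attempt <= 4; attempt++ {
    		fmt.Printf("attempt %d failed, will retry in %v\n", attempt, delay)
    		delay *= 2 // double the wait after every failed attempt
    	}
    }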
Oct 2 20:45:38.665018 kubelet[1523]: E1002 20:45:38.664938 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f5753331d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.25 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380828445, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 38, 661790612, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f5753331d" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 20:45:38.666201 kubelet[1523]: E1002 20:45:38.666125 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.25.178a653f575344d8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.25", UID:"10.128.0.25", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.25 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.25"}, FirstTimestamp:time.Date(2023, time.October, 2, 20, 45, 32, 380832984, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 20, 45, 38, 661795195, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.25.178a653f575344d8" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 20:45:38.687430 kubelet[1523]: E1002 20:45:38.687366 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:38.788228 kubelet[1523]: E1002 20:45:38.788100 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:38.889115 kubelet[1523]: E1002 20:45:38.889039 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:38.989856 kubelet[1523]: E1002 20:45:38.989791 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:39.090641 kubelet[1523]: E1002 20:45:39.090501 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:39.191206 kubelet[1523]: E1002 20:45:39.191135 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:39.291828 kubelet[1523]: E1002 20:45:39.291769 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:39.326325 kubelet[1523]: E1002 20:45:39.326253 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:39.392582 kubelet[1523]: E1002 20:45:39.392455 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:39.493544 kubelet[1523]: E1002 20:45:39.493477 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:39.594181 kubelet[1523]: E1002 20:45:39.594112 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:39.598826 kubelet[1523]: W1002 20:45:39.598794 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.128.0.25" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:45:39.598826 kubelet[1523]: E1002 20:45:39.598830 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.25" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 20:45:39.694827 kubelet[1523]: E1002 20:45:39.694676 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:39.795459 kubelet[1523]: E1002 20:45:39.795385 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:39.896467 kubelet[1523]: E1002 20:45:39.896393 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:39.997240 kubelet[1523]: E1002 20:45:39.997098 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:40.097876 kubelet[1523]: E1002 20:45:40.097813 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:40.198480 kubelet[1523]: E1002 20:45:40.198417 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:40.299083 kubelet[1523]: E1002 20:45:40.298951 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:40.326474 kubelet[1523]: E1002 20:45:40.326406 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:40.399712 kubelet[1523]: E1002 20:45:40.399656 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:40.500567 kubelet[1523]: E1002 20:45:40.500497 1523 kubelet.go:2448] "Error getting node" 
err="node \"10.128.0.25\" not found" Oct 2 20:45:40.535400 kubelet[1523]: W1002 20:45:40.535355 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:45:40.535400 kubelet[1523]: E1002 20:45:40.535399 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 20:45:40.601097 kubelet[1523]: E1002 20:45:40.600951 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:40.701937 kubelet[1523]: E1002 20:45:40.701868 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:40.802428 kubelet[1523]: E1002 20:45:40.802372 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:40.903463 kubelet[1523]: E1002 20:45:40.903301 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:41.004009 kubelet[1523]: E1002 20:45:41.003939 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:41.104854 kubelet[1523]: E1002 20:45:41.104766 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:41.205587 kubelet[1523]: E1002 20:45:41.205443 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:41.210561 kubelet[1523]: W1002 20:45:41.210517 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:45:41.210561 kubelet[1523]: E1002 20:45:41.210564 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 20:45:41.305762 kubelet[1523]: E1002 20:45:41.305693 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:41.327170 kubelet[1523]: E1002 20:45:41.327089 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:41.406652 kubelet[1523]: E1002 20:45:41.406598 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:41.507776 kubelet[1523]: E1002 20:45:41.507623 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:41.608595 kubelet[1523]: E1002 20:45:41.608528 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:41.709600 kubelet[1523]: E1002 20:45:41.709544 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:41.810420 kubelet[1523]: E1002 20:45:41.810276 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:41.911035 kubelet[1523]: E1002 20:45:41.910970 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:41.972358 kubelet[1523]: W1002 20:45:41.972312 1523 
reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:45:41.972358 kubelet[1523]: E1002 20:45:41.972359 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 20:45:42.011671 kubelet[1523]: E1002 20:45:42.011583 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:42.112333 kubelet[1523]: E1002 20:45:42.112195 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:42.212843 kubelet[1523]: E1002 20:45:42.212781 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:42.306452 kubelet[1523]: I1002 20:45:42.306395 1523 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 20:45:42.313799 kubelet[1523]: E1002 20:45:42.313758 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:42.328103 kubelet[1523]: E1002 20:45:42.328040 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:42.414388 kubelet[1523]: E1002 20:45:42.414223 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:42.419570 kubelet[1523]: E1002 20:45:42.419522 1523 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.25\" not found" Oct 2 20:45:42.420226 kubelet[1523]: E1002 20:45:42.420197 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:45:42.514881 kubelet[1523]: E1002 20:45:42.514830 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:42.615792 kubelet[1523]: E1002 20:45:42.615709 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:42.691685 kubelet[1523]: E1002 20:45:42.691511 1523 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.128.0.25" not found Oct 2 20:45:42.715977 kubelet[1523]: E1002 20:45:42.715915 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:42.816228 kubelet[1523]: E1002 20:45:42.816166 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:42.916879 kubelet[1523]: E1002 20:45:42.916828 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:43.017626 kubelet[1523]: E1002 20:45:43.017448 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:43.118134 kubelet[1523]: E1002 20:45:43.118046 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:43.218960 kubelet[1523]: E1002 20:45:43.218900 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:43.320120 kubelet[1523]: E1002 20:45:43.319809 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:43.329144 kubelet[1523]: E1002 20:45:43.329084 
1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:43.420877 kubelet[1523]: E1002 20:45:43.420826 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:43.521752 kubelet[1523]: E1002 20:45:43.521681 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:43.622733 kubelet[1523]: E1002 20:45:43.622579 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:43.723179 kubelet[1523]: E1002 20:45:43.723105 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:43.741702 kubelet[1523]: E1002 20:45:43.741671 1523 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.128.0.25" not found Oct 2 20:45:43.823586 kubelet[1523]: E1002 20:45:43.823529 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:43.924560 kubelet[1523]: E1002 20:45:43.924408 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:44.024633 kubelet[1523]: E1002 20:45:44.024559 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:44.125269 kubelet[1523]: E1002 20:45:44.125223 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:44.226130 kubelet[1523]: E1002 20:45:44.225969 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:44.326869 kubelet[1523]: E1002 20:45:44.326803 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:44.330140 kubelet[1523]: E1002 20:45:44.330084 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:44.427535 kubelet[1523]: E1002 20:45:44.427484 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:44.528192 kubelet[1523]: E1002 20:45:44.528053 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:44.628983 kubelet[1523]: E1002 20:45:44.628927 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:44.729645 kubelet[1523]: E1002 20:45:44.729573 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:44.830215 kubelet[1523]: E1002 20:45:44.830089 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:44.930524 kubelet[1523]: E1002 20:45:44.930474 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:44.971970 kubelet[1523]: E1002 20:45:44.971910 1523 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.25\" not found" node="10.128.0.25" Oct 2 20:45:45.031193 kubelet[1523]: E1002 20:45:45.031116 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:45.064432 kubelet[1523]: I1002 20:45:45.064386 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.25" Oct 2 20:45:45.131962 kubelet[1523]: E1002 20:45:45.131795 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:45.144511 kubelet[1523]: I1002 20:45:45.144469 1523 kubelet_node_status.go:73] "Successfully registered node" node="10.128.0.25" Oct 2 20:45:45.232478 kubelet[1523]: 
E1002 20:45:45.232399 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:45.254907 sudo[1341]: pam_unix(sudo:session): session closed for user root Oct 2 20:45:45.254000 audit[1341]: USER_END pid=1341 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:45:45.260505 kernel: kauditd_printk_skb: 541 callbacks suppressed Oct 2 20:45:45.260643 kernel: audit: type=1106 audit(1696279545.254:637): pid=1341 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:45:45.257000 audit[1341]: CRED_DISP pid=1341 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:45:45.308350 kernel: audit: type=1104 audit(1696279545.257:638): pid=1341 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 20:45:45.309228 sshd[1338]: pam_unix(sshd:session): session closed for user core Oct 2 20:45:45.310000 audit[1338]: USER_END pid=1338 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 20:45:45.318343 systemd[1]: sshd@6-10.128.0.25:22-147.75.109.163:45208.service: Deactivated successfully. Oct 2 20:45:45.319468 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 20:45:45.321640 systemd-logind[1121]: Session 7 logged out. Waiting for processes to exit. Oct 2 20:45:45.323444 systemd-logind[1121]: Removed session 7. 
Oct 2 20:45:45.330784 kubelet[1523]: E1002 20:45:45.330755 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:45.333389 kubelet[1523]: E1002 20:45:45.333360 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:45.310000 audit[1338]: CRED_DISP pid=1338 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 20:45:45.368375 kernel: audit: type=1106 audit(1696279545.310:639): pid=1338 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 20:45:45.368506 kernel: audit: type=1104 audit(1696279545.310:640): pid=1338 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 20:45:45.368558 kernel: audit: type=1131 audit(1696279545.317:641): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.25:22-147.75.109.163:45208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:45.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.25:22-147.75.109.163:45208 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:45.434402 kubelet[1523]: E1002 20:45:45.434246 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:45.534995 kubelet[1523]: E1002 20:45:45.534928 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:45.635274 kubelet[1523]: E1002 20:45:45.635217 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:45.736033 kubelet[1523]: E1002 20:45:45.735889 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:45.836546 kubelet[1523]: E1002 20:45:45.836481 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:45.937231 kubelet[1523]: E1002 20:45:45.937164 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:46.037999 kubelet[1523]: E1002 20:45:46.037845 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:46.138436 kubelet[1523]: E1002 20:45:46.138372 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:46.238884 kubelet[1523]: E1002 20:45:46.238817 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:46.332146 kubelet[1523]: E1002 20:45:46.331990 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:46.339340 kubelet[1523]: E1002 20:45:46.339293 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:46.346409 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Oct 2 20:45:46.346000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:46.370776 kernel: audit: type=1131 audit(1696279546.346:642): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 20:45:46.397000 audit: BPF prog-id=62 op=UNLOAD Oct 2 20:45:46.397000 audit: BPF prog-id=61 op=UNLOAD Oct 2 20:45:46.412279 kernel: audit: type=1334 audit(1696279546.397:643): prog-id=62 op=UNLOAD Oct 2 20:45:46.412432 kernel: audit: type=1334 audit(1696279546.397:644): prog-id=61 op=UNLOAD Oct 2 20:45:46.412481 kernel: audit: type=1334 audit(1696279546.397:645): prog-id=60 op=UNLOAD Oct 2 20:45:46.397000 audit: BPF prog-id=60 op=UNLOAD Oct 2 20:45:46.440370 kubelet[1523]: E1002 20:45:46.440302 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:46.540989 kubelet[1523]: E1002 20:45:46.540936 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.25\" not found" Oct 2 20:45:46.641411 kubelet[1523]: I1002 20:45:46.641152 1523 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 20:45:46.642061 env[1130]: time="2023-10-02T20:45:46.641917869Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 20:45:46.642583 kubelet[1523]: I1002 20:45:46.642162 1523 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 20:45:46.642700 kubelet[1523]: E1002 20:45:46.642675 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:45:47.331781 kubelet[1523]: I1002 20:45:47.331684 1523 apiserver.go:52] "Watching apiserver" Oct 2 20:45:47.332138 kubelet[1523]: E1002 20:45:47.332113 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:47.335075 kubelet[1523]: I1002 20:45:47.335025 1523 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:45:47.335582 kubelet[1523]: I1002 20:45:47.335154 1523 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:45:47.343092 systemd[1]: Created slice kubepods-burstable-pod2b5b6383_5461_4be0_9516_72cbade21985.slice. 
Oct 2 20:45:47.353612 kubelet[1523]: I1002 20:45:47.353558 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cni-path\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.353612 kubelet[1523]: I1002 20:45:47.353615 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-etc-cni-netd\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.353831 kubelet[1523]: I1002 20:45:47.353656 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-xtables-lock\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.353831 kubelet[1523]: I1002 20:45:47.353694 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2xrqr\" (UniqueName: \"kubernetes.io/projected/2b5b6383-5461-4be0-9516-72cbade21985-kube-api-access-2xrqr\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.353831 kubelet[1523]: I1002 20:45:47.353755 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cilium-cgroup\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.353831 kubelet[1523]: I1002 20:45:47.353797 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b5b6383-5461-4be0-9516-72cbade21985-cilium-config-path\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.354113 kubelet[1523]: I1002 20:45:47.353855 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67741d45-bae5-489f-9412-a62e35d0ff92-xtables-lock\") pod \"kube-proxy-c49c4\" (UID: \"67741d45-bae5-489f-9412-a62e35d0ff92\") " pod="kube-system/kube-proxy-c49c4" Oct 2 20:45:47.354113 kubelet[1523]: I1002 20:45:47.353900 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cilium-run\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.354113 kubelet[1523]: I1002 20:45:47.353953 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-bpf-maps\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.354113 kubelet[1523]: I1002 20:45:47.354012 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-hostproc\") pod 
\"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.354113 kubelet[1523]: I1002 20:45:47.354055 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-lib-modules\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.354113 kubelet[1523]: I1002 20:45:47.354111 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-host-proc-sys-kernel\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.354412 kubelet[1523]: I1002 20:45:47.354178 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b5b6383-5461-4be0-9516-72cbade21985-hubble-tls\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.354412 kubelet[1523]: I1002 20:45:47.354249 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/67741d45-bae5-489f-9412-a62e35d0ff92-kube-proxy\") pod \"kube-proxy-c49c4\" (UID: \"67741d45-bae5-489f-9412-a62e35d0ff92\") " pod="kube-system/kube-proxy-c49c4" Oct 2 20:45:47.354412 kubelet[1523]: I1002 20:45:47.354292 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67741d45-bae5-489f-9412-a62e35d0ff92-lib-modules\") pod \"kube-proxy-c49c4\" (UID: \"67741d45-bae5-489f-9412-a62e35d0ff92\") " pod="kube-system/kube-proxy-c49c4" Oct 2 20:45:47.354412 kubelet[1523]: I1002 20:45:47.354360 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f7kqt\" (UniqueName: \"kubernetes.io/projected/67741d45-bae5-489f-9412-a62e35d0ff92-kube-api-access-f7kqt\") pod \"kube-proxy-c49c4\" (UID: \"67741d45-bae5-489f-9412-a62e35d0ff92\") " pod="kube-system/kube-proxy-c49c4" Oct 2 20:45:47.354612 kubelet[1523]: I1002 20:45:47.354403 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b5b6383-5461-4be0-9516-72cbade21985-clustermesh-secrets\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.354612 kubelet[1523]: I1002 20:45:47.354470 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-host-proc-sys-net\") pod \"cilium-dmwds\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " pod="kube-system/cilium-dmwds" Oct 2 20:45:47.354612 kubelet[1523]: I1002 20:45:47.354489 1523 reconciler.go:169] "Reconciler: start to sync state" Oct 2 20:45:47.363077 systemd[1]: Created slice kubepods-besteffort-pod67741d45_bae5_489f_9412_a62e35d0ff92.slice. 
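Editor's note: each reconciler entry above packs three useful fields into one message — the volume name, the plugin-qualified UniqueName, and the owning pod. A small stdlib sketch that pulls them out; the sample line is a shortened adaptation of one entry above, not a verbatim copy.

```go
package main

import (
	"fmt"
	"regexp"
)

func main() {
	// Shortened sample of one VerifyControllerAttachedVolume entry from the log above.
	sample := `operationExecutor.VerifyControllerAttachedVolume started for volume "cni-path" (UniqueName: "kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cni-path") pod "cilium-dmwds"`
	// Capture groups: volume name, plugin-qualified unique name, owning pod.
	re := regexp.MustCompile(`volume "([^"]+)" \(UniqueName: "([^"]+)"\) pod "([^"]+)"`)
	if m := re.FindStringSubmatch(sample); m != nil {
		fmt.Printf("volume=%s uniqueName=%s pod=%s\n", m[1], m[2], m[3])
	}
}
```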
Oct 2 20:45:47.421418 kubelet[1523]: E1002 20:45:47.421374 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:45:47.971717 env[1130]: time="2023-10-02T20:45:47.971660124Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c49c4,Uid:67741d45-bae5-489f-9412-a62e35d0ff92,Namespace:kube-system,Attempt:0,}" Oct 2 20:45:48.260341 env[1130]: time="2023-10-02T20:45:48.260031641Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dmwds,Uid:2b5b6383-5461-4be0-9516-72cbade21985,Namespace:kube-system,Attempt:0,}" Oct 2 20:45:48.332753 kubelet[1523]: E1002 20:45:48.332689 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:48.445252 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564731277.mount: Deactivated successfully. Oct 2 20:45:48.455102 env[1130]: time="2023-10-02T20:45:48.455039846Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:48.456476 env[1130]: time="2023-10-02T20:45:48.456432865Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:48.460612 env[1130]: time="2023-10-02T20:45:48.460529334Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:48.461797 env[1130]: time="2023-10-02T20:45:48.461758405Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:48.464969 env[1130]: time="2023-10-02T20:45:48.464931120Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:48.467457 env[1130]: time="2023-10-02T20:45:48.467400428Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:48.468395 env[1130]: time="2023-10-02T20:45:48.468358934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:48.477960 env[1130]: time="2023-10-02T20:45:48.477872382Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:48.510947 env[1130]: time="2023-10-02T20:45:48.510712807Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:45:48.510947 env[1130]: time="2023-10-02T20:45:48.510850356Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:45:48.510947 env[1130]: time="2023-10-02T20:45:48.510889471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:45:48.511933 env[1130]: time="2023-10-02T20:45:48.511855573Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/46e1f98a535f99059f8ffd04b12a685d22113d8e59bd5185182ebfde30d0cb24 pid=1617 runtime=io.containerd.runc.v2 Oct 2 20:45:48.521550 env[1130]: time="2023-10-02T20:45:48.519026848Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:45:48.521550 env[1130]: time="2023-10-02T20:45:48.519165737Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:45:48.521550 env[1130]: time="2023-10-02T20:45:48.519246585Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:45:48.521550 env[1130]: time="2023-10-02T20:45:48.519461685Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5 pid=1629 runtime=io.containerd.runc.v2 Oct 2 20:45:48.541958 systemd[1]: Started cri-containerd-46e1f98a535f99059f8ffd04b12a685d22113d8e59bd5185182ebfde30d0cb24.scope. Oct 2 20:45:48.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.587762 kernel: audit: type=1400 audit(1696279548.564:646): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.564000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.564000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.564000 audit: BPF prog-id=73 op=LOAD Oct 2 20:45:48.570000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.570000 audit[1638]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000117c48 a2=10 a3=1c items=0 ppid=1617 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:48.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436653166393861353335663939303539663866666430346231326136 Oct 2 20:45:48.570000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.570000 audit[1638]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001176b0 a2=3c a3=c items=0 ppid=1617 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:48.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436653166393861353335663939303539663866666430346231326136 Oct 2 20:45:48.570000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.570000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.570000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.570000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.570000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.570000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.570000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 20:45:48.570000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.570000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.595211 systemd[1]: Started cri-containerd-10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5.scope. Oct 2 20:45:48.570000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.570000 audit: BPF prog-id=74 op=LOAD Oct 2 20:45:48.570000 audit[1638]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001179d8 a2=78 a3=c0003c8880 items=0 ppid=1617 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:48.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436653166393861353335663939303539663866666430346231326136 Oct 2 20:45:48.604000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.604000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.604000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.604000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.604000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.604000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.604000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.604000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.604000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.604000 audit: BPF prog-id=75 op=LOAD Oct 2 20:45:48.604000 audit[1638]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000117770 
a2=78 a3=c0003c88c8 items=0 ppid=1617 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:48.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436653166393861353335663939303539663866666430346231326136 Oct 2 20:45:48.605000 audit: BPF prog-id=75 op=UNLOAD Oct 2 20:45:48.605000 audit: BPF prog-id=74 op=UNLOAD Oct 2 20:45:48.605000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.605000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.605000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.605000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.605000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.605000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.605000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.605000 audit[1638]: AVC avc: denied { perfmon } for pid=1638 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.605000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.605000 audit[1638]: AVC avc: denied { bpf } for pid=1638 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.605000 audit: BPF prog-id=76 op=LOAD Oct 2 20:45:48.605000 audit[1638]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000117c30 a2=78 a3=c0003c8cd8 items=0 ppid=1617 pid=1638 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:48.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3436653166393861353335663939303539663866666430346231326136 Oct 2 20:45:48.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.616000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.616000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.616000 audit: BPF prog-id=77 op=LOAD Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=1629 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:48.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130656632613734313232303631383932653239643631646266616163 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=1629 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:48.617000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130656632613734313232303631383932653239643631646266616163 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit: BPF prog-id=78 op=LOAD Oct 2 20:45:48.617000 audit[1652]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c00026ec20 items=0 ppid=1629 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:48.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130656632613734313232303631383932653239643631646266616163 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { perfmon } 
for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.617000 audit: BPF prog-id=79 op=LOAD Oct 2 20:45:48.617000 audit[1652]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c00026ec68 items=0 ppid=1629 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:48.617000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130656632613734313232303631383932653239643631646266616163 Oct 2 20:45:48.618000 audit: BPF prog-id=79 op=UNLOAD Oct 2 20:45:48.618000 audit: BPF prog-id=78 op=UNLOAD Oct 2 20:45:48.618000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.618000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.618000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.618000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.618000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.618000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.618000 audit[1652]: AVC avc: denied { perfmon } 
for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.618000 audit[1652]: AVC avc: denied { perfmon } for pid=1652 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.618000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.618000 audit[1652]: AVC avc: denied { bpf } for pid=1652 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:45:48.618000 audit: BPF prog-id=80 op=LOAD Oct 2 20:45:48.618000 audit[1652]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c00026f078 items=0 ppid=1629 pid=1652 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:45:48.618000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130656632613734313232303631383932653239643631646266616163 Oct 2 20:45:48.646426 env[1130]: time="2023-10-02T20:45:48.646357836Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-c49c4,Uid:67741d45-bae5-489f-9412-a62e35d0ff92,Namespace:kube-system,Attempt:0,} returns sandbox id \"46e1f98a535f99059f8ffd04b12a685d22113d8e59bd5185182ebfde30d0cb24\"" Oct 2 20:45:48.652485 kubelet[1523]: E1002 20:45:48.651605 1523 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Oct 2 20:45:48.653060 env[1130]: time="2023-10-02T20:45:48.652132423Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 20:45:48.656212 env[1130]: time="2023-10-02T20:45:48.656159663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-dmwds,Uid:2b5b6383-5461-4be0-9516-72cbade21985,Namespace:kube-system,Attempt:0,} returns sandbox id \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\"" Oct 2 20:45:49.333637 kubelet[1523]: E1002 20:45:49.333571 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:49.663197 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1075644132.mount: Deactivated successfully. 
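Editor's note: the audit PROCTITLE records above carry the full runc command line hex-encoded, with NUL bytes separating the argv entries. A quick sketch to make them readable; the constant below is only the leading portion of one hex string logged above, truncated for brevity.

```go
package main

import (
	"encoding/hex"
	"fmt"
	"strings"
)

func main() {
	// Leading portion of one proctitle value from the audit records above.
	const proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
	raw, err := hex.DecodeString(proctitle)
	if err != nil {
		panic(err)
	}
	// auditd joins argv with NUL bytes before hex-encoding.
	fmt.Println(strings.Join(strings.Split(string(raw), "\x00"), " "))
	// Prints: runc --root /run/containerd/runc/k8s.io
}
```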
Oct 2 20:45:50.184440 env[1130]: time="2023-10-02T20:45:50.184344198Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:50.187522 env[1130]: time="2023-10-02T20:45:50.187467423Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:50.189881 env[1130]: time="2023-10-02T20:45:50.189835460Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:50.192543 env[1130]: time="2023-10-02T20:45:50.192500070Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:50.193268 env[1130]: time="2023-10-02T20:45:50.193224225Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2\"" Oct 2 20:45:50.194700 kubelet[1523]: E1002 20:45:50.194026 1523 kuberuntime_manager.go:862] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.25.14,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-f7kqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-c49c4_kube-system(67741d45-bae5-489f-9412-a62e35d0ff92): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Oct 2 20:45:50.195268 kubelet[1523]: E1002 20:45:50.194123 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-c49c4" podUID=67741d45-bae5-489f-9412-a62e35d0ff92 Oct 2 20:45:50.195330 env[1130]: 
time="2023-10-02T20:45:50.194952277Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 20:45:50.333961 kubelet[1523]: E1002 20:45:50.333881 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:50.638318 kubelet[1523]: E1002 20:45:50.638116 1523 kuberuntime_manager.go:862] container &Container{Name:kube-proxy,Image:registry.k8s.io/kube-proxy:v1.25.14,Command:[/usr/local/bin/kube-proxy --config=/var/lib/kube-proxy/config.conf --hostname-override=$(NODE_NAME)],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-proxy,ReadOnly:false,MountPath:/var/lib/kube-proxy,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-f7kqt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:*true,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod kube-proxy-c49c4_kube-system(67741d45-bae5-489f-9412-a62e35d0ff92): CreateContainerConfigError: services have not yet been read at least once, cannot construct envvars Oct 2 20:45:50.638597 kubelet[1523]: E1002 20:45:50.638169 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-proxy\" with CreateContainerConfigError: \"services have not yet been read at least once, cannot construct envvars\"" pod="kube-system/kube-proxy-c49c4" podUID=67741d45-bae5-489f-9412-a62e35d0ff92 Oct 2 20:45:51.334647 kubelet[1523]: E1002 20:45:51.334580 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:52.319935 kubelet[1523]: E1002 20:45:52.319885 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:52.335267 kubelet[1523]: E1002 20:45:52.335196 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:52.423562 kubelet[1523]: E1002 20:45:52.423525 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:45:53.335478 kubelet[1523]: E1002 20:45:53.335394 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:54.336170 kubelet[1523]: E1002 
20:45:54.336117 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:55.332927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3466605334.mount: Deactivated successfully. Oct 2 20:45:55.337283 kubelet[1523]: E1002 20:45:55.337191 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:56.337864 kubelet[1523]: E1002 20:45:56.337786 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:57.338010 kubelet[1523]: E1002 20:45:57.337958 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:57.425354 kubelet[1523]: E1002 20:45:57.425309 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:45:58.339098 kubelet[1523]: E1002 20:45:58.339042 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:58.555833 env[1130]: time="2023-10-02T20:45:58.555744629Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:58.558855 env[1130]: time="2023-10-02T20:45:58.558800598Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:58.561146 env[1130]: time="2023-10-02T20:45:58.561103173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:45:58.562013 env[1130]: time="2023-10-02T20:45:58.561959297Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88\"" Oct 2 20:45:58.565122 env[1130]: time="2023-10-02T20:45:58.565068597Z" level=info msg="CreateContainer within sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:45:58.578340 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount366427057.mount: Deactivated successfully. Oct 2 20:45:58.587317 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount541518427.mount: Deactivated successfully. Oct 2 20:45:58.593963 env[1130]: time="2023-10-02T20:45:58.593283249Z" level=info msg="CreateContainer within sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c\"" Oct 2 20:45:58.594463 env[1130]: time="2023-10-02T20:45:58.594425286Z" level=info msg="StartContainer for \"1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c\"" Oct 2 20:45:58.622779 systemd[1]: Started cri-containerd-1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c.scope. 
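Editor's note: the two CreateContainerConfigError entries above fail with "services have not yet been read at least once, cannot construct envvars": before starting a container the kubelet builds the per-Service environment variables, and it will not build any environment until it has listed Services from the apiserver at least once. The only explicit variable in the logged kube-proxy spec is NODE_NAME, filled from the downward API; a sketch of that one field in Go API types, reconstructed from the spec dump above rather than from an authoritative manifest.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// NODE_NAME as it appears in the kube-proxy container spec logged above:
	// its value is taken from the downward API field spec.nodeName.
	env := corev1.EnvVar{
		Name: "NODE_NAME",
		ValueFrom: &corev1.EnvVarSource{
			FieldRef: &corev1.ObjectFieldSelector{
				APIVersion: "v1",
				FieldPath:  "spec.nodeName",
			},
		},
	}
	fmt.Printf("%s <- fieldRef %s\n", env.Name, env.ValueFrom.FieldRef.FieldPath)
}
```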
Oct 2 20:45:58.639227 systemd[1]: cri-containerd-1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c.scope: Deactivated successfully. Oct 2 20:45:59.339963 kubelet[1523]: E1002 20:45:59.339888 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:45:59.574852 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c-rootfs.mount: Deactivated successfully. Oct 2 20:46:00.340474 kubelet[1523]: E1002 20:46:00.340399 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:00.630783 env[1130]: time="2023-10-02T20:46:00.630341159Z" level=error msg="get state for 1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c" error="context deadline exceeded: unknown" Oct 2 20:46:00.630783 env[1130]: time="2023-10-02T20:46:00.630477516Z" level=warning msg="unknown status" status=0 Oct 2 20:46:00.703054 env[1130]: time="2023-10-02T20:46:00.702979723Z" level=info msg="shim disconnected" id=1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c Oct 2 20:46:00.703342 env[1130]: time="2023-10-02T20:46:00.703294628Z" level=warning msg="cleaning up after shim disconnected" id=1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c namespace=k8s.io Oct 2 20:46:00.703342 env[1130]: time="2023-10-02T20:46:00.703323586Z" level=info msg="cleaning up dead shim" Oct 2 20:46:00.715232 env[1130]: time="2023-10-02T20:46:00.715163648Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:46:00Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1715 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:46:00Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:46:00.715616 env[1130]: time="2023-10-02T20:46:00.715490142Z" level=error msg="copy shim log" error="read /proc/self/fd/48: file already closed" Oct 2 20:46:00.715912 env[1130]: time="2023-10-02T20:46:00.715849772Z" level=error msg="Failed to pipe stderr of container \"1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c\"" error="reading from a closed fifo" Oct 2 20:46:00.720931 env[1130]: time="2023-10-02T20:46:00.720847482Z" level=error msg="Failed to pipe stdout of container \"1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c\"" error="reading from a closed fifo" Oct 2 20:46:00.723611 env[1130]: time="2023-10-02T20:46:00.723540116Z" level=error msg="StartContainer for \"1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:46:00.723944 kubelet[1523]: E1002 20:46:00.723910 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c" 
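Editor's note: the StartContainer failure above bottoms out in runc writing the container's SELinux keyring label to /proc/self/attr/keycreate and getting EINVAL back, typically because the loaded policy rejects the requested context. A minimal sketch that reproduces just that write on an SELinux host; only the type spc_t and level s0 appear in the init-container spec that follows, the user and role fields below are assumptions.

```go
package main

import (
	"fmt"
	"os"
)

func main() {
	// runc labels the keyring of the new container process by writing its
	// SELinux context here before exec; the kubelet error above is this
	// exact write failing with "invalid argument".
	label := []byte("system_u:system_r:spc_t:s0")
	err := os.WriteFile("/proc/self/attr/keycreate", label, 0o644)
	fmt.Println("write /proc/self/attr/keycreate:", err)
}
```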
Oct 2 20:46:00.724097 kubelet[1523]: E1002 20:46:00.724074 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:46:00.724097 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:46:00.724097 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 20:46:00.724097 kubelet[1523]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2xrqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:46:00.724435 kubelet[1523]: E1002 20:46:00.724135 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:46:00.793432 update_engine[1122]: I1002 20:46:00.793344 1122 update_attempter.cc:505] Updating boot flags... 
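Editor's note: for reference, the mount-cgroup init container that keeps failing runs the three-line shell script embedded in the spec dump above. Expressed as a Go sketch below; the script and both environment values are copied from the log, and it only makes sense inside that container's mount namespace, so this is illustrative rather than something to run on the host.

```go
package main

import (
	"os"
	"os/exec"
)

func main() {
	// Script and env copied from the logged mount-cgroup init container spec.
	script := `cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount`
	cmd := exec.Command("sh", "-ec", script)
	cmd.Env = append(os.Environ(),
		"CGROUP_ROOT=/run/cilium/cgroupv2",
		"BIN_PATH=/opt/cni/bin",
	)
	cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
	// In this pod the script never gets to run: the runtime fails earlier,
	// at the keycreate write, before the container process is started.
	_ = cmd.Run()
}
```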
Oct 2 20:46:01.341550 kubelet[1523]: E1002 20:46:01.341490 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:01.679164 env[1130]: time="2023-10-02T20:46:01.678827644Z" level=info msg="CreateContainer within sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:46:01.699931 env[1130]: time="2023-10-02T20:46:01.699861969Z" level=info msg="CreateContainer within sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0\"" Oct 2 20:46:01.701023 env[1130]: time="2023-10-02T20:46:01.700979356Z" level=info msg="StartContainer for \"390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0\"" Oct 2 20:46:01.738684 systemd[1]: Started cri-containerd-390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0.scope. Oct 2 20:46:01.752811 systemd[1]: cri-containerd-390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0.scope: Deactivated successfully. Oct 2 20:46:01.762016 env[1130]: time="2023-10-02T20:46:01.761935876Z" level=info msg="shim disconnected" id=390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0 Oct 2 20:46:01.762016 env[1130]: time="2023-10-02T20:46:01.762005059Z" level=warning msg="cleaning up after shim disconnected" id=390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0 namespace=k8s.io Oct 2 20:46:01.762016 env[1130]: time="2023-10-02T20:46:01.762020837Z" level=info msg="cleaning up dead shim" Oct 2 20:46:01.775091 env[1130]: time="2023-10-02T20:46:01.775013105Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:46:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1768 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:46:01Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:46:01.775453 env[1130]: time="2023-10-02T20:46:01.775356628Z" level=error msg="copy shim log" error="read /proc/self/fd/48: file already closed" Oct 2 20:46:01.775808 env[1130]: time="2023-10-02T20:46:01.775754035Z" level=error msg="Failed to pipe stderr of container \"390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0\"" error="reading from a closed fifo" Oct 2 20:46:01.777855 env[1130]: time="2023-10-02T20:46:01.777788122Z" level=error msg="Failed to pipe stdout of container \"390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0\"" error="reading from a closed fifo" Oct 2 20:46:01.780516 env[1130]: time="2023-10-02T20:46:01.780441950Z" level=error msg="StartContainer for \"390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:46:01.780894 kubelet[1523]: E1002 20:46:01.780840 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to 
start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0" Oct 2 20:46:01.781057 kubelet[1523]: E1002 20:46:01.780984 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:46:01.781057 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:46:01.781057 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 20:46:01.781057 kubelet[1523]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2xrqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:46:01.781435 kubelet[1523]: E1002 20:46:01.781039 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:46:02.342096 kubelet[1523]: E1002 20:46:02.342028 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:02.426763 kubelet[1523]: E1002 20:46:02.426701 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:46:02.681039 kubelet[1523]: I1002 20:46:02.680295 1523 scope.go:115] "RemoveContainer" containerID="1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c" Oct 2 20:46:02.681039 kubelet[1523]: I1002 20:46:02.680640 1523 scope.go:115] "RemoveContainer" 
containerID="1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c" Oct 2 20:46:02.682454 env[1130]: time="2023-10-02T20:46:02.682401553Z" level=info msg="RemoveContainer for \"1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c\"" Oct 2 20:46:02.683504 env[1130]: time="2023-10-02T20:46:02.683006654Z" level=info msg="RemoveContainer for \"1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c\"" Oct 2 20:46:02.683800 env[1130]: time="2023-10-02T20:46:02.683745575Z" level=error msg="RemoveContainer for \"1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c\" failed" error="failed to set removing state for container \"1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c\": container is already in removing state" Oct 2 20:46:02.683997 kubelet[1523]: E1002 20:46:02.683970 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c\": container is already in removing state" containerID="1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c" Oct 2 20:46:02.684115 kubelet[1523]: E1002 20:46:02.684034 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c": container is already in removing state; Skipping pod "cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)" Oct 2 20:46:02.684565 kubelet[1523]: E1002 20:46:02.684534 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:46:02.687869 env[1130]: time="2023-10-02T20:46:02.687829061Z" level=info msg="RemoveContainer for \"1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c\" returns successfully" Oct 2 20:46:02.691063 systemd[1]: run-containerd-runc-k8s.io-390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0-runc.c83H5q.mount: Deactivated successfully. Oct 2 20:46:02.691205 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0-rootfs.mount: Deactivated successfully. 
Oct 2 20:46:03.342786 kubelet[1523]: E1002 20:46:03.342721 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:03.684486 kubelet[1523]: E1002 20:46:03.684361 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:46:03.737274 kubelet[1523]: W1002 20:46:03.737197 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b5b6383_5461_4be0_9516_72cbade21985.slice/cri-containerd-1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c.scope WatchSource:0}: container "1f57ae3947bafb16276d6c33f824ba693068a1ecbe0713e0a712c84f6cfd627c" in namespace "k8s.io": not found Oct 2 20:46:04.343791 kubelet[1523]: E1002 20:46:04.343711 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:05.344018 kubelet[1523]: E1002 20:46:05.343934 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:05.602185 env[1130]: time="2023-10-02T20:46:05.601972171Z" level=info msg="CreateContainer within sandbox \"46e1f98a535f99059f8ffd04b12a685d22113d8e59bd5185182ebfde30d0cb24\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 20:46:05.619288 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2046199535.mount: Deactivated successfully. Oct 2 20:46:05.630057 env[1130]: time="2023-10-02T20:46:05.629990996Z" level=info msg="CreateContainer within sandbox \"46e1f98a535f99059f8ffd04b12a685d22113d8e59bd5185182ebfde30d0cb24\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"01690fc2c4ae8b43d98a1030ea85883a58ade5b4dcee7d44288fb8e235e309ab\"" Oct 2 20:46:05.630998 env[1130]: time="2023-10-02T20:46:05.630955899Z" level=info msg="StartContainer for \"01690fc2c4ae8b43d98a1030ea85883a58ade5b4dcee7d44288fb8e235e309ab\"" Oct 2 20:46:05.662285 systemd[1]: Started cri-containerd-01690fc2c4ae8b43d98a1030ea85883a58ade5b4dcee7d44288fb8e235e309ab.scope. 
Oct 2 20:46:05.691781 kernel: kauditd_printk_skb: 113 callbacks suppressed Oct 2 20:46:05.691962 kernel: audit: type=1400 audit(1696279565.684:682): avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.684000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.684000 audit[1790]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c00014d6b0 a2=3c a3=8 items=0 ppid=1617 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:05.743662 kernel: audit: type=1300 audit(1696279565.684:682): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c00014d6b0 a2=3c a3=8 items=0 ppid=1617 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:05.743851 kernel: audit: type=1327 audit(1696279565.684:682): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363930666332633461653862343364393861313033306561383538 Oct 2 20:46:05.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363930666332633461653862343364393861313033306561383538 Oct 2 20:46:05.773542 kernel: audit: type=1400 audit(1696279565.690:683): avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.690000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.690000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.813590 kernel: audit: type=1400 audit(1696279565.690:683): avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.813785 kernel: audit: type=1400 audit(1696279565.690:683): avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.690000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.690000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.854681 kernel: audit: type=1400 audit(1696279565.690:683): avc: denied { 
perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.854819 kernel: audit: type=1400 audit(1696279565.690:683): avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.690000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.875856 kernel: audit: type=1400 audit(1696279565.690:683): avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.690000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.690000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.690000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.690000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.690000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.690000 audit: BPF prog-id=81 op=LOAD Oct 2 20:46:05.690000 audit[1790]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00014d9d8 a2=78 a3=c000273c70 items=0 ppid=1617 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:05.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363930666332633461653862343364393861313033306561383538 Oct 2 20:46:05.710000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.710000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.918892 kernel: audit: type=1400 audit(1696279565.690:683): avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.710000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.710000 audit[1790]: AVC avc: denied { 
perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.710000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.710000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.710000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.710000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.710000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.710000 audit: BPF prog-id=82 op=LOAD Oct 2 20:46:05.710000 audit[1790]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c00014d770 a2=78 a3=c0003885f8 items=0 ppid=1617 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:05.710000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363930666332633461653862343364393861313033306561383538 Oct 2 20:46:05.792000 audit: BPF prog-id=82 op=UNLOAD Oct 2 20:46:05.792000 audit: BPF prog-id=81 op=UNLOAD Oct 2 20:46:05.792000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.792000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.792000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.792000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.792000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.792000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.792000 audit[1790]: AVC avc: denied { perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.792000 audit[1790]: AVC avc: denied { 
perfmon } for pid=1790 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.792000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.792000 audit[1790]: AVC avc: denied { bpf } for pid=1790 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:46:05.792000 audit: BPF prog-id=83 op=LOAD Oct 2 20:46:05.792000 audit[1790]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c00014dc30 a2=78 a3=c000388688 items=0 ppid=1617 pid=1790 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:05.792000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3031363930666332633461653862343364393861313033306561383538 Oct 2 20:46:05.928045 env[1130]: time="2023-10-02T20:46:05.927987557Z" level=info msg="StartContainer for \"01690fc2c4ae8b43d98a1030ea85883a58ade5b4dcee7d44288fb8e235e309ab\" returns successfully" Oct 2 20:46:05.991855 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 20:46:05.992035 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 20:46:05.992088 kernel: IPVS: ipvs loaded. Oct 2 20:46:06.009762 kernel: IPVS: [rr] scheduler registered. Oct 2 20:46:06.022759 kernel: IPVS: [wrr] scheduler registered. Oct 2 20:46:06.034762 kernel: IPVS: [sh] scheduler registered. 
Oct 2 20:46:06.085000 audit[1848]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1848 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.085000 audit[1848]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd86739cc0 a2=0 a3=7ffd86739cac items=0 ppid=1800 pid=1848 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.085000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 20:46:06.087000 audit[1849]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=1849 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.087000 audit[1849]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc67dbd5b0 a2=0 a3=7ffc67dbd59c items=0 ppid=1800 pid=1849 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.087000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 20:46:06.090000 audit[1850]: NETFILTER_CFG table=mangle:37 family=10 entries=1 op=nft_register_chain pid=1850 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.090000 audit[1850]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe08ea3560 a2=0 a3=7ffe08ea354c items=0 ppid=1800 pid=1850 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.090000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 20:46:06.092000 audit[1851]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_chain pid=1851 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.092000 audit[1851]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc6a5a0d70 a2=0 a3=7ffc6a5a0d5c items=0 ppid=1800 pid=1851 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 20:46:06.095000 audit[1852]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=1852 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.095000 audit[1852]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe933da9e0 a2=0 a3=7ffe933da9cc items=0 ppid=1800 pid=1852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.095000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 20:46:06.097000 audit[1853]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1853 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 
20:46:06.097000 audit[1853]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffe85a5ba60 a2=0 a3=7ffe85a5ba4c items=0 ppid=1800 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.097000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 20:46:06.193000 audit[1854]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1854 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.193000 audit[1854]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcc0f9f560 a2=0 a3=7ffcc0f9f54c items=0 ppid=1800 pid=1854 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.193000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 20:46:06.197000 audit[1856]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1856 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.197000 audit[1856]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffed6b880a0 a2=0 a3=7ffed6b8808c items=0 ppid=1800 pid=1856 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.197000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 20:46:06.203000 audit[1859]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1859 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.203000 audit[1859]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffc8fed1690 a2=0 a3=7ffc8fed167c items=0 ppid=1800 pid=1859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.203000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 20:46:06.205000 audit[1860]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1860 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.205000 audit[1860]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff48bef3d0 a2=0 a3=7fff48bef3bc items=0 ppid=1800 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.205000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 20:46:06.209000 audit[1862]: 
NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.209000 audit[1862]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffb97dbd30 a2=0 a3=7fffb97dbd1c items=0 ppid=1800 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.209000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 20:46:06.210000 audit[1863]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.210000 audit[1863]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe9ebbcdb0 a2=0 a3=7ffe9ebbcd9c items=0 ppid=1800 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.210000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 20:46:06.214000 audit[1865]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.214000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffc1d544560 a2=0 a3=7ffc1d54454c items=0 ppid=1800 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.214000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 20:46:06.219000 audit[1868]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1868 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.219000 audit[1868]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffd5d1251d0 a2=0 a3=7ffd5d1251bc items=0 ppid=1800 pid=1868 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.219000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 20:46:06.220000 audit[1869]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1869 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.220000 audit[1869]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fff4aec4a30 a2=0 a3=7fff4aec4a1c items=0 ppid=1800 pid=1869 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.220000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 20:46:06.224000 audit[1871]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.224000 audit[1871]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffe550a2970 a2=0 a3=7ffe550a295c items=0 ppid=1800 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.224000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 20:46:06.226000 audit[1872]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=1872 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.226000 audit[1872]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff90c21d80 a2=0 a3=7fff90c21d6c items=0 ppid=1800 pid=1872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 20:46:06.230000 audit[1874]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.230000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff41c91dd0 a2=0 a3=7fff41c91dbc items=0 ppid=1800 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.230000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:46:06.236000 audit[1877]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1877 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.236000 audit[1877]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff45ea6e40 a2=0 a3=7fff45ea6e2c items=0 ppid=1800 pid=1877 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.236000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:46:06.241000 audit[1880]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.241000 audit[1880]: SYSCALL 
arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffc93816ef0 a2=0 a3=7ffc93816edc items=0 ppid=1800 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.241000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 20:46:06.242000 audit[1881]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1881 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.242000 audit[1881]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffed824f1c0 a2=0 a3=7ffed824f1ac items=0 ppid=1800 pid=1881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.242000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 20:46:06.245000 audit[1883]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.245000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffecb241760 a2=0 a3=7ffecb24174c items=0 ppid=1800 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.245000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:46:06.250000 audit[1886]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1886 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 20:46:06.250000 audit[1886]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff61f1af10 a2=0 a3=7fff61f1aefc items=0 ppid=1800 pid=1886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.250000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:46:06.264000 audit[1890]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:46:06.264000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fffcd960e90 a2=0 a3=7fffcd960e7c items=0 ppid=1800 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.264000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:46:06.276000 audit[1890]: 
NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1890 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 20:46:06.276000 audit[1890]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fffcd960e90 a2=0 a3=7fffcd960e7c items=0 ppid=1800 pid=1890 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.276000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:46:06.284000 audit[1894]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.284000 audit[1894]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7ffcf7d8dfc0 a2=0 a3=7ffcf7d8dfac items=0 ppid=1800 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.284000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 20:46:06.288000 audit[1896]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1896 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.288000 audit[1896]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffdb8792090 a2=0 a3=7ffdb879207c items=0 ppid=1800 pid=1896 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.288000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 20:46:06.293000 audit[1899]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1899 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.293000 audit[1899]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fffd57a2c10 a2=0 a3=7fffd57a2bfc items=0 ppid=1800 pid=1899 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.293000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 20:46:06.294000 audit[1900]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1900 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.294000 audit[1900]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc3549040 a2=0 a3=7ffdc354902c items=0 ppid=1800 pid=1900 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.294000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 20:46:06.298000 audit[1902]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1902 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.298000 audit[1902]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffda17c2750 a2=0 a3=7ffda17c273c items=0 ppid=1800 pid=1902 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.298000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 20:46:06.299000 audit[1903]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1903 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.299000 audit[1903]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffeb9618630 a2=0 a3=7ffeb961861c items=0 ppid=1800 pid=1903 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.299000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 20:46:06.303000 audit[1905]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.303000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff62d2f960 a2=0 a3=7fff62d2f94c items=0 ppid=1800 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.303000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 20:46:06.308000 audit[1908]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1908 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.308000 audit[1908]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffef73a78f0 a2=0 a3=7ffef73a78dc items=0 ppid=1800 pid=1908 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.308000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 20:46:06.310000 audit[1909]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1909 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.310000 audit[1909]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdf3479ce0 a2=0 a3=7ffdf3479ccc 
items=0 ppid=1800 pid=1909 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.310000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 20:46:06.314000 audit[1911]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.314000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fffdbb314d0 a2=0 a3=7fffdbb314bc items=0 ppid=1800 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 20:46:06.316000 audit[1912]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1912 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.316000 audit[1912]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc4b20d130 a2=0 a3=7ffc4b20d11c items=0 ppid=1800 pid=1912 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.316000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 20:46:06.320000 audit[1914]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1914 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.320000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffa4157d80 a2=0 a3=7fffa4157d6c items=0 ppid=1800 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.320000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 20:46:06.325000 audit[1917]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1917 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.325000 audit[1917]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffdc1582c40 a2=0 a3=7ffdc1582c2c items=0 ppid=1800 pid=1917 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.325000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 20:46:06.330000 audit[1920]: NETFILTER_CFG 
table=filter:73 family=10 entries=1 op=nft_register_rule pid=1920 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.330000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fff29ad0d70 a2=0 a3=7fff29ad0d5c items=0 ppid=1800 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.330000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 20:46:06.331000 audit[1921]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1921 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.331000 audit[1921]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffe07bf07e0 a2=0 a3=7ffe07bf07cc items=0 ppid=1800 pid=1921 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.331000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 20:46:06.335000 audit[1923]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.335000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe1b883020 a2=0 a3=7ffe1b88300c items=0 ppid=1800 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.335000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:46:06.339000 audit[1926]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1926 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 20:46:06.339000 audit[1926]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff776f9b50 a2=0 a3=7fff776f9b3c items=0 ppid=1800 pid=1926 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.339000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 20:46:06.344878 kubelet[1523]: E1002 20:46:06.344823 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:06.347000 audit[1930]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 20:46:06.347000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7fffdc8122f0 a2=0 a3=7fffdc8122dc items=0 ppid=1800 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.347000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:46:06.348000 audit[1930]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1930 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 20:46:06.348000 audit[1930]: SYSCALL arch=c000003e syscall=46 success=yes exit=1860 a0=3 a1=7fffdc8122f0 a2=0 a3=7fffdc8122dc items=0 ppid=1800 pid=1930 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:46:06.348000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 20:46:06.844899 kubelet[1523]: W1002 20:46:06.844749 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b5b6383_5461_4be0_9516_72cbade21985.slice/cri-containerd-390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0.scope WatchSource:0}: task 390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0 not found: not found Oct 2 20:46:07.345504 kubelet[1523]: E1002 20:46:07.345431 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:07.427747 kubelet[1523]: E1002 20:46:07.427688 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:46:08.346304 kubelet[1523]: E1002 20:46:08.346237 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:09.346544 kubelet[1523]: E1002 20:46:09.346472 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:10.347639 kubelet[1523]: E1002 20:46:10.347566 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:11.348159 kubelet[1523]: E1002 20:46:11.348093 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:12.320189 kubelet[1523]: E1002 20:46:12.320115 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:12.348572 kubelet[1523]: E1002 20:46:12.348510 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:12.429343 kubelet[1523]: E1002 20:46:12.429304 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:46:13.348994 kubelet[1523]: E1002 20:46:13.348922 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:14.349346 kubelet[1523]: E1002 20:46:14.349271 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
20:46:15.350376 kubelet[1523]: E1002 20:46:15.350300 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:16.351498 kubelet[1523]: E1002 20:46:16.351424 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:17.352358 kubelet[1523]: E1002 20:46:17.352282 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:17.430466 kubelet[1523]: E1002 20:46:17.430419 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:46:18.352994 kubelet[1523]: E1002 20:46:18.352919 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:18.603378 env[1130]: time="2023-10-02T20:46:18.602940377Z" level=info msg="CreateContainer within sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:46:18.621110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3182618765.mount: Deactivated successfully. Oct 2 20:46:18.629383 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount878013577.mount: Deactivated successfully. Oct 2 20:46:18.633199 env[1130]: time="2023-10-02T20:46:18.633129087Z" level=info msg="CreateContainer within sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683\"" Oct 2 20:46:18.634308 env[1130]: time="2023-10-02T20:46:18.634266383Z" level=info msg="StartContainer for \"b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683\"" Oct 2 20:46:18.663411 systemd[1]: Started cri-containerd-b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683.scope. Oct 2 20:46:18.678143 systemd[1]: cri-containerd-b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683.scope: Deactivated successfully. 
Oct 2 20:46:18.709196 env[1130]: time="2023-10-02T20:46:18.709085368Z" level=info msg="shim disconnected" id=b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683 Oct 2 20:46:18.709196 env[1130]: time="2023-10-02T20:46:18.709176432Z" level=warning msg="cleaning up after shim disconnected" id=b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683 namespace=k8s.io Oct 2 20:46:18.709196 env[1130]: time="2023-10-02T20:46:18.709193523Z" level=info msg="cleaning up dead shim" Oct 2 20:46:18.721321 env[1130]: time="2023-10-02T20:46:18.721230911Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:46:18Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1957 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:46:18Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:46:18.721676 env[1130]: time="2023-10-02T20:46:18.721595135Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:46:18.725849 env[1130]: time="2023-10-02T20:46:18.725780982Z" level=error msg="Failed to pipe stdout of container \"b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683\"" error="reading from a closed fifo" Oct 2 20:46:18.726895 env[1130]: time="2023-10-02T20:46:18.726838326Z" level=error msg="Failed to pipe stderr of container \"b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683\"" error="reading from a closed fifo" Oct 2 20:46:18.729467 env[1130]: time="2023-10-02T20:46:18.729401137Z" level=error msg="StartContainer for \"b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:46:18.729816 kubelet[1523]: E1002 20:46:18.729787 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683" Oct 2 20:46:18.730005 kubelet[1523]: E1002 20:46:18.729936 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:46:18.730005 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:46:18.730005 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 20:46:18.730005 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2xrqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:46:18.730307 kubelet[1523]: E1002 20:46:18.729994 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:46:18.811487 kubelet[1523]: I1002 20:46:18.811439 1523 scope.go:115] "RemoveContainer" containerID="390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0" Oct 2 20:46:18.812218 kubelet[1523]: I1002 20:46:18.812188 1523 scope.go:115] "RemoveContainer" containerID="390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0" Oct 2 20:46:18.813998 env[1130]: time="2023-10-02T20:46:18.813938972Z" level=info msg="RemoveContainer for \"390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0\"" Oct 2 20:46:18.814193 env[1130]: time="2023-10-02T20:46:18.814162051Z" level=info msg="RemoveContainer for \"390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0\"" Oct 2 20:46:18.814420 env[1130]: time="2023-10-02T20:46:18.814375348Z" level=error msg="RemoveContainer for \"390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0\" failed" error="failed to set removing state for container \"390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0\": container is already in removing state" Oct 2 20:46:18.814752 kubelet[1523]: E1002 20:46:18.814699 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0\": container is already in removing state" 
containerID="390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0" Oct 2 20:46:18.814885 kubelet[1523]: E1002 20:46:18.814769 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0": container is already in removing state; Skipping pod "cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)" Oct 2 20:46:18.815247 kubelet[1523]: E1002 20:46:18.815206 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:46:18.819050 env[1130]: time="2023-10-02T20:46:18.819006128Z" level=info msg="RemoveContainer for \"390db40aa0c9164741576c672c6db0f8efeadd7be40c4c74a5d708b396ef71c0\" returns successfully" Oct 2 20:46:19.353662 kubelet[1523]: E1002 20:46:19.353589 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:19.616784 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683-rootfs.mount: Deactivated successfully. Oct 2 20:46:20.353898 kubelet[1523]: E1002 20:46:20.353840 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:21.354339 kubelet[1523]: E1002 20:46:21.354268 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:21.813962 kubelet[1523]: W1002 20:46:21.813892 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b5b6383_5461_4be0_9516_72cbade21985.slice/cri-containerd-b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683.scope WatchSource:0}: task b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683 not found: not found Oct 2 20:46:22.355167 kubelet[1523]: E1002 20:46:22.355092 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:22.431736 kubelet[1523]: E1002 20:46:22.431674 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:46:23.355699 kubelet[1523]: E1002 20:46:23.355627 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:24.356137 kubelet[1523]: E1002 20:46:24.356065 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:25.356348 kubelet[1523]: E1002 20:46:25.356250 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:26.357484 kubelet[1523]: E1002 20:46:26.357415 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:27.358142 kubelet[1523]: E1002 20:46:27.358066 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:46:27.432995 kubelet[1523]: E1002 20:46:27.432952 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:46:28.358749 kubelet[1523]: E1002 20:46:28.358664 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:29.359836 kubelet[1523]: E1002 20:46:29.359762 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:30.360878 kubelet[1523]: E1002 20:46:30.360806 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:31.361765 kubelet[1523]: E1002 20:46:31.361673 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:31.599959 kubelet[1523]: E1002 20:46:31.599904 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:46:32.319979 kubelet[1523]: E1002 20:46:32.319905 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:32.362738 kubelet[1523]: E1002 20:46:32.362652 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:32.434184 kubelet[1523]: E1002 20:46:32.434150 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:46:33.363834 kubelet[1523]: E1002 20:46:33.363776 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:34.364381 kubelet[1523]: E1002 20:46:34.364314 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:35.365428 kubelet[1523]: E1002 20:46:35.365353 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:36.365970 kubelet[1523]: E1002 20:46:36.365902 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:37.366231 kubelet[1523]: E1002 20:46:37.366164 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:37.436148 kubelet[1523]: E1002 20:46:37.436109 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:46:38.367146 kubelet[1523]: E1002 20:46:38.367060 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:39.368289 kubelet[1523]: E1002 20:46:39.368216 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:40.368490 kubelet[1523]: E1002 
20:46:40.368416 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:41.369648 kubelet[1523]: E1002 20:46:41.369571 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:42.370423 kubelet[1523]: E1002 20:46:42.370354 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:42.436912 kubelet[1523]: E1002 20:46:42.436849 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:46:43.371137 kubelet[1523]: E1002 20:46:43.371064 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:44.372042 kubelet[1523]: E1002 20:46:44.371969 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:45.372291 kubelet[1523]: E1002 20:46:45.372212 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:45.603412 env[1130]: time="2023-10-02T20:46:45.603349443Z" level=info msg="CreateContainer within sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:46:45.621676 env[1130]: time="2023-10-02T20:46:45.621599583Z" level=info msg="CreateContainer within sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6\"" Oct 2 20:46:45.622627 env[1130]: time="2023-10-02T20:46:45.622294966Z" level=info msg="StartContainer for \"22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6\"" Oct 2 20:46:45.658471 systemd[1]: Started cri-containerd-22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6.scope. Oct 2 20:46:45.670169 systemd[1]: cri-containerd-22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6.scope: Deactivated successfully. Oct 2 20:46:45.676534 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6-rootfs.mount: Deactivated successfully. 
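Every StartContainer attempt above fails with the same runc error, "write /proc/self/attr/keycreate: invalid argument". runc only touches that file when the container spec asks for an SELinux label (the spec dump shows SELinuxOptions with Type:spc_t), so the write failing with EINVAL suggests the node's kernel or loaded policy cannot honour the label the runtime is requesting. A hedged diagnostic sketch; the selinuxfs path and the containerd enable_selinux option are standard locations, not values taken from this log:

  # Is SELinux actually usable on this node?
  test -d /sys/fs/selinux && cat /sys/fs/selinux/enforce || echo "selinuxfs not mounted"
  # Is the containerd CRI plugin configured to apply SELinux labels?
  grep -n "enable_selinux" /etc/containerd/config.toml 2>/dev/null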
Oct 2 20:46:45.686563 env[1130]: time="2023-10-02T20:46:45.686469689Z" level=info msg="shim disconnected" id=22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6 Oct 2 20:46:45.686563 env[1130]: time="2023-10-02T20:46:45.686556784Z" level=warning msg="cleaning up after shim disconnected" id=22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6 namespace=k8s.io Oct 2 20:46:45.686955 env[1130]: time="2023-10-02T20:46:45.686572532Z" level=info msg="cleaning up dead shim" Oct 2 20:46:45.698411 env[1130]: time="2023-10-02T20:46:45.698349435Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:46:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2001 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:46:45Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:46:45.698817 env[1130]: time="2023-10-02T20:46:45.698712706Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:46:45.699893 env[1130]: time="2023-10-02T20:46:45.699822940Z" level=error msg="Failed to pipe stdout of container \"22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6\"" error="reading from a closed fifo" Oct 2 20:46:45.700155 env[1130]: time="2023-10-02T20:46:45.700085223Z" level=error msg="Failed to pipe stderr of container \"22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6\"" error="reading from a closed fifo" Oct 2 20:46:45.703323 env[1130]: time="2023-10-02T20:46:45.703261304Z" level=error msg="StartContainer for \"22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:46:45.703778 kubelet[1523]: E1002 20:46:45.703699 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6" Oct 2 20:46:45.704563 kubelet[1523]: E1002 20:46:45.704065 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:46:45.704563 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:46:45.704563 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 20:46:45.704563 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2xrqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:46:45.705020 kubelet[1523]: E1002 20:46:45.704131 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:46:45.862218 kubelet[1523]: I1002 20:46:45.862178 1523 scope.go:115] "RemoveContainer" containerID="b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683" Oct 2 20:46:45.862662 kubelet[1523]: I1002 20:46:45.862638 1523 scope.go:115] "RemoveContainer" containerID="b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683" Oct 2 20:46:45.864404 env[1130]: time="2023-10-02T20:46:45.864359191Z" level=info msg="RemoveContainer for \"b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683\"" Oct 2 20:46:45.865424 env[1130]: time="2023-10-02T20:46:45.865381615Z" level=info msg="RemoveContainer for \"b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683\"" Oct 2 20:46:45.865603 env[1130]: time="2023-10-02T20:46:45.865506479Z" level=error msg="RemoveContainer for \"b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683\" failed" error="failed to set removing state for container \"b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683\": container is already in removing state" Oct 2 20:46:45.865898 kubelet[1523]: E1002 20:46:45.865875 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683\": container is already in removing state" 
containerID="b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683" Oct 2 20:46:45.866172 kubelet[1523]: E1002 20:46:45.866140 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683": container is already in removing state; Skipping pod "cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)" Oct 2 20:46:45.866841 kubelet[1523]: E1002 20:46:45.866819 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:46:45.868907 env[1130]: time="2023-10-02T20:46:45.868862118Z" level=info msg="RemoveContainer for \"b40b1250e950b9460fec59dbae5b814658f243d3dc71bb96b83e6e62dbd87683\" returns successfully" Oct 2 20:46:46.372519 kubelet[1523]: E1002 20:46:46.372447 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:47.373417 kubelet[1523]: E1002 20:46:47.373357 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:47.438749 kubelet[1523]: E1002 20:46:47.438653 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:46:48.374601 kubelet[1523]: E1002 20:46:48.374528 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:48.792953 kubelet[1523]: W1002 20:46:48.792885 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b5b6383_5461_4be0_9516_72cbade21985.slice/cri-containerd-22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6.scope WatchSource:0}: task 22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6 not found: not found Oct 2 20:46:49.375161 kubelet[1523]: E1002 20:46:49.375076 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:50.375538 kubelet[1523]: E1002 20:46:50.375457 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:51.376708 kubelet[1523]: E1002 20:46:51.376615 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:52.320499 kubelet[1523]: E1002 20:46:52.320424 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:52.377187 kubelet[1523]: E1002 20:46:52.377120 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:52.440106 kubelet[1523]: E1002 20:46:52.440065 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:46:53.378260 kubelet[1523]: E1002 20:46:53.378190 1523 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:54.379361 kubelet[1523]: E1002 20:46:54.379289 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:55.379991 kubelet[1523]: E1002 20:46:55.379925 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:56.380473 kubelet[1523]: E1002 20:46:56.380405 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:57.381163 kubelet[1523]: E1002 20:46:57.381083 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:57.441533 kubelet[1523]: E1002 20:46:57.441487 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:46:58.382067 kubelet[1523]: E1002 20:46:58.381983 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:46:59.383067 kubelet[1523]: E1002 20:46:59.382997 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:00.383882 kubelet[1523]: E1002 20:47:00.383812 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:01.384394 kubelet[1523]: E1002 20:47:01.384320 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:01.600692 kubelet[1523]: E1002 20:47:01.600639 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:47:02.385059 kubelet[1523]: E1002 20:47:02.384983 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:02.442529 kubelet[1523]: E1002 20:47:02.442484 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:47:03.386099 kubelet[1523]: E1002 20:47:03.386028 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:04.386685 kubelet[1523]: E1002 20:47:04.386613 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:05.387779 kubelet[1523]: E1002 20:47:05.387710 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:06.388706 kubelet[1523]: E1002 20:47:06.388638 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:07.389583 kubelet[1523]: E1002 20:47:07.389520 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:07.443774 kubelet[1523]: E1002 20:47:07.443710 1523 
kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:47:08.390089 kubelet[1523]: E1002 20:47:08.390029 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:09.390879 kubelet[1523]: E1002 20:47:09.390815 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:10.391597 kubelet[1523]: E1002 20:47:10.391526 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:11.391835 kubelet[1523]: E1002 20:47:11.391770 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:12.320018 kubelet[1523]: E1002 20:47:12.319951 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:12.392673 kubelet[1523]: E1002 20:47:12.392607 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:12.444748 kubelet[1523]: E1002 20:47:12.444702 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:47:12.601358 kubelet[1523]: E1002 20:47:12.600979 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:47:13.393263 kubelet[1523]: E1002 20:47:13.393193 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:14.393929 kubelet[1523]: E1002 20:47:14.393861 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:15.394399 kubelet[1523]: E1002 20:47:15.394332 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:16.395343 kubelet[1523]: E1002 20:47:16.395264 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:17.395884 kubelet[1523]: E1002 20:47:17.395806 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:17.445852 kubelet[1523]: E1002 20:47:17.445802 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:47:18.396366 kubelet[1523]: E1002 20:47:18.396292 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:19.397266 kubelet[1523]: E1002 20:47:19.397191 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:20.397984 kubelet[1523]: E1002 20:47:20.397915 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 20:47:21.399070 kubelet[1523]: E1002 20:47:21.398994 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:22.400297 kubelet[1523]: E1002 20:47:22.400229 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:22.447126 kubelet[1523]: E1002 20:47:22.447091 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:47:23.401226 kubelet[1523]: E1002 20:47:23.401153 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:23.600128 kubelet[1523]: E1002 20:47:23.600056 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:47:24.401579 kubelet[1523]: E1002 20:47:24.401509 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:25.401866 kubelet[1523]: E1002 20:47:25.401789 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:26.402838 kubelet[1523]: E1002 20:47:26.402768 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:27.403664 kubelet[1523]: E1002 20:47:27.403596 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:27.448745 kubelet[1523]: E1002 20:47:27.448697 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:47:28.404747 kubelet[1523]: E1002 20:47:28.404657 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:29.404903 kubelet[1523]: E1002 20:47:29.404837 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:30.405260 kubelet[1523]: E1002 20:47:30.405189 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:31.406411 kubelet[1523]: E1002 20:47:31.406348 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:32.319919 kubelet[1523]: E1002 20:47:32.319849 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:32.406880 kubelet[1523]: E1002 20:47:32.406813 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:32.449767 kubelet[1523]: E1002 20:47:32.449737 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:47:33.408048 kubelet[1523]: E1002 
20:47:33.407975 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:34.409232 kubelet[1523]: E1002 20:47:34.409156 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:35.409740 kubelet[1523]: E1002 20:47:35.409653 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:36.410515 kubelet[1523]: E1002 20:47:36.410442 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:37.410980 kubelet[1523]: E1002 20:47:37.410928 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:37.450910 kubelet[1523]: E1002 20:47:37.450871 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:47:37.603327 env[1130]: time="2023-10-02T20:47:37.603252343Z" level=info msg="CreateContainer within sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 20:47:37.620804 env[1130]: time="2023-10-02T20:47:37.620738619Z" level=info msg="CreateContainer within sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda\"" Oct 2 20:47:37.621661 env[1130]: time="2023-10-02T20:47:37.621587504Z" level=info msg="StartContainer for \"e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda\"" Oct 2 20:47:37.649360 systemd[1]: Started cri-containerd-e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda.scope. Oct 2 20:47:37.669712 systemd[1]: cri-containerd-e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda.scope: Deactivated successfully. Oct 2 20:47:37.676194 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda-rootfs.mount: Deactivated successfully. 
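For reference, this is what the failing mount-cgroup init container is trying to do. The command and both environment values are taken verbatim from the init-container spec dumped earlier in this log, annotated here as a plain shell sketch of the sequence that never gets to run because runc fails during container init:

  # mount-cgroup init container, per the spec above (BIN_PATH=/opt/cni/bin, CGROUP_ROOT=/run/cilium/cgroupv2)
  cp /usr/bin/cilium-mount /hostbin/cilium-mount      # copy the helper onto the host-mounted bin path
  nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt \
    "${BIN_PATH}/cilium-mount" "$CGROUP_ROOT"         # enter the host namespaces and mount cgroup2 at CGROUP_ROOT
  rm /hostbin/cilium-mount                            # remove the copied helper again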
Oct 2 20:47:37.685343 env[1130]: time="2023-10-02T20:47:37.685258393Z" level=info msg="shim disconnected" id=e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda Oct 2 20:47:37.685343 env[1130]: time="2023-10-02T20:47:37.685338109Z" level=warning msg="cleaning up after shim disconnected" id=e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda namespace=k8s.io Oct 2 20:47:37.685343 env[1130]: time="2023-10-02T20:47:37.685353851Z" level=info msg="cleaning up dead shim" Oct 2 20:47:37.697109 env[1130]: time="2023-10-02T20:47:37.697037787Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:47:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2047 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:47:37Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:47:37.697451 env[1130]: time="2023-10-02T20:47:37.697383220Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 20:47:37.700850 env[1130]: time="2023-10-02T20:47:37.700787789Z" level=error msg="Failed to pipe stdout of container \"e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda\"" error="reading from a closed fifo" Oct 2 20:47:37.700964 env[1130]: time="2023-10-02T20:47:37.700891604Z" level=error msg="Failed to pipe stderr of container \"e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda\"" error="reading from a closed fifo" Oct 2 20:47:37.703193 env[1130]: time="2023-10-02T20:47:37.703133088Z" level=error msg="StartContainer for \"e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:47:37.703434 kubelet[1523]: E1002 20:47:37.703385 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda" Oct 2 20:47:37.703668 kubelet[1523]: E1002 20:47:37.703531 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:47:37.703668 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:47:37.703668 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 20:47:37.703668 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-2xrqr,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:47:37.704062 kubelet[1523]: E1002 20:47:37.703588 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:47:37.958947 kubelet[1523]: I1002 20:47:37.958807 1523 scope.go:115] "RemoveContainer" containerID="22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6" Oct 2 20:47:37.960167 kubelet[1523]: I1002 20:47:37.960127 1523 scope.go:115] "RemoveContainer" containerID="22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6" Oct 2 20:47:37.961397 env[1130]: time="2023-10-02T20:47:37.961352746Z" level=info msg="RemoveContainer for \"22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6\"" Oct 2 20:47:37.962132 env[1130]: time="2023-10-02T20:47:37.962097952Z" level=info msg="RemoveContainer for \"22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6\"" Oct 2 20:47:37.962463 env[1130]: time="2023-10-02T20:47:37.962412142Z" level=error msg="RemoveContainer for \"22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6\" failed" error="failed to set removing state for container \"22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6\": container is already in removing state" Oct 2 20:47:37.962620 kubelet[1523]: E1002 20:47:37.962597 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6\": container is already in removing state" 
containerID="22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6" Oct 2 20:47:37.962754 kubelet[1523]: E1002 20:47:37.962642 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6": container is already in removing state; Skipping pod "cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)" Oct 2 20:47:37.963128 kubelet[1523]: E1002 20:47:37.963084 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:47:37.965277 env[1130]: time="2023-10-02T20:47:37.965240456Z" level=info msg="RemoveContainer for \"22480d50c2a87a5459ee8d4f7c688293f5e12fb6ef8ea4a6e8da21cb0476f0c6\" returns successfully" Oct 2 20:47:38.411903 kubelet[1523]: E1002 20:47:38.411826 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:39.412091 kubelet[1523]: E1002 20:47:39.412018 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:40.412697 kubelet[1523]: E1002 20:47:40.412631 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:40.791308 kubelet[1523]: W1002 20:47:40.791211 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2b5b6383_5461_4be0_9516_72cbade21985.slice/cri-containerd-e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda.scope WatchSource:0}: task e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda not found: not found Oct 2 20:47:41.413125 kubelet[1523]: E1002 20:47:41.413050 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:42.413541 kubelet[1523]: E1002 20:47:42.413460 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:42.452164 kubelet[1523]: E1002 20:47:42.452094 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:47:43.413681 kubelet[1523]: E1002 20:47:43.413596 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:44.414531 kubelet[1523]: E1002 20:47:44.414462 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:45.414836 kubelet[1523]: E1002 20:47:45.414771 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:46.415490 kubelet[1523]: E1002 20:47:46.415417 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:47.416048 kubelet[1523]: E1002 20:47:47.415987 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" 
Oct 2 20:47:47.453999 kubelet[1523]: E1002 20:47:47.453940 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:47:48.416810 kubelet[1523]: E1002 20:47:48.416744 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:49.417472 kubelet[1523]: E1002 20:47:49.417389 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:50.417764 kubelet[1523]: E1002 20:47:50.417694 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:50.600761 kubelet[1523]: E1002 20:47:50.600688 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:47:51.418452 kubelet[1523]: E1002 20:47:51.418379 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:52.320484 kubelet[1523]: E1002 20:47:52.320412 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:52.419176 kubelet[1523]: E1002 20:47:52.419108 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:52.455520 kubelet[1523]: E1002 20:47:52.455483 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:47:53.419756 kubelet[1523]: E1002 20:47:53.419674 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:54.420817 kubelet[1523]: E1002 20:47:54.420745 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:55.421321 kubelet[1523]: E1002 20:47:55.421242 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:56.422374 kubelet[1523]: E1002 20:47:56.422299 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:57.423021 kubelet[1523]: E1002 20:47:57.422950 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:57.457083 kubelet[1523]: E1002 20:47:57.457033 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:47:58.424002 kubelet[1523]: E1002 20:47:58.423932 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:47:59.425052 kubelet[1523]: E1002 20:47:59.424984 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:00.425299 kubelet[1523]: E1002 20:48:00.425228 1523 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:01.425754 kubelet[1523]: E1002 20:48:01.425654 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:02.426453 kubelet[1523]: E1002 20:48:02.426388 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:02.458543 kubelet[1523]: E1002 20:48:02.458502 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:48:02.601032 kubelet[1523]: E1002 20:48:02.600739 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:48:03.427317 kubelet[1523]: E1002 20:48:03.427247 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:04.428502 kubelet[1523]: E1002 20:48:04.428434 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:05.428699 kubelet[1523]: E1002 20:48:05.428624 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:06.429634 kubelet[1523]: E1002 20:48:06.429560 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:07.430433 kubelet[1523]: E1002 20:48:07.430360 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:07.459349 kubelet[1523]: E1002 20:48:07.459308 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:48:08.431594 kubelet[1523]: E1002 20:48:08.431518 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:09.431797 kubelet[1523]: E1002 20:48:09.431709 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:10.432280 kubelet[1523]: E1002 20:48:10.432213 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:11.433234 kubelet[1523]: E1002 20:48:11.433174 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:12.320080 kubelet[1523]: E1002 20:48:12.320011 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:12.433415 kubelet[1523]: E1002 20:48:12.433349 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:12.460676 kubelet[1523]: E1002 20:48:12.460623 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin 
not initialized" Oct 2 20:48:13.433949 kubelet[1523]: E1002 20:48:13.433882 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:14.434999 kubelet[1523]: E1002 20:48:14.434923 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:15.435693 kubelet[1523]: E1002 20:48:15.435616 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:15.600363 kubelet[1523]: E1002 20:48:15.600306 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:48:16.436658 kubelet[1523]: E1002 20:48:16.436585 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:17.437551 kubelet[1523]: E1002 20:48:17.437477 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:17.462445 kubelet[1523]: E1002 20:48:17.462389 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:48:18.438120 kubelet[1523]: E1002 20:48:18.438064 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:19.438558 kubelet[1523]: E1002 20:48:19.438479 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:20.439277 kubelet[1523]: E1002 20:48:20.439199 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:21.440335 kubelet[1523]: E1002 20:48:21.440265 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:22.440858 kubelet[1523]: E1002 20:48:22.440794 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:22.463393 kubelet[1523]: E1002 20:48:22.463340 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:48:23.441040 kubelet[1523]: E1002 20:48:23.440974 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:24.441747 kubelet[1523]: E1002 20:48:24.441670 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:25.442655 kubelet[1523]: E1002 20:48:25.442594 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:26.443523 kubelet[1523]: E1002 20:48:26.443451 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:27.443923 kubelet[1523]: E1002 20:48:27.443848 1523 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:27.464767 kubelet[1523]: E1002 20:48:27.464701 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:48:28.444512 kubelet[1523]: E1002 20:48:28.444439 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:29.445186 kubelet[1523]: E1002 20:48:29.445114 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:30.445898 kubelet[1523]: E1002 20:48:30.445831 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:30.600826 kubelet[1523]: E1002 20:48:30.600778 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:48:31.446453 kubelet[1523]: E1002 20:48:31.446385 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:32.320276 kubelet[1523]: E1002 20:48:32.320212 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:32.447461 kubelet[1523]: E1002 20:48:32.447403 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:32.465928 kubelet[1523]: E1002 20:48:32.465876 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:48:33.448560 kubelet[1523]: E1002 20:48:33.448487 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:34.448771 kubelet[1523]: E1002 20:48:34.448704 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:35.448975 kubelet[1523]: E1002 20:48:35.448907 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:36.450157 kubelet[1523]: E1002 20:48:36.450074 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:37.450401 kubelet[1523]: E1002 20:48:37.450315 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:37.467323 kubelet[1523]: E1002 20:48:37.467285 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:48:38.451668 kubelet[1523]: E1002 20:48:38.451587 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:39.452125 kubelet[1523]: E1002 20:48:39.452057 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:40.453237 
kubelet[1523]: E1002 20:48:40.453168 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:41.453712 kubelet[1523]: E1002 20:48:41.453631 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:41.600561 kubelet[1523]: E1002 20:48:41.600504 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:48:42.453771 kubelet[1523]: E1002 20:48:42.453719 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:42.468213 kubelet[1523]: E1002 20:48:42.468178 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:48:43.454051 kubelet[1523]: E1002 20:48:43.453983 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:44.454420 kubelet[1523]: E1002 20:48:44.454334 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:45.455171 kubelet[1523]: E1002 20:48:45.455100 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:46.455747 kubelet[1523]: E1002 20:48:46.455664 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:47.456565 kubelet[1523]: E1002 20:48:47.456476 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:47.469454 kubelet[1523]: E1002 20:48:47.469424 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:48:48.457699 kubelet[1523]: E1002 20:48:48.457629 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:49.458567 kubelet[1523]: E1002 20:48:49.458496 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:50.459156 kubelet[1523]: E1002 20:48:50.459084 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:51.459561 kubelet[1523]: E1002 20:48:51.459488 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:52.320039 kubelet[1523]: E1002 20:48:52.319972 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:52.459720 kubelet[1523]: E1002 20:48:52.459652 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:52.470602 kubelet[1523]: E1002 20:48:52.470566 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:48:53.460813 kubelet[1523]: E1002 20:48:53.460743 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:53.600307 kubelet[1523]: E1002 20:48:53.600240 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-dmwds_kube-system(2b5b6383-5461-4be0-9516-72cbade21985)\"" pod="kube-system/cilium-dmwds" podUID=2b5b6383-5461-4be0-9516-72cbade21985 Oct 2 20:48:54.461550 kubelet[1523]: E1002 20:48:54.461462 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:55.461987 kubelet[1523]: E1002 20:48:55.461906 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:56.462944 kubelet[1523]: E1002 20:48:56.462874 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:57.463901 kubelet[1523]: E1002 20:48:57.463827 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:57.471746 kubelet[1523]: E1002 20:48:57.471700 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:48:58.464359 kubelet[1523]: E1002 20:48:58.464286 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:48:59.465239 kubelet[1523]: E1002 20:48:59.465173 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:00.466302 kubelet[1523]: E1002 20:49:00.466216 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:01.467007 kubelet[1523]: E1002 20:49:01.466937 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:02.033920 env[1130]: time="2023-10-02T20:49:02.033855970Z" level=info msg="StopPodSandbox for \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\"" Oct 2 20:49:02.034596 env[1130]: time="2023-10-02T20:49:02.033952045Z" level=info msg="Container to stop \"e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:49:02.036357 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5-shm.mount: Deactivated successfully. Oct 2 20:49:02.048716 systemd[1]: cri-containerd-10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5.scope: Deactivated successfully. 
Oct 2 20:49:02.062045 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 20:49:02.062206 kernel: audit: type=1334 audit(1696279742.047:732): prog-id=77 op=UNLOAD Oct 2 20:49:02.047000 audit: BPF prog-id=77 op=UNLOAD Oct 2 20:49:02.062000 audit: BPF prog-id=80 op=UNLOAD Oct 2 20:49:02.071788 kernel: audit: type=1334 audit(1696279742.062:733): prog-id=80 op=UNLOAD Oct 2 20:49:02.078620 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5-rootfs.mount: Deactivated successfully. Oct 2 20:49:02.089015 env[1130]: time="2023-10-02T20:49:02.088948970Z" level=info msg="shim disconnected" id=10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5 Oct 2 20:49:02.089280 env[1130]: time="2023-10-02T20:49:02.089019144Z" level=warning msg="cleaning up after shim disconnected" id=10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5 namespace=k8s.io Oct 2 20:49:02.089280 env[1130]: time="2023-10-02T20:49:02.089034759Z" level=info msg="cleaning up dead shim" Oct 2 20:49:02.102494 env[1130]: time="2023-10-02T20:49:02.102429682Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:49:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2091 runtime=io.containerd.runc.v2\n" Oct 2 20:49:02.102967 env[1130]: time="2023-10-02T20:49:02.102926178Z" level=info msg="TearDown network for sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" successfully" Oct 2 20:49:02.103122 env[1130]: time="2023-10-02T20:49:02.102963784Z" level=info msg="StopPodSandbox for \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" returns successfully" Oct 2 20:49:02.116924 kubelet[1523]: I1002 20:49:02.116406 1523 scope.go:115] "RemoveContainer" containerID="e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda" Oct 2 20:49:02.117617 env[1130]: time="2023-10-02T20:49:02.117555806Z" level=info msg="RemoveContainer for \"e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda\"" Oct 2 20:49:02.121758 env[1130]: time="2023-10-02T20:49:02.121677760Z" level=info msg="RemoveContainer for \"e1713d845bc93cfc13a28047be6e45c3706a8a259fe1da20b938b3c228374dda\" returns successfully" Oct 2 20:49:02.290115 kubelet[1523]: I1002 20:49:02.289043 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:02.290115 kubelet[1523]: I1002 20:49:02.289048 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cilium-run\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.290115 kubelet[1523]: I1002 20:49:02.289138 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-lib-modules\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.290115 kubelet[1523]: I1002 20:49:02.289168 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-etc-cni-netd\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.290115 kubelet[1523]: I1002 20:49:02.289197 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cni-path\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.290115 kubelet[1523]: I1002 20:49:02.289227 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cilium-cgroup\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.290613 kubelet[1523]: I1002 20:49:02.289261 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b5b6383-5461-4be0-9516-72cbade21985-cilium-config-path\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.290613 kubelet[1523]: I1002 20:49:02.289289 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-hostproc\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.290613 kubelet[1523]: I1002 20:49:02.289333 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b5b6383-5461-4be0-9516-72cbade21985-hubble-tls\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.290613 kubelet[1523]: I1002 20:49:02.289371 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-host-proc-sys-net\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.290613 kubelet[1523]: I1002 20:49:02.289401 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-bpf-maps\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.290613 kubelet[1523]: I1002 20:49:02.289441 1523 reconciler.go:211] "operationExecutor.UnmountVolume 
started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b5b6383-5461-4be0-9516-72cbade21985-clustermesh-secrets\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.291000 kubelet[1523]: I1002 20:49:02.289472 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-xtables-lock\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.291000 kubelet[1523]: I1002 20:49:02.289506 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-host-proc-sys-kernel\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.291000 kubelet[1523]: I1002 20:49:02.289543 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2xrqr\" (UniqueName: \"kubernetes.io/projected/2b5b6383-5461-4be0-9516-72cbade21985-kube-api-access-2xrqr\") pod \"2b5b6383-5461-4be0-9516-72cbade21985\" (UID: \"2b5b6383-5461-4be0-9516-72cbade21985\") " Oct 2 20:49:02.291000 kubelet[1523]: I1002 20:49:02.289584 1523 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cilium-run\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.292131 kubelet[1523]: I1002 20:49:02.291290 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-hostproc" (OuterVolumeSpecName: "hostproc") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:02.292499 kubelet[1523]: I1002 20:49:02.292050 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:02.292659 kubelet[1523]: I1002 20:49:02.292080 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:02.292832 kubelet[1523]: I1002 20:49:02.292100 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cni-path" (OuterVolumeSpecName: "cni-path") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:02.292967 kubelet[1523]: I1002 20:49:02.292341 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:02.293197 kubelet[1523]: I1002 20:49:02.293174 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:02.293455 kubelet[1523]: I1002 20:49:02.293430 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:02.295803 systemd[1]: var-lib-kubelet-pods-2b5b6383\x2d5461\x2d4be0\x2d9516\x2d72cbade21985-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2xrqr.mount: Deactivated successfully. Oct 2 20:49:02.297020 kubelet[1523]: I1002 20:49:02.296983 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:02.297222 kubelet[1523]: I1002 20:49:02.297188 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:02.297608 kubelet[1523]: W1002 20:49:02.297546 1523 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/2b5b6383-5461-4be0-9516-72cbade21985/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:49:02.300156 kubelet[1523]: I1002 20:49:02.300120 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5b6383-5461-4be0-9516-72cbade21985-kube-api-access-2xrqr" (OuterVolumeSpecName: "kube-api-access-2xrqr") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "kube-api-access-2xrqr". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:49:02.302326 systemd[1]: var-lib-kubelet-pods-2b5b6383\x2d5461\x2d4be0\x2d9516\x2d72cbade21985-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Oct 2 20:49:02.303814 kubelet[1523]: I1002 20:49:02.303753 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2b5b6383-5461-4be0-9516-72cbade21985-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:49:02.304831 kubelet[1523]: I1002 20:49:02.304788 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2b5b6383-5461-4be0-9516-72cbade21985-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:49:02.308540 systemd[1]: var-lib-kubelet-pods-2b5b6383\x2d5461\x2d4be0\x2d9516\x2d72cbade21985-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:49:02.309933 kubelet[1523]: I1002 20:49:02.309880 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2b5b6383-5461-4be0-9516-72cbade21985-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "2b5b6383-5461-4be0-9516-72cbade21985" (UID: "2b5b6383-5461-4be0-9516-72cbade21985"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:49:02.390429 kubelet[1523]: I1002 20:49:02.390377 1523 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-etc-cni-netd\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.390429 kubelet[1523]: I1002 20:49:02.390425 1523 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-lib-modules\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.390429 kubelet[1523]: I1002 20:49:02.390446 1523 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/2b5b6383-5461-4be0-9516-72cbade21985-hubble-tls\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.390797 kubelet[1523]: I1002 20:49:02.390464 1523 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-host-proc-sys-net\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.390797 kubelet[1523]: I1002 20:49:02.390480 1523 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cni-path\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.390797 kubelet[1523]: I1002 20:49:02.390494 1523 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-cilium-cgroup\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.390797 kubelet[1523]: I1002 20:49:02.390508 1523 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/2b5b6383-5461-4be0-9516-72cbade21985-cilium-config-path\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.390797 kubelet[1523]: I1002 20:49:02.390523 1523 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: 
\"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-hostproc\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.390797 kubelet[1523]: I1002 20:49:02.390540 1523 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/2b5b6383-5461-4be0-9516-72cbade21985-clustermesh-secrets\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.390797 kubelet[1523]: I1002 20:49:02.390557 1523 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-xtables-lock\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.390797 kubelet[1523]: I1002 20:49:02.390571 1523 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-bpf-maps\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.391114 kubelet[1523]: I1002 20:49:02.390586 1523 reconciler.go:399] "Volume detached for volume \"kube-api-access-2xrqr\" (UniqueName: \"kubernetes.io/projected/2b5b6383-5461-4be0-9516-72cbade21985-kube-api-access-2xrqr\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.391114 kubelet[1523]: I1002 20:49:02.390605 1523 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/2b5b6383-5461-4be0-9516-72cbade21985-host-proc-sys-kernel\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:02.422695 systemd[1]: Removed slice kubepods-burstable-pod2b5b6383_5461_4be0_9516_72cbade21985.slice. Oct 2 20:49:02.467194 kubelet[1523]: E1002 20:49:02.467131 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:02.473134 kubelet[1523]: E1002 20:49:02.473102 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:49:02.474482 kubelet[1523]: I1002 20:49:02.474443 1523 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:49:02.474670 kubelet[1523]: E1002 20:49:02.474523 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="2b5b6383-5461-4be0-9516-72cbade21985" containerName="mount-cgroup" Oct 2 20:49:02.474670 kubelet[1523]: E1002 20:49:02.474538 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="2b5b6383-5461-4be0-9516-72cbade21985" containerName="mount-cgroup" Oct 2 20:49:02.474670 kubelet[1523]: E1002 20:49:02.474548 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="2b5b6383-5461-4be0-9516-72cbade21985" containerName="mount-cgroup" Oct 2 20:49:02.474670 kubelet[1523]: I1002 20:49:02.474575 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="2b5b6383-5461-4be0-9516-72cbade21985" containerName="mount-cgroup" Oct 2 20:49:02.474670 kubelet[1523]: I1002 20:49:02.474585 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="2b5b6383-5461-4be0-9516-72cbade21985" containerName="mount-cgroup" Oct 2 20:49:02.474670 kubelet[1523]: I1002 20:49:02.474594 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="2b5b6383-5461-4be0-9516-72cbade21985" containerName="mount-cgroup" Oct 2 20:49:02.474670 kubelet[1523]: I1002 20:49:02.474604 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="2b5b6383-5461-4be0-9516-72cbade21985" containerName="mount-cgroup" Oct 2 20:49:02.474670 kubelet[1523]: E1002 20:49:02.474628 1523 
cpu_manager.go:394] "RemoveStaleState: removing container" podUID="2b5b6383-5461-4be0-9516-72cbade21985" containerName="mount-cgroup" Oct 2 20:49:02.474670 kubelet[1523]: E1002 20:49:02.474638 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="2b5b6383-5461-4be0-9516-72cbade21985" containerName="mount-cgroup" Oct 2 20:49:02.474670 kubelet[1523]: I1002 20:49:02.474658 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="2b5b6383-5461-4be0-9516-72cbade21985" containerName="mount-cgroup" Oct 2 20:49:02.481517 systemd[1]: Created slice kubepods-burstable-podcecf530e_b658_46d6_add3_02cd346fe2a4.slice. Oct 2 20:49:02.490872 kubelet[1523]: I1002 20:49:02.490841 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-cilium-run\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491063 kubelet[1523]: I1002 20:49:02.490890 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-hostproc\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491063 kubelet[1523]: I1002 20:49:02.490922 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-etc-cni-netd\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491063 kubelet[1523]: I1002 20:49:02.490951 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-xtables-lock\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491063 kubelet[1523]: I1002 20:49:02.490982 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-bpf-maps\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491063 kubelet[1523]: I1002 20:49:02.491012 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-cilium-cgroup\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491063 kubelet[1523]: I1002 20:49:02.491040 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-cni-path\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491402 kubelet[1523]: I1002 20:49:02.491074 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-lib-modules\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491402 kubelet[1523]: 
I1002 20:49:02.491109 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cecf530e-b658-46d6-add3-02cd346fe2a4-clustermesh-secrets\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491402 kubelet[1523]: I1002 20:49:02.491148 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjzqx\" (UniqueName: \"kubernetes.io/projected/cecf530e-b658-46d6-add3-02cd346fe2a4-kube-api-access-mjzqx\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491402 kubelet[1523]: I1002 20:49:02.491184 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-host-proc-sys-kernel\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491402 kubelet[1523]: I1002 20:49:02.491218 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cecf530e-b658-46d6-add3-02cd346fe2a4-hubble-tls\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491402 kubelet[1523]: I1002 20:49:02.491260 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cecf530e-b658-46d6-add3-02cd346fe2a4-cilium-config-path\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.491715 kubelet[1523]: I1002 20:49:02.491298 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-host-proc-sys-net\") pod \"cilium-74bdl\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " pod="kube-system/cilium-74bdl" Oct 2 20:49:02.603703 kubelet[1523]: I1002 20:49:02.603257 1523 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=2b5b6383-5461-4be0-9516-72cbade21985 path="/var/lib/kubelet/pods/2b5b6383-5461-4be0-9516-72cbade21985/volumes" Oct 2 20:49:02.788292 env[1130]: time="2023-10-02T20:49:02.788226760Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-74bdl,Uid:cecf530e-b658-46d6-add3-02cd346fe2a4,Namespace:kube-system,Attempt:0,}" Oct 2 20:49:02.806433 env[1130]: time="2023-10-02T20:49:02.806344095Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:49:02.806433 env[1130]: time="2023-10-02T20:49:02.806397279Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:49:02.806798 env[1130]: time="2023-10-02T20:49:02.806414940Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:49:02.807225 env[1130]: time="2023-10-02T20:49:02.807162830Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68 pid=2120 runtime=io.containerd.runc.v2 Oct 2 20:49:02.824877 systemd[1]: Started cri-containerd-c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68.scope. Oct 2 20:49:02.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.884872 kernel: audit: type=1400 audit(1696279742.842:734): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.885101 kernel: audit: type=1400 audit(1696279742.842:735): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.926953 kernel: audit: type=1400 audit(1696279742.842:736): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.927081 kernel: audit: type=1400 audit(1696279742.842:737): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.971865 kernel: audit: type=1400 audit(1696279742.842:738): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.972056 kernel: audit: type=1400 audit(1696279742.842:739): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.972103 kernel: audit: type=1400 audit(1696279742.842:740): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.974513 env[1130]: time="2023-10-02T20:49:02.974460588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-74bdl,Uid:cecf530e-b658-46d6-add3-02cd346fe2a4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\"" Oct 2 20:49:02.978563 env[1130]: time="2023-10-02T20:49:02.978486025Z" level=info msg="CreateContainer within sandbox \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:49:02.842000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:03.013645 kernel: audit: type=1400 audit(1696279742.842:741): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.842000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit: BPF prog-id=84 op=LOAD Oct 2 20:49:02.883000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit[2130]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2120 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:02.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330353763323230633937376239343434353735336635633635653431 Oct 2 20:49:02.883000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit[2130]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2120 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:02.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330353763323230633937376239343434353735336635633635653431 Oct 2 20:49:02.883000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit[2130]: AVC avc: 
denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.883000 audit: BPF prog-id=85 op=LOAD Oct 2 20:49:02.883000 audit[2130]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000335b10 items=0 ppid=2120 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:02.883000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330353763323230633937376239343434353735336635633635653431 Oct 2 20:49:02.904000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.904000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.904000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.904000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.904000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.904000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.904000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.904000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.904000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.904000 audit: BPF prog-id=86 op=LOAD Oct 2 20:49:02.904000 audit[2130]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000335b58 items=0 ppid=2120 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:02.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330353763323230633937376239343434353735336635633635653431 Oct 2 20:49:02.925000 audit: BPF prog-id=86 op=UNLOAD Oct 2 20:49:02.925000 audit: BPF prog-id=85 op=UNLOAD Oct 2 20:49:02.925000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.925000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.925000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.925000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.925000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.925000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.925000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.925000 audit[2130]: AVC avc: denied { perfmon } for pid=2130 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.925000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.925000 audit[2130]: AVC avc: denied { bpf } for pid=2130 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:02.925000 audit: BPF prog-id=87 op=LOAD Oct 2 20:49:02.925000 audit[2130]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000335f68 items=0 ppid=2120 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:02.925000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330353763323230633937376239343434353735336635633635653431 Oct 2 20:49:03.027694 env[1130]: time="2023-10-02T20:49:03.027625871Z" level=info msg="CreateContainer within sandbox \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa\"" Oct 2 20:49:03.028668 env[1130]: time="2023-10-02T20:49:03.028608090Z" level=info msg="StartContainer for \"e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa\"" Oct 2 20:49:03.072575 systemd[1]: Started cri-containerd-e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa.scope. Oct 2 20:49:03.087997 systemd[1]: cri-containerd-e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa.scope: Deactivated successfully. Oct 2 20:49:03.096468 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa-rootfs.mount: Deactivated successfully. 
Oct 2 20:49:03.104254 env[1130]: time="2023-10-02T20:49:03.104173791Z" level=info msg="shim disconnected" id=e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa Oct 2 20:49:03.104254 env[1130]: time="2023-10-02T20:49:03.104243120Z" level=warning msg="cleaning up after shim disconnected" id=e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa namespace=k8s.io Oct 2 20:49:03.104254 env[1130]: time="2023-10-02T20:49:03.104258656Z" level=info msg="cleaning up dead shim" Oct 2 20:49:03.119386 env[1130]: time="2023-10-02T20:49:03.116215622Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:49:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2179 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:49:03Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:49:03.119659 env[1130]: time="2023-10-02T20:49:03.116574128Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Oct 2 20:49:03.119781 env[1130]: time="2023-10-02T20:49:03.119716024Z" level=error msg="Failed to pipe stdout of container \"e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa\"" error="reading from a closed fifo" Oct 2 20:49:03.119922 env[1130]: time="2023-10-02T20:49:03.119878612Z" level=error msg="Failed to pipe stderr of container \"e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa\"" error="reading from a closed fifo" Oct 2 20:49:03.122390 env[1130]: time="2023-10-02T20:49:03.122318834Z" level=error msg="StartContainer for \"e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:49:03.122936 kubelet[1523]: E1002 20:49:03.122863 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa" Oct 2 20:49:03.123124 kubelet[1523]: E1002 20:49:03.122981 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:49:03.123124 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:49:03.123124 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 20:49:03.123124 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mjzqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-74bdl_kube-system(cecf530e-b658-46d6-add3-02cd346fe2a4): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:49:03.123555 kubelet[1523]: E1002 20:49:03.123034 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-74bdl" podUID=cecf530e-b658-46d6-add3-02cd346fe2a4 Oct 2 20:49:03.467478 kubelet[1523]: E1002 20:49:03.467387 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:04.130970 env[1130]: time="2023-10-02T20:49:04.130886769Z" level=info msg="CreateContainer within sandbox \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:49:04.152552 env[1130]: time="2023-10-02T20:49:04.152484816Z" level=info msg="CreateContainer within sandbox \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3\"" Oct 2 20:49:04.153390 env[1130]: time="2023-10-02T20:49:04.153347048Z" level=info msg="StartContainer for \"981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3\"" Oct 2 20:49:04.196064 systemd[1]: Started cri-containerd-981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3.scope. Oct 2 20:49:04.207644 systemd[1]: cri-containerd-981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3.scope: Deactivated successfully. 
Oct 2 20:49:04.216860 env[1130]: time="2023-10-02T20:49:04.216788715Z" level=info msg="shim disconnected" id=981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3 Oct 2 20:49:04.216860 env[1130]: time="2023-10-02T20:49:04.216857932Z" level=warning msg="cleaning up after shim disconnected" id=981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3 namespace=k8s.io Oct 2 20:49:04.217214 env[1130]: time="2023-10-02T20:49:04.216872165Z" level=info msg="cleaning up dead shim" Oct 2 20:49:04.228784 env[1130]: time="2023-10-02T20:49:04.228698712Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:49:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2215 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:49:04Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:49:04.229159 env[1130]: time="2023-10-02T20:49:04.229068659Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Oct 2 20:49:04.232849 env[1130]: time="2023-10-02T20:49:04.232788551Z" level=error msg="Failed to pipe stderr of container \"981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3\"" error="reading from a closed fifo" Oct 2 20:49:04.232849 env[1130]: time="2023-10-02T20:49:04.232796250Z" level=error msg="Failed to pipe stdout of container \"981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3\"" error="reading from a closed fifo" Oct 2 20:49:04.235047 env[1130]: time="2023-10-02T20:49:04.234984069Z" level=error msg="StartContainer for \"981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:49:04.235343 kubelet[1523]: E1002 20:49:04.235292 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3" Oct 2 20:49:04.235878 kubelet[1523]: E1002 20:49:04.235851 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:49:04.235878 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:49:04.235878 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 20:49:04.235878 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-mjzqx,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-74bdl_kube-system(cecf530e-b658-46d6-add3-02cd346fe2a4): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:49:04.236190 kubelet[1523]: E1002 20:49:04.235916 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-74bdl" podUID=cecf530e-b658-46d6-add3-02cd346fe2a4 Oct 2 20:49:04.468119 kubelet[1523]: E1002 20:49:04.468078 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:05.132677 kubelet[1523]: I1002 20:49:05.132620 1523 scope.go:115] "RemoveContainer" containerID="e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa" Oct 2 20:49:05.133187 env[1130]: time="2023-10-02T20:49:05.133135143Z" level=info msg="StopPodSandbox for \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\"" Oct 2 20:49:05.133813 env[1130]: time="2023-10-02T20:49:05.133215497Z" level=info msg="Container to stop \"981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:49:05.133813 env[1130]: time="2023-10-02T20:49:05.133240251Z" level=info msg="Container to stop \"e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:49:05.135070 env[1130]: time="2023-10-02T20:49:05.135027386Z" level=info msg="RemoveContainer for \"e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa\"" Oct 2 20:49:05.143568 systemd[1]: run-containerd-runc-k8s.io-981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3-runc.krSlaB.mount: Deactivated successfully. 
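After the second attempt fails with the same keycreate error, the kubelet records the RunContainerError against the pod and starts tearing the sandbox down (the StopPodSandbox entries above and the volume unmount entries that follow). The same failure should be visible from the API side; a sketch assuming kubectl access with permission to read kube-system, using the pod name from the log:

    kubectl -n kube-system describe pod cilium-74bdl                         # shows the RunContainerError on the mount-cgroup init container
    kubectl -n kube-system get events --sort-by=.lastTimestamp | grep cilium-74bdl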
Oct 2 20:49:05.143707 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3-rootfs.mount: Deactivated successfully. Oct 2 20:49:05.143822 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68-shm.mount: Deactivated successfully. Oct 2 20:49:05.146567 env[1130]: time="2023-10-02T20:49:05.146513702Z" level=info msg="RemoveContainer for \"e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa\" returns successfully" Oct 2 20:49:05.149292 systemd[1]: cri-containerd-c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68.scope: Deactivated successfully. Oct 2 20:49:05.148000 audit: BPF prog-id=84 op=UNLOAD Oct 2 20:49:05.152000 audit: BPF prog-id=87 op=UNLOAD Oct 2 20:49:05.179636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68-rootfs.mount: Deactivated successfully. Oct 2 20:49:05.184612 env[1130]: time="2023-10-02T20:49:05.184543238Z" level=info msg="shim disconnected" id=c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68 Oct 2 20:49:05.184885 env[1130]: time="2023-10-02T20:49:05.184619633Z" level=warning msg="cleaning up after shim disconnected" id=c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68 namespace=k8s.io Oct 2 20:49:05.184885 env[1130]: time="2023-10-02T20:49:05.184634362Z" level=info msg="cleaning up dead shim" Oct 2 20:49:05.196223 env[1130]: time="2023-10-02T20:49:05.196182921Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:49:05Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2246 runtime=io.containerd.runc.v2\n" Oct 2 20:49:05.196769 env[1130]: time="2023-10-02T20:49:05.196711244Z" level=info msg="TearDown network for sandbox \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\" successfully" Oct 2 20:49:05.196906 env[1130]: time="2023-10-02T20:49:05.196885780Z" level=info msg="StopPodSandbox for \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\" returns successfully" Oct 2 20:49:05.319271 kubelet[1523]: I1002 20:49:05.319213 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cecf530e-b658-46d6-add3-02cd346fe2a4-clustermesh-secrets\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.319271 kubelet[1523]: I1002 20:49:05.319279 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cecf530e-b658-46d6-add3-02cd346fe2a4-cilium-config-path\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.319599 kubelet[1523]: I1002 20:49:05.319308 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-host-proc-sys-net\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.319599 kubelet[1523]: I1002 20:49:05.319333 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-cilium-run\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 
20:49:05.319599 kubelet[1523]: I1002 20:49:05.319362 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-xtables-lock\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.319599 kubelet[1523]: I1002 20:49:05.319387 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-bpf-maps\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.319599 kubelet[1523]: I1002 20:49:05.319411 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-cni-path\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.319599 kubelet[1523]: I1002 20:49:05.319437 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-lib-modules\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.319965 kubelet[1523]: I1002 20:49:05.319472 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjzqx\" (UniqueName: \"kubernetes.io/projected/cecf530e-b658-46d6-add3-02cd346fe2a4-kube-api-access-mjzqx\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.319965 kubelet[1523]: I1002 20:49:05.319500 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-hostproc\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.319965 kubelet[1523]: I1002 20:49:05.319528 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-etc-cni-netd\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.319965 kubelet[1523]: I1002 20:49:05.319561 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-host-proc-sys-kernel\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.319965 kubelet[1523]: I1002 20:49:05.319597 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cecf530e-b658-46d6-add3-02cd346fe2a4-hubble-tls\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.319965 kubelet[1523]: I1002 20:49:05.319629 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-cilium-cgroup\") pod \"cecf530e-b658-46d6-add3-02cd346fe2a4\" (UID: \"cecf530e-b658-46d6-add3-02cd346fe2a4\") " Oct 2 20:49:05.320298 kubelet[1523]: I1002 20:49:05.319689 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:05.322762 kubelet[1523]: I1002 20:49:05.320422 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-cni-path" (OuterVolumeSpecName: "cni-path") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:05.322762 kubelet[1523]: W1002 20:49:05.320663 1523 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/cecf530e-b658-46d6-add3-02cd346fe2a4/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:49:05.323592 kubelet[1523]: I1002 20:49:05.323540 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cecf530e-b658-46d6-add3-02cd346fe2a4-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:49:05.325779 systemd[1]: var-lib-kubelet-pods-cecf530e\x2db658\x2d46d6\x2dadd3\x2d02cd346fe2a4-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:49:05.327265 kubelet[1523]: I1002 20:49:05.326845 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:05.327265 kubelet[1523]: I1002 20:49:05.326882 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:05.327265 kubelet[1523]: I1002 20:49:05.326907 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:05.327265 kubelet[1523]: I1002 20:49:05.326931 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "bpf-maps". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:05.327265 kubelet[1523]: I1002 20:49:05.326983 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-hostproc" (OuterVolumeSpecName: "hostproc") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:05.327586 kubelet[1523]: I1002 20:49:05.327009 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:05.327586 kubelet[1523]: I1002 20:49:05.327391 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cecf530e-b658-46d6-add3-02cd346fe2a4-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:49:05.327586 kubelet[1523]: I1002 20:49:05.327430 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:05.327586 kubelet[1523]: I1002 20:49:05.327459 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:49:05.333160 systemd[1]: var-lib-kubelet-pods-cecf530e\x2db658\x2d46d6\x2dadd3\x2d02cd346fe2a4-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dmjzqx.mount: Deactivated successfully. Oct 2 20:49:05.334256 kubelet[1523]: I1002 20:49:05.334221 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cecf530e-b658-46d6-add3-02cd346fe2a4-kube-api-access-mjzqx" (OuterVolumeSpecName: "kube-api-access-mjzqx") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "kube-api-access-mjzqx". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:49:05.336942 systemd[1]: var-lib-kubelet-pods-cecf530e\x2db658\x2d46d6\x2dadd3\x2d02cd346fe2a4-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:49:05.337133 kubelet[1523]: I1002 20:49:05.336943 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cecf530e-b658-46d6-add3-02cd346fe2a4-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "cecf530e-b658-46d6-add3-02cd346fe2a4" (UID: "cecf530e-b658-46d6-add3-02cd346fe2a4"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:49:05.420606 kubelet[1523]: I1002 20:49:05.420439 1523 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-cilium-cgroup\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.420606 kubelet[1523]: I1002 20:49:05.420491 1523 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-host-proc-sys-net\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.420606 kubelet[1523]: I1002 20:49:05.420507 1523 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-cilium-run\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.420606 kubelet[1523]: I1002 20:49:05.420522 1523 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/cecf530e-b658-46d6-add3-02cd346fe2a4-clustermesh-secrets\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.420606 kubelet[1523]: I1002 20:49:05.420540 1523 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/cecf530e-b658-46d6-add3-02cd346fe2a4-cilium-config-path\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.423280 kubelet[1523]: I1002 20:49:05.420580 1523 reconciler.go:399] "Volume detached for volume \"kube-api-access-mjzqx\" (UniqueName: \"kubernetes.io/projected/cecf530e-b658-46d6-add3-02cd346fe2a4-kube-api-access-mjzqx\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.423425 kubelet[1523]: I1002 20:49:05.423312 1523 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-hostproc\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.423425 kubelet[1523]: I1002 20:49:05.423334 1523 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-etc-cni-netd\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.423425 kubelet[1523]: I1002 20:49:05.423350 1523 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-xtables-lock\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.423425 kubelet[1523]: I1002 20:49:05.423365 1523 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-bpf-maps\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.423425 kubelet[1523]: I1002 20:49:05.423384 1523 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-cni-path\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.423425 kubelet[1523]: I1002 20:49:05.423399 1523 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-lib-modules\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.423425 kubelet[1523]: I1002 20:49:05.423417 1523 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/cecf530e-b658-46d6-add3-02cd346fe2a4-host-proc-sys-kernel\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.423700 kubelet[1523]: I1002 20:49:05.423434 1523 
reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/cecf530e-b658-46d6-add3-02cd346fe2a4-hubble-tls\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:49:05.468667 kubelet[1523]: E1002 20:49:05.468630 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:06.136282 kubelet[1523]: I1002 20:49:06.136250 1523 scope.go:115] "RemoveContainer" containerID="981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3" Oct 2 20:49:06.137812 env[1130]: time="2023-10-02T20:49:06.137765482Z" level=info msg="RemoveContainer for \"981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3\"" Oct 2 20:49:06.142552 env[1130]: time="2023-10-02T20:49:06.142510551Z" level=info msg="RemoveContainer for \"981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3\" returns successfully" Oct 2 20:49:06.142927 systemd[1]: Removed slice kubepods-burstable-podcecf530e_b658_46d6_add3_02cd346fe2a4.slice. Oct 2 20:49:06.209875 kubelet[1523]: W1002 20:49:06.209812 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcecf530e_b658_46d6_add3_02cd346fe2a4.slice/cri-containerd-e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa.scope WatchSource:0}: container "e5f3b0cabde4ee6c15061ab9ce229d1b902b78d7cb014cf3c99474b4b1a9c3aa" in namespace "k8s.io": not found Oct 2 20:49:06.210766 kubelet[1523]: W1002 20:49:06.210675 1523 container.go:488] Failed to get RecentStats("/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcecf530e_b658_46d6_add3_02cd346fe2a4.slice/cri-containerd-981939af3bcde1cdfcfba97d4867b46bddb450f643eb5674317d248311c2b8f3.scope") while determining the next housekeeping: unable to find data in memory cache Oct 2 20:49:06.470082 kubelet[1523]: E1002 20:49:06.470033 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:06.600859 env[1130]: time="2023-10-02T20:49:06.600558242Z" level=info msg="StopPodSandbox for \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\"" Oct 2 20:49:06.600859 env[1130]: time="2023-10-02T20:49:06.600688007Z" level=info msg="TearDown network for sandbox \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\" successfully" Oct 2 20:49:06.600859 env[1130]: time="2023-10-02T20:49:06.600764741Z" level=info msg="StopPodSandbox for \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\" returns successfully" Oct 2 20:49:06.601856 kubelet[1523]: I1002 20:49:06.601829 1523 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=cecf530e-b658-46d6-add3-02cd346fe2a4 path="/var/lib/kubelet/pods/cecf530e-b658-46d6-add3-02cd346fe2a4/volumes" Oct 2 20:49:06.725055 kubelet[1523]: I1002 20:49:06.724566 1523 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:49:06.725055 kubelet[1523]: E1002 20:49:06.724631 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="cecf530e-b658-46d6-add3-02cd346fe2a4" containerName="mount-cgroup" Oct 2 20:49:06.725055 kubelet[1523]: I1002 20:49:06.724670 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="cecf530e-b658-46d6-add3-02cd346fe2a4" containerName="mount-cgroup" Oct 2 20:49:06.725055 kubelet[1523]: I1002 20:49:06.724681 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="cecf530e-b658-46d6-add3-02cd346fe2a4" 
containerName="mount-cgroup" Oct 2 20:49:06.728927 kubelet[1523]: I1002 20:49:06.728901 1523 topology_manager.go:205] "Topology Admit Handler" Oct 2 20:49:06.730435 kubelet[1523]: E1002 20:49:06.729195 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="cecf530e-b658-46d6-add3-02cd346fe2a4" containerName="mount-cgroup" Oct 2 20:49:06.731854 systemd[1]: Created slice kubepods-besteffort-pod348e0949_636b_42d3_8fc0_4cd87cc33691.slice. Oct 2 20:49:06.733294 kubelet[1523]: I1002 20:49:06.732554 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk46t\" (UniqueName: \"kubernetes.io/projected/348e0949-636b-42d3-8fc0-4cd87cc33691-kube-api-access-fk46t\") pod \"cilium-operator-69b677f97c-j4h8m\" (UID: \"348e0949-636b-42d3-8fc0-4cd87cc33691\") " pod="kube-system/cilium-operator-69b677f97c-j4h8m" Oct 2 20:49:06.733639 kubelet[1523]: I1002 20:49:06.733564 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/348e0949-636b-42d3-8fc0-4cd87cc33691-cilium-config-path\") pod \"cilium-operator-69b677f97c-j4h8m\" (UID: \"348e0949-636b-42d3-8fc0-4cd87cc33691\") " pod="kube-system/cilium-operator-69b677f97c-j4h8m" Oct 2 20:49:06.739759 kubelet[1523]: W1002 20:49:06.739715 1523 reflector.go:424] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.128.0.25" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.25' and this object Oct 2 20:49:06.739936 kubelet[1523]: E1002 20:49:06.739920 1523 reflector.go:140] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.128.0.25" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.25' and this object Oct 2 20:49:06.740567 systemd[1]: Created slice kubepods-burstable-podf6c92125_d382_4849_aa97_42e67f5f17b0.slice. 
Oct 2 20:49:06.741507 kubelet[1523]: W1002 20:49:06.741479 1523 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.128.0.25" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.25' and this object Oct 2 20:49:06.741628 kubelet[1523]: E1002 20:49:06.741512 1523 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.128.0.25" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.25' and this object Oct 2 20:49:06.741628 kubelet[1523]: W1002 20:49:06.741575 1523 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.128.0.25" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.25' and this object Oct 2 20:49:06.741628 kubelet[1523]: E1002 20:49:06.741593 1523 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.128.0.25" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.25' and this object Oct 2 20:49:06.835132 kubelet[1523]: I1002 20:49:06.835087 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-xtables-lock\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.835375 kubelet[1523]: I1002 20:49:06.835174 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-etc-cni-netd\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.835375 kubelet[1523]: I1002 20:49:06.835206 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-host-proc-sys-net\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.835375 kubelet[1523]: I1002 20:49:06.835238 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-host-proc-sys-kernel\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.835375 kubelet[1523]: I1002 20:49:06.835284 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-bpf-maps\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.835375 kubelet[1523]: I1002 20:49:06.835317 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-cni-path\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.835375 kubelet[1523]: I1002 20:49:06.835360 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-config-path\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.835760 kubelet[1523]: I1002 20:49:06.835395 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-ipsec-secrets\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.835760 kubelet[1523]: I1002 20:49:06.835429 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-run\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.835760 kubelet[1523]: I1002 20:49:06.835464 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-hostproc\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.835760 kubelet[1523]: I1002 20:49:06.835500 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5kb7q\" (UniqueName: \"kubernetes.io/projected/f6c92125-d382-4849-aa97-42e67f5f17b0-kube-api-access-5kb7q\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.835760 kubelet[1523]: I1002 20:49:06.835538 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-cgroup\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.835760 kubelet[1523]: I1002 20:49:06.835573 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-lib-modules\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.836108 kubelet[1523]: I1002 20:49:06.835611 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6c92125-d382-4849-aa97-42e67f5f17b0-clustermesh-secrets\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:06.836108 kubelet[1523]: I1002 20:49:06.835673 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6c92125-d382-4849-aa97-42e67f5f17b0-hubble-tls\") pod \"cilium-lvm8b\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " pod="kube-system/cilium-lvm8b" Oct 2 20:49:07.038043 env[1130]: time="2023-10-02T20:49:07.037879375Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-j4h8m,Uid:348e0949-636b-42d3-8fc0-4cd87cc33691,Namespace:kube-system,Attempt:0,}" Oct 2 20:49:07.062524 env[1130]: time="2023-10-02T20:49:07.062284512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:49:07.062524 env[1130]: time="2023-10-02T20:49:07.062338039Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:49:07.062524 env[1130]: time="2023-10-02T20:49:07.062356765Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:49:07.063028 env[1130]: time="2023-10-02T20:49:07.062963221Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c pid=2274 runtime=io.containerd.runc.v2 Oct 2 20:49:07.092409 systemd[1]: Started cri-containerd-6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c.scope. Oct 2 20:49:07.136980 kernel: kauditd_printk_skb: 51 callbacks suppressed Oct 2 20:49:07.137181 kernel: audit: type=1400 audit(1696279747.110:754): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.110000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158100 kernel: audit: type=1400 audit(1696279747.110:755): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.110000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.110000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.110000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.207175 kernel: audit: type=1400 audit(1696279747.110:756): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.207384 kernel: audit: type=1400 audit(1696279747.110:757): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.228423 kernel: audit: type=1400 audit(1696279747.110:758): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.110000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.249972 kernel: audit: 
type=1400 audit(1696279747.110:759): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.110000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.110000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.255278 env[1130]: time="2023-10-02T20:49:07.252441806Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-j4h8m,Uid:348e0949-636b-42d3-8fc0-4cd87cc33691,Namespace:kube-system,Attempt:0,} returns sandbox id \"6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c\"" Oct 2 20:49:07.110000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.272758 kernel: audit: type=1400 audit(1696279747.110:760): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.272865 kernel: audit: type=1400 audit(1696279747.110:761): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.273559 kubelet[1523]: E1002 20:49:07.273376 1523 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Oct 2 20:49:07.274054 env[1130]: time="2023-10-02T20:49:07.274006431Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 20:49:07.110000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.313101 kernel: audit: type=1400 audit(1696279747.110:762): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.313297 kernel: audit: type=1400 audit(1696279747.157:763): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.157000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.157000 audit: BPF prog-id=88 op=LOAD Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=2274 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:07.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666396663386434653431383266393533633635653837313566663636 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001bd6b0 a2=3c a3=c items=0 ppid=2274 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:07.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666396663386434653431383266393533633635653837313566663636 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit: BPF prog-id=89 op=LOAD Oct 2 20:49:07.158000 audit[2285]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001bd9d8 a2=78 a3=c000248140 items=0 ppid=2274 
pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:07.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666396663386434653431383266393533633635653837313566663636 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit: BPF prog-id=90 op=LOAD Oct 2 20:49:07.158000 audit[2285]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c0001bd770 a2=78 a3=c000248188 items=0 ppid=2274 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:07.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666396663386434653431383266393533633635653837313566663636 Oct 2 20:49:07.158000 audit: BPF prog-id=90 op=UNLOAD Oct 2 20:49:07.158000 audit: BPF prog-id=89 op=UNLOAD Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { perfmon } for pid=2285 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit[2285]: AVC avc: denied { bpf } for pid=2285 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:07.158000 audit: BPF prog-id=91 op=LOAD Oct 2 20:49:07.158000 audit[2285]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001bdc30 a2=78 a3=c000248598 items=0 ppid=2274 pid=2285 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:07.158000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3666396663386434653431383266393533633635653837313566663636 Oct 2 20:49:07.470378 kubelet[1523]: E1002 20:49:07.470311 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:07.474128 kubelet[1523]: E1002 20:49:07.474089 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:49:07.868666 systemd[1]: run-containerd-runc-k8s.io-6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c-runc.fkoSFm.mount: Deactivated successfully. 
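The dense audit run above accompanies the start of the cilium-operator sandbox (runc pid 2285 under shim pid 2274): most of these records come from systemd and runc loading eBPF programs while the new scope is set up (capability checks for bpf and perfmon followed by `BPF prog-id=NN op=LOAD/UNLOAD`), and they are routine on this host, distinct from the keycreate failure above. The PROCTITLE fields are hex-encoded argv strings with NUL separators; they can be decoded locally, for example:

    # xxd -r -p turns the hex back into bytes; tr makes the NUL argv separators visible
    echo 72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F | xxd -r -p | tr '\0' ' '; echo
    # -> runc --root /run/containerd/runc/k8s.io

Where auditd's userspace tools are installed, `ausearch -i` renders the same records in interpreted form.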
Oct 2 20:49:07.937446 kubelet[1523]: E1002 20:49:07.937395 1523 secret.go:192] Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition Oct 2 20:49:07.937640 kubelet[1523]: E1002 20:49:07.937543 1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f6c92125-d382-4849-aa97-42e67f5f17b0-clustermesh-secrets podName:f6c92125-d382-4849-aa97-42e67f5f17b0 nodeName:}" failed. No retries permitted until 2023-10-02 20:49:08.437508414 +0000 UTC m=+217.197350172 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "clustermesh-secrets" (UniqueName: "kubernetes.io/secret/f6c92125-d382-4849-aa97-42e67f5f17b0-clustermesh-secrets") pod "cilium-lvm8b" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0") : failed to sync secret cache: timed out waiting for the condition Oct 2 20:49:08.189848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1277310590.mount: Deactivated successfully. Oct 2 20:49:08.471155 kubelet[1523]: E1002 20:49:08.471121 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:08.547855 env[1130]: time="2023-10-02T20:49:08.547801359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lvm8b,Uid:f6c92125-d382-4849-aa97-42e67f5f17b0,Namespace:kube-system,Attempt:0,}" Oct 2 20:49:08.591798 env[1130]: time="2023-10-02T20:49:08.591538791Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 20:49:08.591798 env[1130]: time="2023-10-02T20:49:08.591592415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 20:49:08.591798 env[1130]: time="2023-10-02T20:49:08.591611408Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 20:49:08.592286 env[1130]: time="2023-10-02T20:49:08.592179639Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0 pid=2319 runtime=io.containerd.runc.v2 Oct 2 20:49:08.623575 systemd[1]: Started cri-containerd-86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0.scope. 
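The "Couldn't get secret kube-system/cilium-clustermesh: failed to sync secret cache: timed out waiting for the condition" entry above ties back to the earlier "no relationship found between node '10.128.0.25' and this object" warnings: the node authorizer typically grants a kubelet read access to a secret only once a pod on that node referencing it is registered, so the first MountVolume.SetUp raced that grant and was requeued (durationBeforeRetry 500ms). A quick cross-check from the API side, assuming cluster-admin kubectl access rather than the node's own credentials (secret and pod names taken from the log):

    kubectl -n kube-system get secret cilium-clustermesh cilium-ipsec-keys hubble-server-certs
    kubectl -n kube-system get pod cilium-lvm8b -o wide   # should progress past Init once the retried mount succeeds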
Oct 2 20:49:08.649000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.649000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.649000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.649000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.649000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.649000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.649000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.649000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.649000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.649000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.649000 audit: BPF prog-id=92 op=LOAD Oct 2 20:49:08.651000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.651000 audit[2327]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=2319 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:08.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836353335646537393431323235393037653439316239353066666363 Oct 2 20:49:08.651000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.651000 audit[2327]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=c items=0 ppid=2319 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:08.651000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836353335646537393431323235393037653439316239353066666363 Oct 2 20:49:08.652000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.652000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.652000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.652000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.652000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.652000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.652000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.652000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.652000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.652000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.652000 audit: BPF prog-id=93 op=LOAD Oct 2 20:49:08.652000 audit[2327]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c00024a560 items=0 ppid=2319 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:08.652000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836353335646537393431323235393037653439316239353066666363 Oct 2 20:49:08.653000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.653000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.653000 audit[2327]: AVC avc: 
denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.653000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.653000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.653000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.653000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.653000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.653000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.653000 audit: BPF prog-id=94 op=LOAD Oct 2 20:49:08.653000 audit[2327]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c00024a5a8 items=0 ppid=2319 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:08.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836353335646537393431323235393037653439316239353066666363 Oct 2 20:49:08.654000 audit: BPF prog-id=94 op=UNLOAD Oct 2 20:49:08.654000 audit: BPF prog-id=93 op=UNLOAD Oct 2 20:49:08.654000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.654000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.654000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.654000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.654000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.654000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.654000 audit[2327]: AVC avc: 
denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.654000 audit[2327]: AVC avc: denied { perfmon } for pid=2327 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.654000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.654000 audit[2327]: AVC avc: denied { bpf } for pid=2327 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:08.654000 audit: BPF prog-id=95 op=LOAD Oct 2 20:49:08.654000 audit[2327]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c00024a9b8 items=0 ppid=2319 pid=2327 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:08.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3836353335646537393431323235393037653439316239353066666363 Oct 2 20:49:08.688590 env[1130]: time="2023-10-02T20:49:08.688536804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lvm8b,Uid:f6c92125-d382-4849-aa97-42e67f5f17b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0\"" Oct 2 20:49:08.692823 env[1130]: time="2023-10-02T20:49:08.692778396Z" level=info msg="CreateContainer within sandbox \"86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 20:49:08.716966 env[1130]: time="2023-10-02T20:49:08.716898012Z" level=info msg="CreateContainer within sandbox \"86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0\"" Oct 2 20:49:08.717750 env[1130]: time="2023-10-02T20:49:08.717687278Z" level=info msg="StartContainer for \"a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0\"" Oct 2 20:49:08.750532 systemd[1]: Started cri-containerd-a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0.scope. Oct 2 20:49:08.769426 systemd[1]: cri-containerd-a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0.scope: Deactivated successfully. 
Oct 2 20:49:08.877024 env[1130]: time="2023-10-02T20:49:08.876955462Z" level=info msg="shim disconnected" id=a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0 Oct 2 20:49:08.877387 env[1130]: time="2023-10-02T20:49:08.877027081Z" level=warning msg="cleaning up after shim disconnected" id=a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0 namespace=k8s.io Oct 2 20:49:08.877387 env[1130]: time="2023-10-02T20:49:08.877042632Z" level=info msg="cleaning up dead shim" Oct 2 20:49:08.890328 env[1130]: time="2023-10-02T20:49:08.890260301Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:49:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2380 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:49:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:49:08.890692 env[1130]: time="2023-10-02T20:49:08.890613591Z" level=error msg="copy shim log" error="read /proc/self/fd/46: file already closed" Oct 2 20:49:08.891915 env[1130]: time="2023-10-02T20:49:08.891853490Z" level=error msg="Failed to pipe stdout of container \"a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0\"" error="reading from a closed fifo" Oct 2 20:49:08.892039 env[1130]: time="2023-10-02T20:49:08.891950539Z" level=error msg="Failed to pipe stderr of container \"a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0\"" error="reading from a closed fifo" Oct 2 20:49:08.894596 env[1130]: time="2023-10-02T20:49:08.894532338Z" level=error msg="StartContainer for \"a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:49:08.895562 kubelet[1523]: E1002 20:49:08.894965 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0" Oct 2 20:49:08.895562 kubelet[1523]: E1002 20:49:08.895156 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:49:08.895562 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:49:08.895562 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 20:49:08.895993 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5kb7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:49:08.896150 kubelet[1523]: E1002 20:49:08.895534 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lvm8b" podUID=f6c92125-d382-4849-aa97-42e67f5f17b0 Oct 2 20:49:09.149916 env[1130]: time="2023-10-02T20:49:09.149784888Z" level=info msg="CreateContainer within sandbox \"86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 20:49:09.178187 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1098061343.mount: Deactivated successfully. Oct 2 20:49:09.187669 env[1130]: time="2023-10-02T20:49:09.187615332Z" level=info msg="CreateContainer within sandbox \"86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0\"" Oct 2 20:49:09.189002 env[1130]: time="2023-10-02T20:49:09.188962350Z" level=info msg="StartContainer for \"ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0\"" Oct 2 20:49:09.225981 systemd[1]: Started cri-containerd-ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0.scope. 
Oct 2 20:49:09.242404 env[1130]: time="2023-10-02T20:49:09.242342402Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:49:09.246650 systemd[1]: cri-containerd-ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0.scope: Deactivated successfully. Oct 2 20:49:09.247568 env[1130]: time="2023-10-02T20:49:09.246828644Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:49:09.251423 env[1130]: time="2023-10-02T20:49:09.251365060Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 20:49:09.253042 env[1130]: time="2023-10-02T20:49:09.252979878Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e\"" Oct 2 20:49:09.260029 env[1130]: time="2023-10-02T20:49:09.259980746Z" level=info msg="CreateContainer within sandbox \"6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 20:49:09.316207 env[1130]: time="2023-10-02T20:49:09.316092468Z" level=info msg="shim disconnected" id=ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0 Oct 2 20:49:09.316207 env[1130]: time="2023-10-02T20:49:09.316165770Z" level=warning msg="cleaning up after shim disconnected" id=ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0 namespace=k8s.io Oct 2 20:49:09.316207 env[1130]: time="2023-10-02T20:49:09.316181506Z" level=info msg="cleaning up dead shim" Oct 2 20:49:09.325899 env[1130]: time="2023-10-02T20:49:09.325835967Z" level=info msg="CreateContainer within sandbox \"6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\"" Oct 2 20:49:09.327023 env[1130]: time="2023-10-02T20:49:09.326977909Z" level=info msg="StartContainer for \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\"" Oct 2 20:49:09.330190 env[1130]: time="2023-10-02T20:49:09.330125327Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:49:09Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2418 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:49:09Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:49:09.330749 env[1130]: time="2023-10-02T20:49:09.330659348Z" level=error msg="copy shim log" error="read /proc/self/fd/46: file already closed" Oct 2 20:49:09.332338 env[1130]: time="2023-10-02T20:49:09.332283560Z" level=error msg="Failed to pipe stdout of container \"ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0\"" error="reading from a closed fifo" Oct 2 20:49:09.332595 env[1130]: 
time="2023-10-02T20:49:09.332529751Z" level=error msg="Failed to pipe stderr of container \"ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0\"" error="reading from a closed fifo" Oct 2 20:49:09.335667 env[1130]: time="2023-10-02T20:49:09.335616104Z" level=error msg="StartContainer for \"ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:49:09.336146 kubelet[1523]: E1002 20:49:09.336105 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0" Oct 2 20:49:09.336806 kubelet[1523]: E1002 20:49:09.336773 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:49:09.336806 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:49:09.336806 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 20:49:09.336806 kubelet[1523]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5kb7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:49:09.337071 kubelet[1523]: E1002 20:49:09.336844 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create 
failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lvm8b" podUID=f6c92125-d382-4849-aa97-42e67f5f17b0 Oct 2 20:49:09.354360 systemd[1]: Started cri-containerd-32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa.scope. Oct 2 20:49:09.373000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.373000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.373000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.373000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.373000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.373000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.373000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.373000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.373000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.373000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.373000 audit: BPF prog-id=96 op=LOAD Oct 2 20:49:09.374000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.374000 audit[2438]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000149c48 a2=10 a3=1c items=0 ppid=2274 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:09.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653666616363343233323234323861613664663233336330623665 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 
audit[2438]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001496b0 a2=3c a3=8 items=0 ppid=2274 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:09.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653666616363343233323234323861613664663233336330623665 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit: BPF prog-id=97 op=LOAD Oct 2 20:49:09.375000 audit[2438]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001499d8 a2=78 a3=c0003cc0b0 items=0 ppid=2274 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:09.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653666616363343233323234323861613664663233336330623665 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit: BPF prog-id=98 op=LOAD Oct 2 20:49:09.375000 audit[2438]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000149770 a2=78 a3=c0003cc0f8 items=0 ppid=2274 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:09.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653666616363343233323234323861613664663233336330623665 Oct 2 20:49:09.375000 audit: BPF prog-id=98 op=UNLOAD Oct 2 20:49:09.375000 audit: BPF prog-id=97 op=UNLOAD Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { perfmon } for pid=2438 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit[2438]: AVC avc: denied { bpf } for pid=2438 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 20:49:09.375000 audit: BPF prog-id=99 op=LOAD Oct 2 20:49:09.375000 audit[2438]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000149c30 a2=78 a3=c0003cc508 items=0 ppid=2274 pid=2438 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 20:49:09.375000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3332653666616363343233323234323861613664663233336330623665 Oct 2 20:49:09.399887 env[1130]: time="2023-10-02T20:49:09.399826198Z" level=info msg="StartContainer for \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\" returns successfully" Oct 2 20:49:09.428000 audit[2449]: AVC avc: denied { map_create } for pid=2449 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c164,c414 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c164,c414 tclass=bpf permissive=0 Oct 2 20:49:09.428000 audit[2449]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c0004e57d0 a2=48 a3=c0004e57c0 items=0 ppid=2274 pid=2449 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c164,c414 key=(null) Oct 2 20:49:09.428000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 20:49:09.472753 kubelet[1523]: E1002 20:49:09.472681 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:09.865406 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0-rootfs.mount: Deactivated successfully. 
Oct 2 20:49:10.152857 kubelet[1523]: I1002 20:49:10.152705 1523 scope.go:115] "RemoveContainer" containerID="a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0" Oct 2 20:49:10.153420 kubelet[1523]: I1002 20:49:10.153393 1523 scope.go:115] "RemoveContainer" containerID="a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0" Oct 2 20:49:10.155583 env[1130]: time="2023-10-02T20:49:10.155537384Z" level=info msg="RemoveContainer for \"a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0\"" Oct 2 20:49:10.157222 env[1130]: time="2023-10-02T20:49:10.157178270Z" level=info msg="RemoveContainer for \"a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0\"" Oct 2 20:49:10.157598 env[1130]: time="2023-10-02T20:49:10.157523339Z" level=error msg="RemoveContainer for \"a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0\" failed" error="failed to set removing state for container \"a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0\": container is already in removing state" Oct 2 20:49:10.157914 kubelet[1523]: E1002 20:49:10.157866 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0\": container is already in removing state" containerID="a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0" Oct 2 20:49:10.157914 kubelet[1523]: E1002 20:49:10.157918 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0": container is already in removing state; Skipping pod "cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0)" Oct 2 20:49:10.158388 kubelet[1523]: E1002 20:49:10.158344 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0)\"" pod="kube-system/cilium-lvm8b" podUID=f6c92125-d382-4849-aa97-42e67f5f17b0 Oct 2 20:49:10.160932 env[1130]: time="2023-10-02T20:49:10.160869396Z" level=info msg="RemoveContainer for \"a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0\" returns successfully" Oct 2 20:49:10.473585 kubelet[1523]: E1002 20:49:10.473524 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:11.157294 kubelet[1523]: E1002 20:49:11.157252 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0)\"" pod="kube-system/cilium-lvm8b" podUID=f6c92125-d382-4849-aa97-42e67f5f17b0 Oct 2 20:49:11.474227 kubelet[1523]: E1002 20:49:11.474158 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:11.987952 kubelet[1523]: W1002 20:49:11.987899 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6c92125_d382_4849_aa97_42e67f5f17b0.slice/cri-containerd-a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0.scope WatchSource:0}: container 
"a654f8512e58b56b2999b90251c232361ac45299cfb319f39c74362fc2cc45f0" in namespace "k8s.io": not found Oct 2 20:49:12.320123 kubelet[1523]: E1002 20:49:12.319958 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:12.475149 kubelet[1523]: E1002 20:49:12.475051 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:12.475779 kubelet[1523]: E1002 20:49:12.475617 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:49:13.475346 kubelet[1523]: E1002 20:49:13.475269 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:14.475790 kubelet[1523]: E1002 20:49:14.475707 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:15.097612 kubelet[1523]: W1002 20:49:15.097558 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6c92125_d382_4849_aa97_42e67f5f17b0.slice/cri-containerd-ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0.scope WatchSource:0}: task ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0 not found: not found Oct 2 20:49:15.476922 kubelet[1523]: E1002 20:49:15.476843 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:16.477523 kubelet[1523]: E1002 20:49:16.477449 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:17.477294 kubelet[1523]: E1002 20:49:17.477225 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:49:17.478404 kubelet[1523]: E1002 20:49:17.478370 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:18.479271 kubelet[1523]: E1002 20:49:18.479200 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:19.480055 kubelet[1523]: E1002 20:49:19.479980 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:20.481062 kubelet[1523]: E1002 20:49:20.480991 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:21.481489 kubelet[1523]: E1002 20:49:21.481419 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:22.478133 kubelet[1523]: E1002 20:49:22.478085 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:49:22.482330 kubelet[1523]: E1002 20:49:22.482288 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:23.482957 kubelet[1523]: E1002 20:49:23.482885 1523 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:23.603163 env[1130]: time="2023-10-02T20:49:23.603103010Z" level=info msg="CreateContainer within sandbox \"86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 20:49:23.619025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3612884119.mount: Deactivated successfully. Oct 2 20:49:23.632380 env[1130]: time="2023-10-02T20:49:23.632303252Z" level=info msg="CreateContainer within sandbox \"86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71\"" Oct 2 20:49:23.633698 env[1130]: time="2023-10-02T20:49:23.633641830Z" level=info msg="StartContainer for \"d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71\"" Oct 2 20:49:23.664803 systemd[1]: Started cri-containerd-d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71.scope. Oct 2 20:49:23.678911 systemd[1]: cri-containerd-d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71.scope: Deactivated successfully. Oct 2 20:49:23.704827 env[1130]: time="2023-10-02T20:49:23.704752547Z" level=info msg="shim disconnected" id=d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71 Oct 2 20:49:23.704827 env[1130]: time="2023-10-02T20:49:23.704830658Z" level=warning msg="cleaning up after shim disconnected" id=d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71 namespace=k8s.io Oct 2 20:49:23.704827 env[1130]: time="2023-10-02T20:49:23.704845765Z" level=info msg="cleaning up dead shim" Oct 2 20:49:23.716815 env[1130]: time="2023-10-02T20:49:23.716703933Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:49:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2492 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:49:23Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:49:23.717167 env[1130]: time="2023-10-02T20:49:23.717087948Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 20:49:23.717467 env[1130]: time="2023-10-02T20:49:23.717407781Z" level=error msg="Failed to pipe stdout of container \"d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71\"" error="reading from a closed fifo" Oct 2 20:49:23.717574 env[1130]: time="2023-10-02T20:49:23.717481064Z" level=error msg="Failed to pipe stderr of container \"d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71\"" error="reading from a closed fifo" Oct 2 20:49:23.719571 env[1130]: time="2023-10-02T20:49:23.719515118Z" level=error msg="StartContainer for \"d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:49:23.719886 kubelet[1523]: E1002 20:49:23.719859 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start 
container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71" Oct 2 20:49:23.720057 kubelet[1523]: E1002 20:49:23.719999 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:49:23.720057 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:49:23.720057 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 20:49:23.720057 kubelet[1523]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5kb7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:49:23.720327 kubelet[1523]: E1002 20:49:23.720061 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lvm8b" podUID=f6c92125-d382-4849-aa97-42e67f5f17b0 Oct 2 20:49:24.183387 kubelet[1523]: I1002 20:49:24.183348 1523 scope.go:115] "RemoveContainer" containerID="ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0" Oct 2 20:49:24.183896 kubelet[1523]: I1002 20:49:24.183868 1523 scope.go:115] "RemoveContainer" containerID="ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0" Oct 2 20:49:24.185761 env[1130]: time="2023-10-02T20:49:24.185466634Z" level=info msg="RemoveContainer for \"ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0\"" Oct 2 20:49:24.186384 env[1130]: time="2023-10-02T20:49:24.186343573Z" level=info msg="RemoveContainer for \"ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0\"" 
Oct 2 20:49:24.186506 env[1130]: time="2023-10-02T20:49:24.186448436Z" level=error msg="RemoveContainer for \"ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0\" failed" error="failed to set removing state for container \"ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0\": container is already in removing state" Oct 2 20:49:24.186755 kubelet[1523]: E1002 20:49:24.186685 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0\": container is already in removing state" containerID="ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0" Oct 2 20:49:24.186913 kubelet[1523]: E1002 20:49:24.186889 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0": container is already in removing state; Skipping pod "cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0)" Oct 2 20:49:24.187342 kubelet[1523]: E1002 20:49:24.187318 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0)\"" pod="kube-system/cilium-lvm8b" podUID=f6c92125-d382-4849-aa97-42e67f5f17b0 Oct 2 20:49:24.196364 env[1130]: time="2023-10-02T20:49:24.196310281Z" level=info msg="RemoveContainer for \"ac28a6e356560b4f8e3b40b77d4d796c631e50bb11ef56c0d2f444e8d93290b0\" returns successfully" Oct 2 20:49:24.483851 kubelet[1523]: E1002 20:49:24.483797 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:24.614382 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71-rootfs.mount: Deactivated successfully. 
Oct 2 20:49:25.484999 kubelet[1523]: E1002 20:49:25.484938 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:26.485764 kubelet[1523]: E1002 20:49:26.485698 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:26.811828 kubelet[1523]: W1002 20:49:26.811464 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6c92125_d382_4849_aa97_42e67f5f17b0.slice/cri-containerd-d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71.scope WatchSource:0}: task d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71 not found: not found Oct 2 20:49:27.479912 kubelet[1523]: E1002 20:49:27.479860 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:49:27.486240 kubelet[1523]: E1002 20:49:27.486162 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:28.486750 kubelet[1523]: E1002 20:49:28.486685 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:29.487394 kubelet[1523]: E1002 20:49:29.487312 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:30.488553 kubelet[1523]: E1002 20:49:30.488476 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:31.488673 kubelet[1523]: E1002 20:49:31.488620 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:32.319798 kubelet[1523]: E1002 20:49:32.319743 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:32.354292 env[1130]: time="2023-10-02T20:49:32.354217772Z" level=info msg="StopPodSandbox for \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\"" Oct 2 20:49:32.354821 env[1130]: time="2023-10-02T20:49:32.354356203Z" level=info msg="TearDown network for sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" successfully" Oct 2 20:49:32.354821 env[1130]: time="2023-10-02T20:49:32.354408188Z" level=info msg="StopPodSandbox for \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" returns successfully" Oct 2 20:49:32.357034 env[1130]: time="2023-10-02T20:49:32.355407796Z" level=info msg="RemovePodSandbox for \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\"" Oct 2 20:49:32.357034 env[1130]: time="2023-10-02T20:49:32.355468693Z" level=info msg="Forcibly stopping sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\"" Oct 2 20:49:32.357034 env[1130]: time="2023-10-02T20:49:32.355597766Z" level=info msg="TearDown network for sandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" successfully" Oct 2 20:49:32.359829 env[1130]: time="2023-10-02T20:49:32.359714298Z" level=info msg="RemovePodSandbox \"10ef2a74122061892e29d61dbfaac0acabd2f075958e55c66900bc0e2e9769a5\" returns successfully" Oct 2 20:49:32.360465 env[1130]: time="2023-10-02T20:49:32.360409767Z" level=info msg="StopPodSandbox for 
\"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\"" Oct 2 20:49:32.360604 env[1130]: time="2023-10-02T20:49:32.360510672Z" level=info msg="TearDown network for sandbox \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\" successfully" Oct 2 20:49:32.360604 env[1130]: time="2023-10-02T20:49:32.360562351Z" level=info msg="StopPodSandbox for \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\" returns successfully" Oct 2 20:49:32.361096 env[1130]: time="2023-10-02T20:49:32.361057964Z" level=info msg="RemovePodSandbox for \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\"" Oct 2 20:49:32.361221 env[1130]: time="2023-10-02T20:49:32.361100265Z" level=info msg="Forcibly stopping sandbox \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\"" Oct 2 20:49:32.361221 env[1130]: time="2023-10-02T20:49:32.361196176Z" level=info msg="TearDown network for sandbox \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\" successfully" Oct 2 20:49:32.364761 env[1130]: time="2023-10-02T20:49:32.364702588Z" level=info msg="RemovePodSandbox \"c057c220c977b94445753f5c65e41df10f739b1e689670613545f9a6d434db68\" returns successfully" Oct 2 20:49:32.481291 kubelet[1523]: E1002 20:49:32.481247 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:49:32.489604 kubelet[1523]: E1002 20:49:32.489537 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:33.490873 kubelet[1523]: E1002 20:49:33.490801 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:34.492077 kubelet[1523]: E1002 20:49:34.492007 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:35.492771 kubelet[1523]: E1002 20:49:35.492699 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:35.600671 kubelet[1523]: E1002 20:49:35.600618 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0)\"" pod="kube-system/cilium-lvm8b" podUID=f6c92125-d382-4849-aa97-42e67f5f17b0 Oct 2 20:49:36.493890 kubelet[1523]: E1002 20:49:36.493810 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:37.482289 kubelet[1523]: E1002 20:49:37.482229 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:49:37.494660 kubelet[1523]: E1002 20:49:37.494603 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:38.495100 kubelet[1523]: E1002 20:49:38.495010 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:39.496259 kubelet[1523]: E1002 20:49:39.496188 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
20:49:40.496804 kubelet[1523]: E1002 20:49:40.496750 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:41.497876 kubelet[1523]: E1002 20:49:41.497814 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:42.483455 kubelet[1523]: E1002 20:49:42.483414 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:49:42.498753 kubelet[1523]: E1002 20:49:42.498676 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:43.499304 kubelet[1523]: E1002 20:49:43.499234 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:44.499635 kubelet[1523]: E1002 20:49:44.499563 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:45.500592 kubelet[1523]: E1002 20:49:45.500518 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:46.501261 kubelet[1523]: E1002 20:49:46.501191 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:47.484437 kubelet[1523]: E1002 20:49:47.484390 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:49:47.501899 kubelet[1523]: E1002 20:49:47.501830 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:47.603524 env[1130]: time="2023-10-02T20:49:47.603455561Z" level=info msg="CreateContainer within sandbox \"86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 20:49:47.618703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014975405.mount: Deactivated successfully. Oct 2 20:49:47.628100 env[1130]: time="2023-10-02T20:49:47.628041293Z" level=info msg="CreateContainer within sandbox \"86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815\"" Oct 2 20:49:47.628830 env[1130]: time="2023-10-02T20:49:47.628793186Z" level=info msg="StartContainer for \"0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815\"" Oct 2 20:49:47.660131 systemd[1]: Started cri-containerd-0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815.scope. Oct 2 20:49:47.676701 systemd[1]: cri-containerd-0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815.scope: Deactivated successfully. 
Oct 2 20:49:47.694700 env[1130]: time="2023-10-02T20:49:47.694603456Z" level=info msg="shim disconnected" id=0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815 Oct 2 20:49:47.694700 env[1130]: time="2023-10-02T20:49:47.694679645Z" level=warning msg="cleaning up after shim disconnected" id=0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815 namespace=k8s.io Oct 2 20:49:47.694700 env[1130]: time="2023-10-02T20:49:47.694697209Z" level=info msg="cleaning up dead shim" Oct 2 20:49:47.707133 env[1130]: time="2023-10-02T20:49:47.707064155Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:49:47Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2536 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T20:49:47Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 20:49:47.707490 env[1130]: time="2023-10-02T20:49:47.707399959Z" level=error msg="copy shim log" error="read /proc/self/fd/38: file already closed" Oct 2 20:49:47.710551 env[1130]: time="2023-10-02T20:49:47.710500877Z" level=error msg="Failed to pipe stderr of container \"0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815\"" error="reading from a closed fifo" Oct 2 20:49:47.710551 env[1130]: time="2023-10-02T20:49:47.710492957Z" level=error msg="Failed to pipe stdout of container \"0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815\"" error="reading from a closed fifo" Oct 2 20:49:47.712996 env[1130]: time="2023-10-02T20:49:47.712935854Z" level=error msg="StartContainer for \"0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 20:49:47.713293 kubelet[1523]: E1002 20:49:47.713267 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815" Oct 2 20:49:47.713454 kubelet[1523]: E1002 20:49:47.713422 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 20:49:47.713454 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 20:49:47.713454 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 20:49:47.713454 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-5kb7q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 20:49:47.713756 kubelet[1523]: E1002 20:49:47.713476 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-lvm8b" podUID=f6c92125-d382-4849-aa97-42e67f5f17b0 Oct 2 20:49:48.232237 kubelet[1523]: I1002 20:49:48.232086 1523 scope.go:115] "RemoveContainer" containerID="d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71" Oct 2 20:49:48.232522 kubelet[1523]: I1002 20:49:48.232499 1523 scope.go:115] "RemoveContainer" containerID="d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71" Oct 2 20:49:48.234100 env[1130]: time="2023-10-02T20:49:48.234051357Z" level=info msg="RemoveContainer for \"d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71\"" Oct 2 20:49:48.234943 env[1130]: time="2023-10-02T20:49:48.234905623Z" level=info msg="RemoveContainer for \"d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71\"" Oct 2 20:49:48.235254 env[1130]: time="2023-10-02T20:49:48.235208300Z" level=error msg="RemoveContainer for \"d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71\" failed" error="failed to set removing state for container \"d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71\": container is already in removing state" Oct 2 20:49:48.235596 kubelet[1523]: E1002 20:49:48.235564 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71\": container is already in removing state" 
containerID="d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71" Oct 2 20:49:48.235713 kubelet[1523]: E1002 20:49:48.235625 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71": container is already in removing state; Skipping pod "cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0)" Oct 2 20:49:48.236357 kubelet[1523]: E1002 20:49:48.236331 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0)\"" pod="kube-system/cilium-lvm8b" podUID=f6c92125-d382-4849-aa97-42e67f5f17b0 Oct 2 20:49:48.238800 env[1130]: time="2023-10-02T20:49:48.238759537Z" level=info msg="RemoveContainer for \"d83fc4ab5859eff4a8b1d597a4f1357c3d20bbb078e2da37a6f78046b93d3a71\" returns successfully" Oct 2 20:49:48.502531 kubelet[1523]: E1002 20:49:48.502366 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:48.614739 systemd[1]: run-containerd-runc-k8s.io-0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815-runc.tZs3FG.mount: Deactivated successfully. Oct 2 20:49:48.614889 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815-rootfs.mount: Deactivated successfully. Oct 2 20:49:49.502604 kubelet[1523]: E1002 20:49:49.502541 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:50.503345 kubelet[1523]: E1002 20:49:50.503279 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:50.801881 kubelet[1523]: W1002 20:49:50.801652 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6c92125_d382_4849_aa97_42e67f5f17b0.slice/cri-containerd-0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815.scope WatchSource:0}: task 0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815 not found: not found Oct 2 20:49:51.503867 kubelet[1523]: E1002 20:49:51.503798 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:52.320441 kubelet[1523]: E1002 20:49:52.320365 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:52.485684 kubelet[1523]: E1002 20:49:52.485651 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:49:52.505018 kubelet[1523]: E1002 20:49:52.504956 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:53.505465 kubelet[1523]: E1002 20:49:53.505386 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:54.506476 kubelet[1523]: E1002 20:49:54.506398 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 
2 20:49:54.799143 update_engine[1122]: I1002 20:49:54.798786 1122 prefs.cc:51] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 2 20:49:54.799143 update_engine[1122]: I1002 20:49:54.798848 1122 prefs.cc:51] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 2 20:49:54.800151 update_engine[1122]: I1002 20:49:54.800096 1122 prefs.cc:51] aleph-version not present in /var/lib/update_engine/prefs Oct 2 20:49:54.800720 update_engine[1122]: I1002 20:49:54.800675 1122 omaha_request_params.cc:62] Current group set to lts Oct 2 20:49:54.801137 update_engine[1122]: I1002 20:49:54.800916 1122 update_attempter.cc:495] Already updated boot flags. Skipping. Oct 2 20:49:54.801137 update_engine[1122]: I1002 20:49:54.800935 1122 update_attempter.cc:638] Scheduling an action processor start. Oct 2 20:49:54.801137 update_engine[1122]: I1002 20:49:54.800958 1122 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 2 20:49:54.801137 update_engine[1122]: I1002 20:49:54.801004 1122 prefs.cc:51] previous-version not present in /var/lib/update_engine/prefs Oct 2 20:49:54.801137 update_engine[1122]: I1002 20:49:54.801084 1122 omaha_request_action.cc:268] Posting an Omaha request to https://public.update.flatcar-linux.net/v1/update/ Oct 2 20:49:54.801137 update_engine[1122]: I1002 20:49:54.801091 1122 omaha_request_action.cc:269] Request: Oct 2 20:49:54.801137 update_engine[1122]: Oct 2 20:49:54.801137 update_engine[1122]: Oct 2 20:49:54.801137 update_engine[1122]: Oct 2 20:49:54.801137 update_engine[1122]: Oct 2 20:49:54.801137 update_engine[1122]: Oct 2 20:49:54.801137 update_engine[1122]: Oct 2 20:49:54.801137 update_engine[1122]: Oct 2 20:49:54.801137 update_engine[1122]: Oct 2 20:49:54.801137 update_engine[1122]: I1002 20:49:54.801100 1122 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 2 20:49:54.802945 locksmithd[1166]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 2 20:49:54.803259 update_engine[1122]: I1002 20:49:54.802948 1122 libcurl_http_fetcher.cc:174] Setting up curl options for HTTPS Oct 2 20:49:54.803259 update_engine[1122]: I1002 20:49:54.803200 1122 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 2 20:49:55.507432 kubelet[1523]: E1002 20:49:55.507360 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:55.949663 update_engine[1122]: I1002 20:49:55.949254 1122 prefs.cc:51] update-server-cert-0-2 not present in /var/lib/update_engine/prefs Oct 2 20:49:55.949663 update_engine[1122]: I1002 20:49:55.949580 1122 prefs.cc:51] update-server-cert-0-1 not present in /var/lib/update_engine/prefs Oct 2 20:49:55.950254 update_engine[1122]: I1002 20:49:55.949826 1122 prefs.cc:51] update-server-cert-0-0 not present in /var/lib/update_engine/prefs Oct 2 20:49:56.217675 update_engine[1122]: I1002 20:49:56.217595 1122 libcurl_http_fetcher.cc:263] HTTP response code: 200 Oct 2 20:49:56.219217 update_engine[1122]: I1002 20:49:56.219166 1122 libcurl_http_fetcher.cc:320] Transfer completed (200), 314 bytes downloaded Oct 2 20:49:56.219217 update_engine[1122]: I1002 20:49:56.219192 1122 omaha_request_action.cc:619] Omaha request response: Oct 2 20:49:56.219217 update_engine[1122]: Oct 2 20:49:56.228706 update_engine[1122]: I1002 20:49:56.228648 1122 omaha_request_action.cc:409] No update. 
Oct 2 20:49:56.228706 update_engine[1122]: I1002 20:49:56.228689 1122 action_processor.cc:82] ActionProcessor::ActionComplete: finished OmahaRequestAction, starting OmahaResponseHandlerAction Oct 2 20:49:56.228706 update_engine[1122]: I1002 20:49:56.228698 1122 omaha_response_handler_action.cc:36] There are no updates. Aborting. Oct 2 20:49:56.228706 update_engine[1122]: I1002 20:49:56.228706 1122 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaResponseHandlerAction action failed. Aborting processing. Oct 2 20:49:56.228706 update_engine[1122]: I1002 20:49:56.228711 1122 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaResponseHandlerAction Oct 2 20:49:56.229145 update_engine[1122]: I1002 20:49:56.228718 1122 update_attempter.cc:302] Processing Done. Oct 2 20:49:56.229145 update_engine[1122]: I1002 20:49:56.228757 1122 update_attempter.cc:338] No update. Oct 2 20:49:56.229145 update_engine[1122]: I1002 20:49:56.228775 1122 update_check_scheduler.cc:74] Next update check in 47m19s Oct 2 20:49:56.229292 locksmithd[1166]: LastCheckedTime=1696279796 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 2 20:49:56.508245 kubelet[1523]: E1002 20:49:56.508054 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:57.487337 kubelet[1523]: E1002 20:49:57.487287 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:49:57.508638 kubelet[1523]: E1002 20:49:57.508571 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:58.509036 kubelet[1523]: E1002 20:49:58.508968 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:49:59.509464 kubelet[1523]: E1002 20:49:59.509396 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:00.509656 kubelet[1523]: E1002 20:50:00.509607 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:00.601062 kubelet[1523]: E1002 20:50:00.601011 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-lvm8b_kube-system(f6c92125-d382-4849-aa97-42e67f5f17b0)\"" pod="kube-system/cilium-lvm8b" podUID=f6c92125-d382-4849-aa97-42e67f5f17b0 Oct 2 20:50:01.510041 kubelet[1523]: E1002 20:50:01.509968 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:02.488908 kubelet[1523]: E1002 20:50:02.488867 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:50:02.511246 kubelet[1523]: E1002 20:50:02.511173 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:03.511896 kubelet[1523]: E1002 20:50:03.511826 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:04.512287 
kubelet[1523]: E1002 20:50:04.512215 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:05.512413 kubelet[1523]: E1002 20:50:05.512351 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:06.513000 kubelet[1523]: E1002 20:50:06.512943 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:07.490478 kubelet[1523]: E1002 20:50:07.490441 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:50:07.513467 kubelet[1523]: E1002 20:50:07.513419 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:08.352372 env[1130]: time="2023-10-02T20:50:08.352268590Z" level=info msg="StopPodSandbox for \"86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0\"" Oct 2 20:50:08.352372 env[1130]: time="2023-10-02T20:50:08.352360053Z" level=info msg="Container to stop \"0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:50:08.355901 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0-shm.mount: Deactivated successfully. Oct 2 20:50:08.364000 audit: BPF prog-id=92 op=UNLOAD Oct 2 20:50:08.365225 systemd[1]: cri-containerd-86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0.scope: Deactivated successfully. Oct 2 20:50:08.371893 kernel: kauditd_printk_skb: 164 callbacks suppressed Oct 2 20:50:08.372067 kernel: audit: type=1334 audit(1696279808.364:809): prog-id=92 op=UNLOAD Oct 2 20:50:08.380000 audit: BPF prog-id=95 op=UNLOAD Oct 2 20:50:08.388789 kernel: audit: type=1334 audit(1696279808.380:810): prog-id=95 op=UNLOAD Oct 2 20:50:08.405166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0-rootfs.mount: Deactivated successfully. 
Oct 2 20:50:08.413867 env[1130]: time="2023-10-02T20:50:08.413793554Z" level=info msg="shim disconnected" id=86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0 Oct 2 20:50:08.414153 env[1130]: time="2023-10-02T20:50:08.413875284Z" level=warning msg="cleaning up after shim disconnected" id=86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0 namespace=k8s.io Oct 2 20:50:08.414153 env[1130]: time="2023-10-02T20:50:08.413891009Z" level=info msg="cleaning up dead shim" Oct 2 20:50:08.424523 env[1130]: time="2023-10-02T20:50:08.424470616Z" level=info msg="StopContainer for \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\" with timeout 30 (s)" Oct 2 20:50:08.425019 env[1130]: time="2023-10-02T20:50:08.424980596Z" level=info msg="Stop container \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\" with signal terminated" Oct 2 20:50:08.430077 env[1130]: time="2023-10-02T20:50:08.430033003Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:50:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2571 runtime=io.containerd.runc.v2\n" Oct 2 20:50:08.430549 env[1130]: time="2023-10-02T20:50:08.430500083Z" level=info msg="TearDown network for sandbox \"86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0\" successfully" Oct 2 20:50:08.430549 env[1130]: time="2023-10-02T20:50:08.430531551Z" level=info msg="StopPodSandbox for \"86535de7941225907e491b950ffcc6f18ba307e7a9f51cc5fea4c7e92bba5fa0\" returns successfully" Oct 2 20:50:08.441471 systemd[1]: cri-containerd-32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa.scope: Deactivated successfully. Oct 2 20:50:08.441000 audit: BPF prog-id=96 op=UNLOAD Oct 2 20:50:08.449776 kernel: audit: type=1334 audit(1696279808.441:811): prog-id=96 op=UNLOAD Oct 2 20:50:08.450000 audit: BPF prog-id=99 op=UNLOAD Oct 2 20:50:08.458761 kernel: audit: type=1334 audit(1696279808.450:812): prog-id=99 op=UNLOAD Oct 2 20:50:08.473539 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa-rootfs.mount: Deactivated successfully. 
Oct 2 20:50:08.479346 env[1130]: time="2023-10-02T20:50:08.479281371Z" level=info msg="shim disconnected" id=32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa Oct 2 20:50:08.479346 env[1130]: time="2023-10-02T20:50:08.479337718Z" level=warning msg="cleaning up after shim disconnected" id=32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa namespace=k8s.io Oct 2 20:50:08.479680 env[1130]: time="2023-10-02T20:50:08.479353347Z" level=info msg="cleaning up dead shim" Oct 2 20:50:08.491283 env[1130]: time="2023-10-02T20:50:08.491220122Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:50:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2602 runtime=io.containerd.runc.v2\n" Oct 2 20:50:08.494052 env[1130]: time="2023-10-02T20:50:08.493986626Z" level=info msg="StopContainer for \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\" returns successfully" Oct 2 20:50:08.494777 env[1130]: time="2023-10-02T20:50:08.494717150Z" level=info msg="StopPodSandbox for \"6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c\"" Oct 2 20:50:08.500466 env[1130]: time="2023-10-02T20:50:08.494815628Z" level=info msg="Container to stop \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 20:50:08.497226 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c-shm.mount: Deactivated successfully. Oct 2 20:50:08.504686 systemd[1]: cri-containerd-6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c.scope: Deactivated successfully. Oct 2 20:50:08.504000 audit: BPF prog-id=88 op=UNLOAD Oct 2 20:50:08.513193 kubelet[1523]: I1002 20:50:08.513149 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-lib-modules\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.513362 kubelet[1523]: I1002 20:50:08.513225 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-etc-cni-netd\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.513362 kubelet[1523]: I1002 20:50:08.513290 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5kb7q\" (UniqueName: \"kubernetes.io/projected/f6c92125-d382-4849-aa97-42e67f5f17b0-kube-api-access-5kb7q\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.513362 kubelet[1523]: I1002 20:50:08.513327 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-bpf-maps\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.513560 kubelet[1523]: I1002 20:50:08.513372 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-cni-path\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.513560 kubelet[1523]: I1002 20:50:08.513401 1523 reconciler.go:211] "operationExecutor.UnmountVolume 
started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-run\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.513560 kubelet[1523]: I1002 20:50:08.513458 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-config-path\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.513560 kubelet[1523]: I1002 20:50:08.513500 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-ipsec-secrets\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.513560 kubelet[1523]: I1002 20:50:08.513552 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-hostproc\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.514216 kernel: audit: type=1334 audit(1696279808.504:813): prog-id=88 op=UNLOAD Oct 2 20:50:08.514293 kubelet[1523]: I1002 20:50:08.513583 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-cgroup\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.514293 kubelet[1523]: I1002 20:50:08.513634 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-host-proc-sys-kernel\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.514293 kubelet[1523]: I1002 20:50:08.513673 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6c92125-d382-4849-aa97-42e67f5f17b0-clustermesh-secrets\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.514293 kubelet[1523]: I1002 20:50:08.513745 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-xtables-lock\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.514293 kubelet[1523]: I1002 20:50:08.513786 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6c92125-d382-4849-aa97-42e67f5f17b0-hubble-tls\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.514293 kubelet[1523]: I1002 20:50:08.513837 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-host-proc-sys-net\") pod \"f6c92125-d382-4849-aa97-42e67f5f17b0\" (UID: \"f6c92125-d382-4849-aa97-42e67f5f17b0\") " Oct 2 20:50:08.514648 kubelet[1523]: I1002 20:50:08.513940 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:50:08.514648 kubelet[1523]: I1002 20:50:08.514004 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:50:08.514648 kubelet[1523]: I1002 20:50:08.514033 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:50:08.515000 audit: BPF prog-id=91 op=UNLOAD Oct 2 20:50:08.516933 kubelet[1523]: I1002 20:50:08.514891 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-hostproc" (OuterVolumeSpecName: "hostproc") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:50:08.516933 kubelet[1523]: E1002 20:50:08.514958 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:08.516933 kubelet[1523]: I1002 20:50:08.514996 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:50:08.516933 kubelet[1523]: I1002 20:50:08.515027 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-cni-path" (OuterVolumeSpecName: "cni-path") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:50:08.516933 kubelet[1523]: I1002 20:50:08.515054 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:50:08.516933 kubelet[1523]: W1002 20:50:08.515262 1523 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/f6c92125-d382-4849-aa97-42e67f5f17b0/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:50:08.519745 kubelet[1523]: I1002 20:50:08.517362 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:50:08.519745 kubelet[1523]: I1002 20:50:08.517420 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:50:08.519745 kubelet[1523]: I1002 20:50:08.519291 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:50:08.519745 kubelet[1523]: I1002 20:50:08.519604 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 20:50:08.523775 kernel: audit: type=1334 audit(1696279808.515:814): prog-id=91 op=UNLOAD Oct 2 20:50:08.531224 systemd[1]: var-lib-kubelet-pods-f6c92125\x2dd382\x2d4849\x2daa97\x2d42e67f5f17b0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 20:50:08.534035 kubelet[1523]: I1002 20:50:08.533984 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c92125-d382-4849-aa97-42e67f5f17b0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:50:08.534170 kubelet[1523]: I1002 20:50:08.534120 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c92125-d382-4849-aa97-42e67f5f17b0-kube-api-access-5kb7q" (OuterVolumeSpecName: "kube-api-access-5kb7q") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "kube-api-access-5kb7q". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:50:08.538318 kubelet[1523]: I1002 20:50:08.538276 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 20:50:08.539918 kubelet[1523]: I1002 20:50:08.539848 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6c92125-d382-4849-aa97-42e67f5f17b0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f6c92125-d382-4849-aa97-42e67f5f17b0" (UID: "f6c92125-d382-4849-aa97-42e67f5f17b0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:50:08.562604 env[1130]: time="2023-10-02T20:50:08.562524642Z" level=info msg="shim disconnected" id=6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c Oct 2 20:50:08.562604 env[1130]: time="2023-10-02T20:50:08.562590398Z" level=warning msg="cleaning up after shim disconnected" id=6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c namespace=k8s.io Oct 2 20:50:08.562604 env[1130]: time="2023-10-02T20:50:08.562606331Z" level=info msg="cleaning up dead shim" Oct 2 20:50:08.574913 env[1130]: time="2023-10-02T20:50:08.574855335Z" level=warning msg="cleanup warnings time=\"2023-10-02T20:50:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2638 runtime=io.containerd.runc.v2\n" Oct 2 20:50:08.575327 env[1130]: time="2023-10-02T20:50:08.575284846Z" level=info msg="TearDown network for sandbox \"6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c\" successfully" Oct 2 20:50:08.575461 env[1130]: time="2023-10-02T20:50:08.575323405Z" level=info msg="StopPodSandbox for \"6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c\" returns successfully" Oct 2 20:50:08.610230 systemd[1]: Removed slice kubepods-burstable-podf6c92125_d382_4849_aa97_42e67f5f17b0.slice. 
Oct 2 20:50:08.614411 kubelet[1523]: I1002 20:50:08.614118 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk46t\" (UniqueName: \"kubernetes.io/projected/348e0949-636b-42d3-8fc0-4cd87cc33691-kube-api-access-fk46t\") pod \"348e0949-636b-42d3-8fc0-4cd87cc33691\" (UID: \"348e0949-636b-42d3-8fc0-4cd87cc33691\") " Oct 2 20:50:08.614411 kubelet[1523]: I1002 20:50:08.614186 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/348e0949-636b-42d3-8fc0-4cd87cc33691-cilium-config-path\") pod \"348e0949-636b-42d3-8fc0-4cd87cc33691\" (UID: \"348e0949-636b-42d3-8fc0-4cd87cc33691\") " Oct 2 20:50:08.614411 kubelet[1523]: I1002 20:50:08.614229 1523 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-host-proc-sys-kernel\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.614411 kubelet[1523]: I1002 20:50:08.614266 1523 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f6c92125-d382-4849-aa97-42e67f5f17b0-clustermesh-secrets\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.614411 kubelet[1523]: I1002 20:50:08.614288 1523 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-xtables-lock\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.614411 kubelet[1523]: I1002 20:50:08.614304 1523 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f6c92125-d382-4849-aa97-42e67f5f17b0-hubble-tls\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.614411 kubelet[1523]: I1002 20:50:08.614321 1523 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-host-proc-sys-net\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.614411 kubelet[1523]: I1002 20:50:08.614356 1523 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-lib-modules\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.615066 kubelet[1523]: I1002 20:50:08.614373 1523 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-etc-cni-netd\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.615066 kubelet[1523]: I1002 20:50:08.614391 1523 reconciler.go:399] "Volume detached for volume \"kube-api-access-5kb7q\" (UniqueName: \"kubernetes.io/projected/f6c92125-d382-4849-aa97-42e67f5f17b0-kube-api-access-5kb7q\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.615066 kubelet[1523]: I1002 20:50:08.614423 1523 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-bpf-maps\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.615066 kubelet[1523]: I1002 20:50:08.614441 1523 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-cni-path\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.615066 kubelet[1523]: I1002 20:50:08.614457 1523 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: 
\"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-run\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.615066 kubelet[1523]: I1002 20:50:08.614477 1523 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-config-path\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.615066 kubelet[1523]: I1002 20:50:08.614511 1523 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-ipsec-secrets\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.615066 kubelet[1523]: I1002 20:50:08.614529 1523 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-hostproc\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.615513 kubelet[1523]: I1002 20:50:08.614546 1523 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f6c92125-d382-4849-aa97-42e67f5f17b0-cilium-cgroup\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.615513 kubelet[1523]: W1002 20:50:08.614871 1523 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/348e0949-636b-42d3-8fc0-4cd87cc33691/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 20:50:08.618736 kubelet[1523]: I1002 20:50:08.618682 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/348e0949-636b-42d3-8fc0-4cd87cc33691-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "348e0949-636b-42d3-8fc0-4cd87cc33691" (UID: "348e0949-636b-42d3-8fc0-4cd87cc33691"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 20:50:08.620916 kubelet[1523]: I1002 20:50:08.620865 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/348e0949-636b-42d3-8fc0-4cd87cc33691-kube-api-access-fk46t" (OuterVolumeSpecName: "kube-api-access-fk46t") pod "348e0949-636b-42d3-8fc0-4cd87cc33691" (UID: "348e0949-636b-42d3-8fc0-4cd87cc33691"). InnerVolumeSpecName "kube-api-access-fk46t". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 20:50:08.715424 kubelet[1523]: I1002 20:50:08.715373 1523 reconciler.go:399] "Volume detached for volume \"kube-api-access-fk46t\" (UniqueName: \"kubernetes.io/projected/348e0949-636b-42d3-8fc0-4cd87cc33691-kube-api-access-fk46t\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:08.715424 kubelet[1523]: I1002 20:50:08.715421 1523 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/348e0949-636b-42d3-8fc0-4cd87cc33691-cilium-config-path\") on node \"10.128.0.25\" DevicePath \"\"" Oct 2 20:50:09.283976 kubelet[1523]: I1002 20:50:09.283944 1523 scope.go:115] "RemoveContainer" containerID="32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa" Oct 2 20:50:09.289699 systemd[1]: Removed slice kubepods-besteffort-pod348e0949_636b_42d3_8fc0_4cd87cc33691.slice. 
Oct 2 20:50:09.291407 env[1130]: time="2023-10-02T20:50:09.291360446Z" level=info msg="RemoveContainer for \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\"" Oct 2 20:50:09.297153 env[1130]: time="2023-10-02T20:50:09.297096474Z" level=info msg="RemoveContainer for \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\" returns successfully" Oct 2 20:50:09.297336 kubelet[1523]: I1002 20:50:09.297318 1523 scope.go:115] "RemoveContainer" containerID="32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa" Oct 2 20:50:09.297739 env[1130]: time="2023-10-02T20:50:09.297616294Z" level=error msg="ContainerStatus for \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\": not found" Oct 2 20:50:09.298029 kubelet[1523]: E1002 20:50:09.298006 1523 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\": not found" containerID="32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa" Oct 2 20:50:09.298224 kubelet[1523]: I1002 20:50:09.298189 1523 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa} err="failed to get container status \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\": rpc error: code = NotFound desc = an error occurred when try to find container \"32e6facc42322428aa6df233c0b6ed707a1ccd9d5d943610f638d251c891dbaa\": not found" Oct 2 20:50:09.298224 kubelet[1523]: I1002 20:50:09.298225 1523 scope.go:115] "RemoveContainer" containerID="0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815" Oct 2 20:50:09.299571 env[1130]: time="2023-10-02T20:50:09.299514591Z" level=info msg="RemoveContainer for \"0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815\"" Oct 2 20:50:09.303298 env[1130]: time="2023-10-02T20:50:09.303250274Z" level=info msg="RemoveContainer for \"0786bf0c65dce7db236c6b74a80b9cad90f38f18f9d714de12534cd81692f815\" returns successfully" Oct 2 20:50:09.355856 systemd[1]: var-lib-kubelet-pods-f6c92125\x2dd382\x2d4849\x2daa97\x2d42e67f5f17b0-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 20:50:09.356019 systemd[1]: var-lib-kubelet-pods-f6c92125\x2dd382\x2d4849\x2daa97\x2d42e67f5f17b0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 20:50:09.356131 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6f9fc8d4e4182f953c65e8715ff66e66b86aaf13ef351485240123910c638a4c-rootfs.mount: Deactivated successfully. Oct 2 20:50:09.356246 systemd[1]: var-lib-kubelet-pods-f6c92125\x2dd382\x2d4849\x2daa97\x2d42e67f5f17b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5kb7q.mount: Deactivated successfully. Oct 2 20:50:09.356374 systemd[1]: var-lib-kubelet-pods-348e0949\x2d636b\x2d42d3\x2d8fc0\x2d4cd87cc33691-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfk46t.mount: Deactivated successfully. 
Oct 2 20:50:09.515501 kubelet[1523]: E1002 20:50:09.515439 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:10.516574 kubelet[1523]: E1002 20:50:10.516502 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:10.602305 kubelet[1523]: I1002 20:50:10.602271 1523 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=348e0949-636b-42d3-8fc0-4cd87cc33691 path="/var/lib/kubelet/pods/348e0949-636b-42d3-8fc0-4cd87cc33691/volumes" Oct 2 20:50:10.603257 kubelet[1523]: I1002 20:50:10.603230 1523 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=f6c92125-d382-4849-aa97-42e67f5f17b0 path="/var/lib/kubelet/pods/f6c92125-d382-4849-aa97-42e67f5f17b0/volumes" Oct 2 20:50:11.516800 kubelet[1523]: E1002 20:50:11.516670 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:12.320352 kubelet[1523]: E1002 20:50:12.320286 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 20:50:12.491355 kubelet[1523]: E1002 20:50:12.491318 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 20:50:12.517657 kubelet[1523]: E1002 20:50:12.517584 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"