Oct 2 19:28:13.092486 kernel: Linux version 5.15.132-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Oct 2 17:52:37 -00 2023
Oct 2 19:28:13.092526 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1
Oct 2 19:28:13.092543 kernel: BIOS-provided physical RAM map:
Oct 2 19:28:13.092556 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Oct 2 19:28:13.092566 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Oct 2 19:28:13.092578 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Oct 2 19:28:13.092598 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Oct 2 19:28:13.092613 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Oct 2 19:28:13.092626 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Oct 2 19:28:13.092640 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Oct 2 19:28:13.092654 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Oct 2 19:28:13.092667 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Oct 2 19:28:13.092680 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Oct 2 19:28:13.092693 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Oct 2 19:28:13.092714 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Oct 2 19:28:13.092729 kernel: NX (Execute Disable) protection: active
Oct 2 19:28:13.092743 kernel: efi: EFI v2.70 by EDK II
Oct 2 19:28:13.092759 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9ca000 MEMATTR=0xbe386218 RNG=0xbfb73018 TPMEventLog=0xbe2c8018
Oct 2 19:28:13.092774 kernel: random: crng init done
Oct 2 19:28:13.092789 kernel: SMBIOS 2.4 present.
Oct 2 19:28:13.092804 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 08/04/2023
Oct 2 19:28:13.092836 kernel: Hypervisor detected: KVM
Oct 2 19:28:13.092855 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Oct 2 19:28:13.092868 kernel: kvm-clock: cpu 0, msr 181f8a001, primary cpu clock
Oct 2 19:28:13.092881 kernel: kvm-clock: using sched offset of 12757899311 cycles
Oct 2 19:28:13.092894 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Oct 2 19:28:13.092910 kernel: tsc: Detected 2299.998 MHz processor
Oct 2 19:28:13.092925 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Oct 2 19:28:13.092940 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Oct 2 19:28:13.092953 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Oct 2 19:28:13.092967 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Oct 2 19:28:13.092981 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Oct 2 19:28:13.092999 kernel: Using GB pages for direct mapping
Oct 2 19:28:13.093013 kernel: Secure boot disabled
Oct 2 19:28:13.093027 kernel: ACPI: Early table checksum verification disabled
Oct 2 19:28:13.093042 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Oct 2 19:28:13.093056 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Oct 2 19:28:13.093070 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Oct 2 19:28:13.093083 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Oct 2 19:28:13.093097 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Oct 2 19:28:13.093123 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217)
Oct 2 19:28:13.093139 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Oct 2 19:28:13.093155 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Oct 2 19:28:13.093170 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Oct 2 19:28:13.093194 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Oct 2 19:28:13.093210 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Oct 2 19:28:13.093231 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Oct 2 19:28:13.093247 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Oct 2 19:28:13.093264 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Oct 2 19:28:13.093280 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Oct 2 19:28:13.093297 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Oct 2 19:28:13.093313 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Oct 2 19:28:13.093329 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Oct 2 19:28:13.093345 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Oct 2 19:28:13.093360 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Oct 2 19:28:13.093378 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Oct 2 19:28:13.093393 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Oct 2 19:28:13.093408 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Oct 2 19:28:13.093423 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Oct 2 19:28:13.093438 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Oct 2 19:28:13.093455 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Oct 2 19:28:13.093470 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Oct 2 19:28:13.093486 kernel: NODE_DATA(0) allocated [mem 0x21fffa000-0x21fffffff]
Oct 2 19:28:13.093503 kernel: Zone ranges:
Oct 2 19:28:13.093522 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Oct 2 19:28:13.093539 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Oct 2 19:28:13.093555 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Oct 2 19:28:13.093571 kernel: Movable zone start for each node
Oct 2 19:28:13.093587 kernel: Early memory node ranges
Oct 2 19:28:13.093602 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Oct 2 19:28:13.093638 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Oct 2 19:28:13.093654 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Oct 2 19:28:13.093670 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Oct 2 19:28:13.093690 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Oct 2 19:28:13.093706 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Oct 2 19:28:13.093722 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Oct 2 19:28:13.093739 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Oct 2 19:28:13.093755 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Oct 2 19:28:13.093772 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Oct 2 19:28:13.093789 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Oct 2 19:28:13.093805 kernel: ACPI: PM-Timer IO Port: 0xb008
Oct 2 19:28:13.093854 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Oct 2 19:28:13.093875 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Oct 2 19:28:13.093891 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Oct 2 19:28:13.093907 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Oct 2 19:28:13.093924 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Oct 2 19:28:13.093941 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Oct 2 19:28:13.093957 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Oct 2 19:28:13.093974 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Oct 2 19:28:13.093990 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Oct 2 19:28:13.094017 kernel: Booting paravirtualized kernel on KVM
Oct 2 19:28:13.094037 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Oct 2 19:28:13.094053 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Oct 2 19:28:13.094069 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Oct 2 19:28:13.094086 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Oct 2 19:28:13.094101 kernel: pcpu-alloc: [0] 0 1
Oct 2 19:28:13.094117 kernel: kvm-guest: PV spinlocks enabled
Oct 2 19:28:13.094134 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Oct 2 19:28:13.094151 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1931256
Oct 2 19:28:13.094167 kernel: Policy zone: Normal
Oct 2 19:28:13.094196 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1
Oct 2 19:28:13.094212 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Oct 2 19:28:13.094227 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Oct 2 19:28:13.094243 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Oct 2 19:28:13.094260 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Oct 2 19:28:13.094277 kernel: Memory: 7536584K/7860584K available (12294K kernel code, 2274K rwdata, 13692K rodata, 45372K init, 4176K bss, 323740K reserved, 0K cma-reserved)
Oct 2 19:28:13.094294 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Oct 2 19:28:13.094310 kernel: Kernel/User page tables isolation: enabled
Oct 2 19:28:13.094331 kernel: ftrace: allocating 34453 entries in 135 pages
Oct 2 19:28:13.094347 kernel: ftrace: allocated 135 pages with 4 groups
Oct 2 19:28:13.094363 kernel: rcu: Hierarchical RCU implementation.
Oct 2 19:28:13.094380 kernel: rcu: RCU event tracing is enabled.
Oct 2 19:28:13.094397 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Oct 2 19:28:13.094414 kernel: Rude variant of Tasks RCU enabled.
Oct 2 19:28:13.094431 kernel: Tracing variant of Tasks RCU enabled.
Oct 2 19:28:13.094448 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Oct 2 19:28:13.094464 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Oct 2 19:28:13.094485 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Oct 2 19:28:13.094514 kernel: Console: colour dummy device 80x25
Oct 2 19:28:13.094532 kernel: printk: console [ttyS0] enabled
Oct 2 19:28:13.094551 kernel: ACPI: Core revision 20210730
Oct 2 19:28:13.094567 kernel: APIC: Switch to symmetric I/O mode setup
Oct 2 19:28:13.094584 kernel: x2apic enabled
Oct 2 19:28:13.094601 kernel: Switched APIC routing to physical x2apic.
Oct 2 19:28:13.094618 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Oct 2 19:28:13.094636 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Oct 2 19:28:13.094654 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Oct 2 19:28:13.094675 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Oct 2 19:28:13.094692 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Oct 2 19:28:13.094709 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Oct 2 19:28:13.094739 kernel: Spectre V2 : Mitigation: IBRS
Oct 2 19:28:13.094756 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Oct 2 19:28:13.094774 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Oct 2 19:28:13.094795 kernel: RETBleed: Mitigation: IBRS
Oct 2 19:28:13.094848 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Oct 2 19:28:13.094866 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Oct 2 19:28:13.094884 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Oct 2 19:28:13.094901 kernel: MDS: Mitigation: Clear CPU buffers
Oct 2 19:28:13.094918 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Oct 2 19:28:13.094933 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Oct 2 19:28:13.094949 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Oct 2 19:28:13.094966 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Oct 2 19:28:13.094988 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Oct 2 19:28:13.095006 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Oct 2 19:28:13.095034 kernel: Freeing SMP alternatives memory: 32K
Oct 2 19:28:13.095058 kernel: pid_max: default: 32768 minimum: 301
Oct 2 19:28:13.095075 kernel: LSM: Security Framework initializing
Oct 2 19:28:13.095093 kernel: SELinux: Initializing.
Oct 2 19:28:13.095117 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 2 19:28:13.095135 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Oct 2 19:28:13.095153 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Oct 2 19:28:13.095175 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Oct 2 19:28:13.095199 kernel: signal: max sigframe size: 1776
Oct 2 19:28:13.095217 kernel: rcu: Hierarchical SRCU implementation.
Oct 2 19:28:13.095234 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Oct 2 19:28:13.095251 kernel: smp: Bringing up secondary CPUs ...
Oct 2 19:28:13.095269 kernel: x86: Booting SMP configuration:
Oct 2 19:28:13.095286 kernel: .... node #0, CPUs: #1
Oct 2 19:28:13.095304 kernel: kvm-clock: cpu 1, msr 181f8a041, secondary cpu clock
Oct 2 19:28:13.095322 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Oct 2 19:28:13.095344 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Oct 2 19:28:13.095361 kernel: smp: Brought up 1 node, 2 CPUs
Oct 2 19:28:13.095378 kernel: smpboot: Max logical packages: 1
Oct 2 19:28:13.095396 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Oct 2 19:28:13.095414 kernel: devtmpfs: initialized
Oct 2 19:28:13.095431 kernel: x86/mm: Memory block size: 128MB
Oct 2 19:28:13.095449 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Oct 2 19:28:13.095467 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Oct 2 19:28:13.095485 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Oct 2 19:28:13.095507 kernel: pinctrl core: initialized pinctrl subsystem
Oct 2 19:28:13.095524 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Oct 2 19:28:13.095541 kernel: audit: initializing netlink subsys (disabled)
Oct 2 19:28:13.095565 kernel: audit: type=2000 audit(1696274891.851:1): state=initialized audit_enabled=0 res=1
Oct 2 19:28:13.095582 kernel: thermal_sys: Registered thermal governor 'step_wise'
Oct 2 19:28:13.095600 kernel: thermal_sys: Registered thermal governor 'user_space'
Oct 2 19:28:13.095618 kernel: cpuidle: using governor menu
Oct 2 19:28:13.095635 kernel: ACPI: bus type PCI registered
Oct 2 19:28:13.095652 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Oct 2 19:28:13.095674 kernel: dca service started, version 1.12.1
Oct 2 19:28:13.095691 kernel: PCI: Using configuration type 1 for base access
Oct 2 19:28:13.095709 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Oct 2 19:28:13.095727 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Oct 2 19:28:13.095744 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Oct 2 19:28:13.095762 kernel: ACPI: Added _OSI(Module Device)
Oct 2 19:28:13.095779 kernel: ACPI: Added _OSI(Processor Device)
Oct 2 19:28:13.095796 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Oct 2 19:28:13.095827 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Oct 2 19:28:13.095848 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Oct 2 19:28:13.095865 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Oct 2 19:28:13.095882 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Oct 2 19:28:13.095899 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Oct 2 19:28:13.095917 kernel: ACPI: Interpreter enabled
Oct 2 19:28:13.095935 kernel: ACPI: PM: (supports S0 S3 S5)
Oct 2 19:28:13.095953 kernel: ACPI: Using IOAPIC for interrupt routing
Oct 2 19:28:13.095970 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Oct 2 19:28:13.095988 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Oct 2 19:28:13.096010 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Oct 2 19:28:13.096247 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Oct 2 19:28:13.096419 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Oct 2 19:28:13.096443 kernel: PCI host bridge to bus 0000:00
Oct 2 19:28:13.096602 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Oct 2 19:28:13.096753 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Oct 2 19:28:13.096922 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Oct 2 19:28:13.097071 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Oct 2 19:28:13.097237 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Oct 2 19:28:13.097424 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Oct 2 19:28:13.097637 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Oct 2 19:28:13.097854 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Oct 2 19:28:13.098020 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Oct 2 19:28:13.098196 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Oct 2 19:28:13.098362 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Oct 2 19:28:13.098515 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Oct 2 19:28:13.098679 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Oct 2 19:28:13.098866 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Oct 2 19:28:13.099029 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Oct 2 19:28:13.099209 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Oct 2 19:28:13.111294 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Oct 2 19:28:13.111505 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Oct 2 19:28:13.111531 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Oct 2 19:28:13.111547 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Oct 2 19:28:13.111563 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Oct 2 19:28:13.111579 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Oct 2 19:28:13.111595 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Oct 2 19:28:13.111618 kernel: iommu: Default domain type: Translated
Oct 2 19:28:13.111635 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Oct 2 19:28:13.111652 kernel: vgaarb: loaded
Oct 2 19:28:13.111669 kernel: pps_core: LinuxPPS API ver. 1 registered
Oct 2 19:28:13.111685 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Oct 2 19:28:13.111703 kernel: PTP clock support registered
Oct 2 19:28:13.111720 kernel: Registered efivars operations
Oct 2 19:28:13.111735 kernel: PCI: Using ACPI for IRQ routing
Oct 2 19:28:13.111750 kernel: PCI: pci_cache_line_size set to 64 bytes
Oct 2 19:28:13.111769 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Oct 2 19:28:13.111784 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Oct 2 19:28:13.111800 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Oct 2 19:28:13.111845 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Oct 2 19:28:13.111859 kernel: clocksource: Switched to clocksource kvm-clock
Oct 2 19:28:13.111874 kernel: VFS: Disk quotas dquot_6.6.0
Oct 2 19:28:13.111890 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Oct 2 19:28:13.111906 kernel: pnp: PnP ACPI init
Oct 2 19:28:13.111923 kernel: pnp: PnP ACPI: found 7 devices
Oct 2 19:28:13.111945 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Oct 2 19:28:13.111963 kernel: NET: Registered PF_INET protocol family
Oct 2 19:28:13.111980 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Oct 2 19:28:13.111998 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Oct 2 19:28:13.112016 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Oct 2 19:28:13.112034 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Oct 2 19:28:13.112052 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Oct 2 19:28:13.112069 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Oct 2 19:28:13.112085 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 2 19:28:13.112105 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Oct 2 19:28:13.112120 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Oct 2 19:28:13.112137 kernel: NET: Registered PF_XDP protocol family
Oct 2 19:28:13.112331 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Oct 2 19:28:13.112484 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Oct 2 19:28:13.112632 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Oct 2 19:28:13.112778 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Oct 2 19:28:13.112970 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Oct 2 19:28:13.112999 kernel: PCI: CLS 0 bytes, default 64
Oct 2 19:28:13.113017 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Oct 2 19:28:13.113035 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB)
Oct 2 19:28:13.113051 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Oct 2 19:28:13.113069 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Oct 2 19:28:13.113086 kernel: clocksource: Switched to clocksource tsc
Oct 2 19:28:13.113103 kernel: Initialise system trusted keyrings
Oct 2 19:28:13.113120 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Oct 2 19:28:13.113141 kernel: Key type asymmetric registered
Oct 2 19:28:13.113158 kernel: Asymmetric key parser 'x509' registered
Oct 2 19:28:13.113175 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Oct 2 19:28:13.113200 kernel: io scheduler mq-deadline registered
Oct 2 19:28:13.113217 kernel: io scheduler kyber registered
Oct 2 19:28:13.113234 kernel: io scheduler bfq registered
Oct 2 19:28:13.113251 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Oct 2 19:28:13.113269 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Oct 2 19:28:13.113444 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Oct 2 19:28:13.113472 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Oct 2 19:28:13.113636 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Oct 2 19:28:13.113659 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Oct 2 19:28:13.113837 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Oct 2 19:28:13.113861 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Oct 2 19:28:13.113878 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Oct 2 19:28:13.113896 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Oct 2 19:28:13.113913 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Oct 2 19:28:13.113929 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Oct 2 19:28:13.114099 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Oct 2 19:28:13.114123 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Oct 2 19:28:13.114140 kernel: i8042: Warning: Keylock active
Oct 2 19:28:13.114156 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Oct 2 19:28:13.114174 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Oct 2 19:28:13.114376 kernel: rtc_cmos 00:00: RTC can wake from S4
Oct 2 19:28:13.114537 kernel: rtc_cmos 00:00: registered as rtc0
Oct 2 19:28:13.114685 kernel: rtc_cmos 00:00: setting system clock to 2023-10-02T19:28:12 UTC (1696274892)
Oct 2 19:28:13.114873 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Oct 2 19:28:13.114895 kernel: intel_pstate: CPU model not supported
Oct 2 19:28:13.114912 kernel: pstore: Registered efi as persistent store backend
Oct 2 19:28:13.114929 kernel: NET: Registered PF_INET6 protocol family
Oct 2 19:28:13.114945 kernel: Segment Routing with IPv6
Oct 2 19:28:13.114962 kernel: In-situ OAM (IOAM) with IPv6
Oct 2 19:28:13.114978 kernel: NET: Registered PF_PACKET protocol family
Oct 2 19:28:13.114995 kernel: Key type dns_resolver registered
Oct 2 19:28:13.115017 kernel: IPI shorthand broadcast: enabled
Oct 2 19:28:13.115033 kernel: sched_clock: Marking stable (724381645, 182167352)->(987442919, -80893922)
Oct 2 19:28:13.115050 kernel: registered taskstats version 1
Oct 2 19:28:13.115067 kernel: Loading compiled-in X.509 certificates
Oct 2 19:28:13.115084 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Oct 2 19:28:13.115101 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.132-flatcar: 6f9e51af8b3ef67eb6e93ecfe77d55665ad3d861'
Oct 2 19:28:13.115116 kernel: Key type .fscrypt registered
Oct 2 19:28:13.115132 kernel: Key type fscrypt-provisioning registered
Oct 2 19:28:13.115149 kernel: pstore: Using crash dump compression: deflate
Oct 2 19:28:13.115169 kernel: ima: Allocated hash algorithm: sha1
Oct 2 19:28:13.115193 kernel: ima: No architecture policies found
Oct 2 19:28:13.115210 kernel: Freeing unused kernel image (initmem) memory: 45372K
Oct 2 19:28:13.115227 kernel: Write protecting the kernel read-only data: 28672k
Oct 2 19:28:13.115243 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Oct 2 19:28:13.115260 kernel: Freeing unused kernel image (rodata/data gap) memory: 644K
Oct 2 19:28:13.115277 kernel: Run /init as init process
Oct 2 19:28:13.115293 kernel: with arguments:
Oct 2 19:28:13.115314 kernel: /init
Oct 2 19:28:13.115330 kernel: with environment:
Oct 2 19:28:13.115347 kernel: HOME=/
Oct 2 19:28:13.115363 kernel: TERM=linux
Oct 2 19:28:13.115380 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Oct 2 19:28:13.115401 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Oct 2 19:28:13.115421 systemd[1]: Detected virtualization kvm.
Oct 2 19:28:13.115443 systemd[1]: Detected architecture x86-64.
Oct 2 19:28:13.115460 systemd[1]: Running in initrd.
Oct 2 19:28:13.115477 systemd[1]: No hostname configured, using default hostname.
Oct 2 19:28:13.115495 systemd[1]: Hostname set to .
Oct 2 19:28:13.115512 systemd[1]: Initializing machine ID from VM UUID.
Oct 2 19:28:13.115529 systemd[1]: Queued start job for default target initrd.target.
Oct 2 19:28:13.115546 systemd[1]: Started systemd-ask-password-console.path.
Oct 2 19:28:13.115564 systemd[1]: Reached target cryptsetup.target.
Oct 2 19:28:13.115582 systemd[1]: Reached target paths.target.
Oct 2 19:28:13.115603 systemd[1]: Reached target slices.target.
Oct 2 19:28:13.115621 systemd[1]: Reached target swap.target.
Oct 2 19:28:13.115639 systemd[1]: Reached target timers.target.
Oct 2 19:28:13.115658 systemd[1]: Listening on iscsid.socket.
Oct 2 19:28:13.115676 systemd[1]: Listening on iscsiuio.socket.
Oct 2 19:28:13.115692 systemd[1]: Listening on systemd-journald-audit.socket.
Oct 2 19:28:13.115710 systemd[1]: Listening on systemd-journald-dev-log.socket.
Oct 2 19:28:13.115731 systemd[1]: Listening on systemd-journald.socket.
Oct 2 19:28:13.115749 systemd[1]: Listening on systemd-networkd.socket.
Oct 2 19:28:13.115767 systemd[1]: Listening on systemd-udevd-control.socket.
Oct 2 19:28:13.115784 systemd[1]: Listening on systemd-udevd-kernel.socket.
Oct 2 19:28:13.115803 systemd[1]: Reached target sockets.target.
Oct 2 19:28:13.115841 systemd[1]: Starting kmod-static-nodes.service...
Oct 2 19:28:13.115857 systemd[1]: Finished network-cleanup.service.
Oct 2 19:28:13.115872 systemd[1]: Starting systemd-fsck-usr.service...
Oct 2 19:28:13.115888 systemd[1]: Starting systemd-journald.service...
Oct 2 19:28:13.115909 systemd[1]: Starting systemd-modules-load.service...
Oct 2 19:28:13.115926 systemd[1]: Starting systemd-resolved.service...
Oct 2 19:28:13.115943 systemd[1]: Starting systemd-vconsole-setup.service...
Oct 2 19:28:13.115988 systemd-journald[190]: Journal started
Oct 2 19:28:13.116084 systemd-journald[190]: Runtime Journal (/run/log/journal/e652fbe180bc2201784a1b29740206da) is 8.0M, max 148.8M, 140.8M free.
Oct 2 19:28:13.117868 systemd[1]: Started systemd-journald.service.
Oct 2 19:28:13.118059 systemd-modules-load[191]: Inserted module 'overlay'
Oct 2 19:28:13.124971 kernel: audit: type=1130 audit(1696274893.119:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.121748 systemd[1]: Finished kmod-static-nodes.service.
Oct 2 19:28:13.132982 kernel: audit: type=1130 audit(1696274893.127:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.127000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.129773 systemd[1]: Finished systemd-fsck-usr.service.
Oct 2 19:28:13.141969 kernel: audit: type=1130 audit(1696274893.135:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.137917 systemd[1]: Finished systemd-vconsole-setup.service.
Oct 2 19:28:13.155117 kernel: audit: type=1130 audit(1696274893.144:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.148008 systemd[1]: Starting dracut-cmdline-ask.service...
Oct 2 19:28:13.162240 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Oct 2 19:28:13.162952 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Oct 2 19:28:13.178842 kernel: Bridge firewalling registered
Oct 2 19:28:13.184862 systemd-modules-load[191]: Inserted module 'br_netfilter'
Oct 2 19:28:13.186188 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Oct 2 19:28:13.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.190932 kernel: audit: type=1130 audit(1696274893.184:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.191862 systemd-resolved[192]: Positive Trust Anchors:
Oct 2 19:28:13.191880 systemd-resolved[192]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Oct 2 19:28:13.191934 systemd-resolved[192]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Oct 2 19:28:13.202367 systemd[1]: Finished dracut-cmdline-ask.service.
Oct 2 19:28:13.208966 kernel: audit: type=1130 audit(1696274893.200:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.200000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.205168 systemd[1]: Starting dracut-cmdline.service...
Oct 2 19:28:13.209764 systemd-resolved[192]: Defaulting to hostname 'linux'.
Oct 2 19:28:13.212875 systemd[1]: Started systemd-resolved.service.
Oct 2 19:28:13.224969 kernel: SCSI subsystem initialized
Oct 2 19:28:13.225724 dracut-cmdline[205]: dracut-dracut-053
Oct 2 19:28:13.235965 kernel: audit: type=1130 audit(1696274893.227:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.236077 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=96b0fdb9f11bf1422adc9955c78c8182df387766badfd0b94e08fb9688739ee1
Oct 2 19:28:13.229626 systemd[1]: Reached target nss-lookup.target.
Oct 2 19:28:13.250863 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Oct 2 19:28:13.250935 kernel: device-mapper: uevent: version 1.0.3
Oct 2 19:28:13.252280 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Oct 2 19:28:13.257205 systemd-modules-load[191]: Inserted module 'dm_multipath'
Oct 2 19:28:13.258333 systemd[1]: Finished systemd-modules-load.service.
Oct 2 19:28:13.273844 kernel: audit: type=1130 audit(1696274893.269:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.269000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.274088 systemd[1]: Starting systemd-sysctl.service...
Oct 2 19:28:13.286349 systemd[1]: Finished systemd-sysctl.service.
Oct 2 19:28:13.296986 kernel: audit: type=1130 audit(1696274893.288:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.327848 kernel: Loading iSCSI transport class v2.0-870.
Oct 2 19:28:13.340860 kernel: iscsi: registered transport (tcp)
Oct 2 19:28:13.365158 kernel: iscsi: registered transport (qla4xxx)
Oct 2 19:28:13.365253 kernel: QLogic iSCSI HBA Driver
Oct 2 19:28:13.412063 systemd[1]: Finished dracut-cmdline.service.
Oct 2 19:28:13.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.414296 systemd[1]: Starting dracut-pre-udev.service...
Oct 2 19:28:13.471878 kernel: raid6: avx2x4 gen() 18178 MB/s
Oct 2 19:28:13.488855 kernel: raid6: avx2x4 xor() 7853 MB/s
Oct 2 19:28:13.505857 kernel: raid6: avx2x2 gen() 18172 MB/s
Oct 2 19:28:13.522859 kernel: raid6: avx2x2 xor() 18532 MB/s
Oct 2 19:28:13.539847 kernel: raid6: avx2x1 gen() 14228 MB/s
Oct 2 19:28:13.556869 kernel: raid6: avx2x1 xor() 16108 MB/s
Oct 2 19:28:13.573859 kernel: raid6: sse2x4 gen() 11078 MB/s
Oct 2 19:28:13.590856 kernel: raid6: sse2x4 xor() 6621 MB/s
Oct 2 19:28:13.607859 kernel: raid6: sse2x2 gen() 12052 MB/s
Oct 2 19:28:13.624854 kernel: raid6: sse2x2 xor() 7450 MB/s
Oct 2 19:28:13.641854 kernel: raid6: sse2x1 gen() 10584 MB/s
Oct 2 19:28:13.659208 kernel: raid6: sse2x1 xor() 5176 MB/s
Oct 2 19:28:13.659254 kernel: raid6: using algorithm avx2x4 gen() 18178 MB/s
Oct 2 19:28:13.659276 kernel: raid6: .... xor() 7853 MB/s, rmw enabled
Oct 2 19:28:13.659933 kernel: raid6: using avx2x2 recovery algorithm
Oct 2 19:28:13.674852 kernel: xor: automatically using best checksumming function avx
Oct 2 19:28:13.778852 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Oct 2 19:28:13.790766 systemd[1]: Finished dracut-pre-udev.service.
Oct 2 19:28:13.789000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.790000 audit: BPF prog-id=7 op=LOAD
Oct 2 19:28:13.790000 audit: BPF prog-id=8 op=LOAD
Oct 2 19:28:13.793064 systemd[1]: Starting systemd-udevd.service...
Oct 2 19:28:13.810349 systemd-udevd[387]: Using default interface naming scheme 'v252'.
Oct 2 19:28:13.817893 systemd[1]: Started systemd-udevd.service.
Oct 2 19:28:13.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.823310 systemd[1]: Starting dracut-pre-trigger.service...
Oct 2 19:28:13.846139 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation
Oct 2 19:28:13.886340 systemd[1]: Finished dracut-pre-trigger.service.
Oct 2 19:28:13.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:13.891612 systemd[1]: Starting systemd-udev-trigger.service...
Oct 2 19:28:13.957428 systemd[1]: Finished systemd-udev-trigger.service.
Oct 2 19:28:13.959000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:14.032841 kernel: cryptd: max_cpu_qlen set to 1000
Oct 2 19:28:14.092293 kernel: AVX2 version of gcm_enc/dec engaged.
Oct 2 19:28:14.092386 kernel: AES CTR mode by8 optimization enabled
Oct 2 19:28:14.093537 kernel: scsi host0: Virtio SCSI HBA
Oct 2 19:28:14.124266 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6
Oct 2 19:28:14.205099 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB)
Oct 2 19:28:14.205436 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks
Oct 2 19:28:14.205637 kernel: sd 0:0:1:0: [sda] Write Protect is off
Oct 2 19:28:14.206071 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08
Oct 2 19:28:14.206292 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Oct 2 19:28:14.215133 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Oct 2 19:28:14.215226 kernel: GPT:17805311 != 25165823
Oct 2 19:28:14.215250 kernel: GPT:Alternate GPT header not at the end of the disk.
Oct 2 19:28:14.216107 kernel: GPT:17805311 != 25165823
Oct 2 19:28:14.216877 kernel: GPT: Use GNU Parted to correct GPT errors.
Oct 2 19:28:14.219036 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 2 19:28:14.224544 kernel: sd 0:0:1:0: [sda] Attached SCSI disk
Oct 2 19:28:14.264849 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (438)
Oct 2 19:28:14.282101 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Oct 2 19:28:14.295560 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Oct 2 19:28:14.308104 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Oct 2 19:28:14.316263 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Oct 2 19:28:14.316484 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Oct 2 19:28:14.327267 systemd[1]: Starting disk-uuid.service...
Oct 2 19:28:14.338737 disk-uuid[508]: Primary Header is updated.
Oct 2 19:28:14.338737 disk-uuid[508]: Secondary Entries is updated.
Oct 2 19:28:14.338737 disk-uuid[508]: Secondary Header is updated.
Oct 2 19:28:14.352852 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 2 19:28:14.368869 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 2 19:28:14.376839 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 2 19:28:15.379513 disk-uuid[509]: The operation has completed successfully.
Oct 2 19:28:15.387989 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Oct 2 19:28:15.447687 systemd[1]: disk-uuid.service: Deactivated successfully.
Oct 2 19:28:15.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:15.446000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:15.447838 systemd[1]: Finished disk-uuid.service.
Oct 2 19:28:15.459032 systemd[1]: Starting verity-setup.service...
Oct 2 19:28:15.487885 kernel: device-mapper: verity: sha256 using implementation "sha256-generic"
Oct 2 19:28:15.570404 systemd[1]: Found device dev-mapper-usr.device.
Oct 2 19:28:15.573005 systemd[1]: Mounting sysusr-usr.mount...
Oct 2 19:28:15.592505 systemd[1]: Finished verity-setup.service.
Oct 2 19:28:15.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:15.680841 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Oct 2 19:28:15.681363 systemd[1]: Mounted sysusr-usr.mount.
Oct 2 19:28:15.688223 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Oct 2 19:28:15.734996 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 2 19:28:15.735039 kernel: BTRFS info (device sda6): using free space tree
Oct 2 19:28:15.735062 kernel: BTRFS info (device sda6): has skinny extents
Oct 2 19:28:15.689237 systemd[1]: Starting ignition-setup.service...
Oct 2 19:28:15.757168 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 2 19:28:15.704251 systemd[1]: Starting parse-ip-for-networkd.service...
Oct 2 19:28:15.766197 systemd[1]: mnt-oem.mount: Deactivated successfully.
Oct 2 19:28:15.786682 systemd[1]: Finished ignition-setup.service.
Oct 2 19:28:15.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:15.788006 systemd[1]: Starting ignition-fetch-offline.service...
Oct 2 19:28:15.826825 systemd[1]: Finished parse-ip-for-networkd.service.
Oct 2 19:28:15.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:15.826000 audit: BPF prog-id=9 op=LOAD
Oct 2 19:28:15.829112 systemd[1]: Starting systemd-networkd.service...
Oct 2 19:28:15.863152 systemd-networkd[683]: lo: Link UP
Oct 2 19:28:15.863165 systemd-networkd[683]: lo: Gained carrier
Oct 2 19:28:15.864055 systemd-networkd[683]: Enumeration completed
Oct 2 19:28:15.882000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:15.864202 systemd[1]: Started systemd-networkd.service.
Oct 2 19:28:15.864603 systemd-networkd[683]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Oct 2 19:28:15.867089 systemd-networkd[683]: eth0: Link UP
Oct 2 19:28:15.867097 systemd-networkd[683]: eth0: Gained carrier
Oct 2 19:28:15.881056 systemd-networkd[683]: eth0: DHCPv4 address 10.128.0.55/32, gateway 10.128.0.1 acquired from 169.254.169.254
Oct 2 19:28:15.884234 systemd[1]: Reached target network.target.
Oct 2 19:28:15.907278 systemd[1]: Starting iscsiuio.service...
Oct 2 19:28:15.965120 systemd[1]: Started iscsiuio.service.
Oct 2 19:28:15.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:15.973504 systemd[1]: Starting iscsid.service...
Oct 2 19:28:15.993024 iscsid[693]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:28:15.993024 iscsid[693]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Oct 2 19:28:15.993024 iscsid[693]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier].
Oct 2 19:28:15.993024 iscsid[693]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Oct 2 19:28:15.993024 iscsid[693]: If using hardware iscsi like qla4xxx this message can be ignored.
Oct 2 19:28:15.993024 iscsid[693]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Oct 2 19:28:15.993024 iscsid[693]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Oct 2 19:28:15.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:16.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:15.986224 systemd[1]: Started iscsid.service.
Oct 2 19:28:16.087354 ignition[653]: Ignition 2.14.0
Oct 2 19:28:16.001694 systemd[1]: Starting dracut-initqueue.service...
Oct 2 19:28:16.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:16.087368 ignition[653]: Stage: fetch-offline
Oct 2 19:28:16.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:16.021646 systemd[1]: Finished dracut-initqueue.service.
Oct 2 19:28:16.087459 ignition[653]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 2 19:28:16.044388 systemd[1]: Reached target remote-fs-pre.target.
Oct 2 19:28:16.087499 ignition[653]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Oct 2 19:28:16.086977 systemd[1]: Reached target remote-cryptsetup.target.
Oct 2 19:28:16.112224 ignition[653]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Oct 2 19:28:16.096161 systemd[1]: Reached target remote-fs.target.
Oct 2 19:28:16.112452 ignition[653]: parsed url from cmdline: ""
Oct 2 19:28:16.128661 systemd[1]: Starting dracut-pre-mount.service...
Oct 2 19:28:16.112460 ignition[653]: no config URL provided
Oct 2 19:28:16.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:16.152465 systemd[1]: Finished ignition-fetch-offline.service.
Oct 2 19:28:16.112468 ignition[653]: reading system config file "/usr/lib/ignition/user.ign"
Oct 2 19:28:16.166336 systemd[1]: Finished dracut-pre-mount.service.
Oct 2 19:28:16.112480 ignition[653]: no config at "/usr/lib/ignition/user.ign"
Oct 2 19:28:16.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:16.184488 systemd[1]: Starting ignition-fetch.service...
Oct 2 19:28:16.112489 ignition[653]: failed to fetch config: resource requires networking
Oct 2 19:28:16.232238 unknown[708]: fetched base config from "system"
Oct 2 19:28:16.112876 ignition[653]: Ignition finished successfully
Oct 2 19:28:16.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:16.232246 unknown[708]: fetched base config from "system"
Oct 2 19:28:16.196479 ignition[708]: Ignition 2.14.0
Oct 2 19:28:16.232255 unknown[708]: fetched user config from "gcp"
Oct 2 19:28:16.196488 ignition[708]: Stage: fetch
Oct 2 19:28:16.248469 systemd[1]: Finished ignition-fetch.service.
Oct 2 19:28:16.196624 ignition[708]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 2 19:28:16.265309 systemd[1]: Starting ignition-kargs.service...
Oct 2 19:28:16.196649 ignition[708]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Oct 2 19:28:16.291442 systemd[1]: Finished ignition-kargs.service.
Oct 2 19:28:16.206733 ignition[708]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Oct 2 19:28:16.313576 systemd[1]: Starting ignition-disks.service...
Oct 2 19:28:16.207971 ignition[708]: parsed url from cmdline: ""
Oct 2 19:28:16.342350 systemd[1]: Finished ignition-disks.service.
Oct 2 19:28:16.207981 ignition[708]: no config URL provided
Oct 2 19:28:16.357277 systemd[1]: Reached target initrd-root-device.target.
Oct 2 19:28:16.207998 ignition[708]: reading system config file "/usr/lib/ignition/user.ign"
Oct 2 19:28:16.373043 systemd[1]: Reached target local-fs-pre.target.
Oct 2 19:28:16.208022 ignition[708]: no config at "/usr/lib/ignition/user.ign"
Oct 2 19:28:16.386010 systemd[1]: Reached target local-fs.target.
Oct 2 19:28:16.208085 ignition[708]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1
Oct 2 19:28:16.398003 systemd[1]: Reached target sysinit.target.
Oct 2 19:28:16.214867 ignition[708]: GET result: OK
Oct 2 19:28:16.411021 systemd[1]: Reached target basic.target.
Oct 2 19:28:16.214964 ignition[708]: parsing config with SHA512: 8f0156716f0d295e6ee20478cd11de439b5c4079d1f951825955b354007d113524dd3bd253a2a965cea756b2e5ecf07c4e95b556d386839b908e539229dca532
Oct 2 19:28:16.423312 systemd[1]: Starting systemd-fsck-root.service...
Oct 2 19:28:16.232789 ignition[708]: fetch: fetch complete
Oct 2 19:28:16.232795 ignition[708]: fetch: fetch passed
Oct 2 19:28:16.232868 ignition[708]: Ignition finished successfully
Oct 2 19:28:16.278588 ignition[714]: Ignition 2.14.0
Oct 2 19:28:16.278598 ignition[714]: Stage: kargs
Oct 2 19:28:16.278731 ignition[714]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 2 19:28:16.278764 ignition[714]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Oct 2 19:28:16.286643 ignition[714]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Oct 2 19:28:16.288700 ignition[714]: kargs: kargs passed
Oct 2 19:28:16.288764 ignition[714]: Ignition finished successfully
Oct 2 19:28:16.324679 ignition[720]: Ignition 2.14.0
Oct 2 19:28:16.324688 ignition[720]: Stage: disks
Oct 2 19:28:16.324857 ignition[720]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Oct 2 19:28:16.324888 ignition[720]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6
Oct 2 19:28:16.334703 ignition[720]: no config dir at "/usr/lib/ignition/base.platform.d/gcp"
Oct 2 19:28:16.336086 ignition[720]: disks: disks passed
Oct 2 19:28:16.336140 ignition[720]: Ignition finished successfully
Oct 2 19:28:16.470703 systemd-fsck[728]: ROOT: clean, 603/1628000 files, 124049/1617920 blocks
Oct 2 19:28:16.635746 systemd[1]: Finished systemd-fsck-root.service.
Oct 2 19:28:16.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:16.637003 systemd[1]: Mounting sysroot.mount...
Oct 2 19:28:16.674093 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Oct 2 19:28:16.668145 systemd[1]: Mounted sysroot.mount.
Oct 2 19:28:16.681212 systemd[1]: Reached target initrd-root-fs.target.
Oct 2 19:28:16.702164 systemd[1]: Mounting sysroot-usr.mount...
Oct 2 19:28:16.713532 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met.
Oct 2 19:28:16.713591 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Oct 2 19:28:16.713627 systemd[1]: Reached target ignition-diskful.target.
Oct 2 19:28:16.816588 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (734)
Oct 2 19:28:16.816630 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Oct 2 19:28:16.816655 kernel: BTRFS info (device sda6): using free space tree
Oct 2 19:28:16.816676 kernel: BTRFS info (device sda6): has skinny extents
Oct 2 19:28:16.816699 kernel: BTRFS info (device sda6): enabling ssd optimizations
Oct 2 19:28:16.729543 systemd[1]: Mounted sysroot-usr.mount.
Oct 2 19:28:16.753978 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Oct 2 19:28:16.832003 initrd-setup-root[755]: cut: /sysroot/etc/passwd: No such file or directory
Oct 2 19:28:16.791647 systemd[1]: Starting initrd-setup-root.service...
Oct 2 19:28:16.860987 initrd-setup-root[763]: cut: /sysroot/etc/group: No such file or directory
Oct 2 19:28:16.823725 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Oct 2 19:28:16.880021 initrd-setup-root[773]: cut: /sysroot/etc/shadow: No such file or directory Oct 2 19:28:16.889955 initrd-setup-root[781]: cut: /sysroot/etc/gshadow: No such file or directory Oct 2 19:28:16.899607 systemd[1]: Finished initrd-setup-root.service. Oct 2 19:28:16.907000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:16.910137 systemd[1]: Starting ignition-mount.service... Oct 2 19:28:16.929942 systemd[1]: Starting sysroot-boot.service... Oct 2 19:28:16.938139 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Oct 2 19:28:16.938262 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Oct 2 19:28:16.964982 ignition[799]: INFO : Ignition 2.14.0 Oct 2 19:28:16.964982 ignition[799]: INFO : Stage: mount Oct 2 19:28:16.964982 ignition[799]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:28:16.964982 ignition[799]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 19:28:17.063036 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (809) Oct 2 19:28:17.063081 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Oct 2 19:28:17.063105 kernel: BTRFS info (device sda6): using free space tree Oct 2 19:28:17.063126 kernel: BTRFS info (device sda6): has skinny extents Oct 2 19:28:17.063147 kernel: BTRFS info (device sda6): enabling ssd optimizations Oct 2 19:28:16.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:16.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:17.063286 ignition[799]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 19:28:17.063286 ignition[799]: INFO : mount: mount passed Oct 2 19:28:17.063286 ignition[799]: INFO : Ignition finished successfully Oct 2 19:28:16.971638 systemd[1]: Finished sysroot-boot.service. Oct 2 19:28:16.981305 systemd[1]: Finished ignition-mount.service. Oct 2 19:28:16.989330 systemd[1]: Starting ignition-files.service... Oct 2 19:28:17.009416 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
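Every Ignition stage so far (fetch, kargs, disks, mount) logs the same fingerprint for /usr/lib/ignition/base.d/base.ign, "parsing config with SHA512: 2853…". Assuming that value is simply the SHA-512 of the raw config bytes, it can be recomputed and compared offline; this is an illustrative sketch, not Ignition code.

    # Recompute the SHA-512 fingerprint of a config and compare with the journal value.
    # Assumption: the logged fingerprint is a plain SHA-512 of the file contents.
    import hashlib, pathlib

    def config_fingerprint(path: str) -> str:
        return hashlib.sha512(pathlib.Path(path).read_bytes()).hexdigest()

    LOGGED = ("28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9"
              "876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6")
    print(config_fingerprint("/usr/lib/ignition/base.d/base.ign") == LOGGED)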
Oct 2 19:28:17.134067 ignition[828]: INFO : Ignition 2.14.0 Oct 2 19:28:17.134067 ignition[828]: INFO : Stage: files Oct 2 19:28:17.134067 ignition[828]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:28:17.134067 ignition[828]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 19:28:17.134067 ignition[828]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 19:28:17.134067 ignition[828]: DEBUG : files: compiled without relabeling support, skipping Oct 2 19:28:17.134067 ignition[828]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Oct 2 19:28:17.134067 ignition[828]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Oct 2 19:28:17.240984 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (828) Oct 2 19:28:17.052053 systemd-networkd[683]: eth0: Gained IPv6LL Oct 2 19:28:17.247961 ignition[828]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Oct 2 19:28:17.247961 ignition[828]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Oct 2 19:28:17.247961 ignition[828]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Oct 2 19:28:17.247961 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Oct 2 19:28:17.247961 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:28:17.247961 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1192466147" Oct 2 19:28:17.247961 ignition[828]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1192466147": device or resource busy Oct 2 19:28:17.247961 ignition[828]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1192466147", trying btrfs: device or resource busy Oct 2 19:28:17.247961 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1192466147" Oct 2 19:28:17.247961 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1192466147" Oct 2 19:28:17.247961 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem1192466147" Oct 2 19:28:17.247961 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem1192466147" Oct 2 19:28:17.247961 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Oct 2 19:28:17.247961 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:28:17.247961 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Oct 2 19:28:17.073240 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Oct 2 19:28:17.494016 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Oct 2 19:28:17.142146 unknown[828]: wrote ssh authorized keys file for user: core Oct 2 19:28:17.578631 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Oct 2 19:28:17.603025 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Oct 2 19:28:17.603025 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:28:17.603025 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.24.2/crictl-v1.24.2-linux-amd64.tar.gz: attempt #1 Oct 2 19:28:17.659002 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Oct 2 19:28:17.716994 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 961188117863ca9af5b084e84691e372efee93ad09daf6a0422e8d75a5803f394d8968064f7ca89f14e8973766201e731241f32538cf2c8d91f0233e786302df Oct 2 19:28:17.740994 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/crictl-v1.24.2-linux-amd64.tar.gz" Oct 2 19:28:17.740994 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Oct 2 19:28:17.740994 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:28:17.740994 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1220518278" Oct 2 19:28:17.740994 ignition[828]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1220518278": device or resource busy Oct 2 19:28:17.740994 ignition[828]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1220518278", trying btrfs: device or resource busy Oct 2 19:28:17.740994 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1220518278" Oct 2 19:28:17.740994 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1220518278" Oct 2 19:28:17.740994 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem1220518278" Oct 2 19:28:17.740994 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem1220518278" Oct 2 19:28:17.740994 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Oct 2 19:28:17.740994 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:28:17.740994 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(d): GET 
https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubeadm: attempt #1 Oct 2 19:28:17.966047 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(d): GET result: OK Oct 2 19:28:18.016938 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(d): file matches expected sum of: 43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0dc11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5 Oct 2 19:28:18.041038 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/opt/bin/kubeadm" Oct 2 19:28:18.041038 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:28:18.041038 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET https://storage.googleapis.com/kubernetes-release/release/v1.25.10/bin/linux/amd64/kubelet: attempt #1 Oct 2 19:28:18.090023 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(e): GET result: OK Oct 2 19:28:18.817252 ignition[828]: DEBUG : files: createFilesystemsFiles: createFiles: op(e): file matches expected sum of: 82b36a0b83a1d48ef1f70e3ed2a263b3ce935304cdc0606d194b290217fb04f98628b0d82e200b51ccf5c05c718b2476274ae710bb143fffe28dc6bbf8407d54 Oct 2 19:28:18.842033 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/opt/bin/kubelet" Oct 2 19:28:18.842033 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/install.sh" Oct 2 19:28:18.842033 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/install.sh" Oct 2 19:28:18.842033 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:28:18.842033 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/docker/daemon.json" Oct 2 19:28:18.842033 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Oct 2 19:28:18.842033 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(11): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:28:18.842033 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3800691386" Oct 2 19:28:18.842033 ignition[828]: CRITICAL : files: createFilesystemsFiles: createFiles: op(11): op(12): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3800691386": device or resource busy Oct 2 19:28:18.842033 ignition[828]: ERROR : files: createFilesystemsFiles: createFiles: op(11): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3800691386", trying btrfs: device or resource busy Oct 2 19:28:18.842033 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3800691386" Oct 2 19:28:18.842033 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(13): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3800691386" Oct 2 19:28:18.842033 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(11): op(14): [started] unmounting "/mnt/oem3800691386" Oct 2 19:28:18.842033 ignition[828]: INFO : files: 
createFilesystemsFiles: createFiles: op(11): op(14): [finished] unmounting "/mnt/oem3800691386" Oct 2 19:28:18.842033 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Oct 2 19:28:19.278003 kernel: kauditd_printk_skb: 26 callbacks suppressed Oct 2 19:28:19.278061 kernel: audit: type=1130 audit(1696274898.905:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.278088 kernel: audit: type=1130 audit(1696274899.038:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.278111 kernel: audit: type=1130 audit(1696274899.090:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.278133 kernel: audit: type=1131 audit(1696274899.090:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.278178 kernel: audit: type=1130 audit(1696274899.193:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.278204 kernel: audit: type=1131 audit(1696274899.193:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:18.905000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:18.836040 systemd[1]: mnt-oem3800691386.mount: Deactivated successfully. 
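Several of the files-stage operations above follow the same pattern: GET a release artifact (CNI plugins, crictl, kubeadm, kubelet), then log "file matches expected sum of: …" before writing it under /sysroot. A minimal sketch of that download-then-verify step, reusing the kubeadm URL and digest from the log (illustration only, not Ignition's implementation):

    # Download an artifact and check it against the SHA-512 digest Ignition logged.
    # URL and expected digest are copied verbatim from the op(d) entries above.
    import hashlib, urllib.request

    URL = ("https://storage.googleapis.com/kubernetes-release/release/"
           "v1.25.10/bin/linux/amd64/kubeadm")
    EXPECTED = ("43b8f213f1732c092e34008d5334e6622a6603f7ec5890c395ac911d50069d0d"
                "c11a81fa38436df40fc875a10fee6ee13aa285c017f1de210171065e847c99c5")

    def fetch_and_verify(url: str, expected_sha512: str) -> bytes:
        data = urllib.request.urlopen(url, timeout=60).read()
        digest = hashlib.sha512(data).hexdigest()
        if digest != expected_sha512:
            raise ValueError(f"checksum mismatch: {digest}")
        return data

    if __name__ == "__main__":
        print(f"verified {len(fetch_and_verify(URL, EXPECTED))} bytes")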
Oct 2 19:28:19.295023 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(15): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Oct 2 19:28:19.295023 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(15): oem config not found in "/usr/share/oem", looking on oem partition Oct 2 19:28:19.295023 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2981271258" Oct 2 19:28:19.295023 ignition[828]: CRITICAL : files: createFilesystemsFiles: createFiles: op(15): op(16): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2981271258": device or resource busy Oct 2 19:28:19.295023 ignition[828]: ERROR : files: createFilesystemsFiles: createFiles: op(15): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2981271258", trying btrfs: device or resource busy Oct 2 19:28:19.295023 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2981271258" Oct 2 19:28:19.295023 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(17): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2981271258" Oct 2 19:28:19.295023 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [started] unmounting "/mnt/oem2981271258" Oct 2 19:28:19.295023 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(15): op(18): [finished] unmounting "/mnt/oem2981271258" Oct 2 19:28:19.295023 ignition[828]: INFO : files: createFilesystemsFiles: createFiles: op(15): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Oct 2 19:28:19.295023 ignition[828]: INFO : files: op(19): [started] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:28:19.295023 ignition[828]: INFO : files: op(19): [finished] processing unit "coreos-metadata-sshkeys@.service" Oct 2 19:28:19.295023 ignition[828]: INFO : files: op(1a): [started] processing unit "oem-gce.service" Oct 2 19:28:19.295023 ignition[828]: INFO : files: op(1a): [finished] processing unit "oem-gce.service" Oct 2 19:28:19.295023 ignition[828]: INFO : files: op(1b): [started] processing unit "oem-gce-enable-oslogin.service" Oct 2 19:28:19.295023 ignition[828]: INFO : files: op(1b): [finished] processing unit "oem-gce-enable-oslogin.service" Oct 2 19:28:19.295023 ignition[828]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service" Oct 2 19:28:19.658163 kernel: audit: type=1130 audit(1696274899.344:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.658213 kernel: audit: type=1131 audit(1696274899.515:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.515000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:18.859299 systemd[1]: mnt-oem2981271258.mount: Deactivated successfully. 
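The files stage brackets every numbered operation with "[started]" and "[finished]" (or "[failed]", as in the ext4-then-btrfs OEM mount retries above). Below is a small, purely illustrative helper for pulling those transitions out of a saved journal; it relies only on the line format visible in this log and assumes one journal entry per input line.

    # Sketch: extract Ignition "op(N): [started]/[finished]/[failed]" transitions
    # from journal lines shaped like the entries above (one entry per line assumed).
    import re, sys

    OP_RE = re.compile(
        r"ignition\[\d+\]:.*?((?:op\(\w+\):\s*)+)\[(started|finished|failed)\]\s*(.*)")

    def iter_ops(lines):
        for line in lines:
            m = OP_RE.search(line)
            if m:
                ops = re.findall(r"op\((\w+)\)", m.group(1))
                yield "/".join(ops), m.group(2), m.group(3).strip()

    if __name__ == "__main__":
        for ops, state, detail in iter_ops(sys.stdin):
            print(f"{ops:10s} {state:8s} {detail}")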
Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(1e): [started] processing unit "prepare-critools.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(1e): [finished] processing unit "prepare-critools.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(20): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(20): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(21): [started] setting preset to enabled for "oem-gce.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(21): [finished] setting preset to enabled for "oem-gce.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(22): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(22): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(24): [started] setting preset to enabled for "prepare-critools.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-critools.service" Oct 2 19:28:19.685224 ignition[828]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:28:19.685224 ignition[828]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Oct 2 19:28:19.685224 ignition[828]: INFO : files: files passed Oct 2 19:28:20.088003 kernel: audit: type=1131 audit(1696274899.832:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:20.088052 kernel: audit: type=1131 audit(1696274899.899:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.832000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:19.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:20.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:18.877215 systemd[1]: Finished ignition-files.service. Oct 2 19:28:20.102032 initrd-setup-root-after-ignition[851]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Oct 2 19:28:20.116250 iscsid[693]: iscsid shutting down. Oct 2 19:28:20.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:20.132113 ignition[828]: INFO : Ignition finished successfully Oct 2 19:28:20.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:18.917437 systemd[1]: Starting initrd-setup-root-after-ignition.service... Oct 2 19:28:20.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:18.957007 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Oct 2 19:28:20.174000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:18.958211 systemd[1]: Starting ignition-quench.service... Oct 2 19:28:20.198000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:18.995468 systemd[1]: Finished initrd-setup-root-after-ignition.service. Oct 2 19:28:20.214000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.040586 systemd[1]: ignition-quench.service: Deactivated successfully. Oct 2 19:28:20.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.040744 systemd[1]: Finished ignition-quench.service. Oct 2 19:28:20.246000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:19.092208 systemd[1]: Reached target ignition-complete.target. Oct 2 19:28:20.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:20.263292 ignition[866]: INFO : Ignition 2.14.0 Oct 2 19:28:20.263292 ignition[866]: INFO : Stage: umount Oct 2 19:28:20.263292 ignition[866]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Oct 2 19:28:20.263292 ignition[866]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Oct 2 19:28:20.263292 ignition[866]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Oct 2 19:28:20.263292 ignition[866]: INFO : umount: umount passed Oct 2 19:28:20.263292 ignition[866]: INFO : Ignition finished successfully Oct 2 19:28:20.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.150154 systemd[1]: Starting initrd-parse-etc.service... Oct 2 19:28:20.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.193648 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Oct 2 19:28:19.193775 systemd[1]: Finished initrd-parse-etc.service. Oct 2 19:28:20.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.195343 systemd[1]: Reached target initrd-fs.target. Oct 2 19:28:20.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:20.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.256197 systemd[1]: Reached target initrd.target. Oct 2 19:28:19.285348 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Oct 2 19:28:19.286580 systemd[1]: Starting dracut-pre-pivot.service... Oct 2 19:28:19.321330 systemd[1]: Finished dracut-pre-pivot.service. Oct 2 19:28:19.347614 systemd[1]: Starting initrd-cleanup.service... Oct 2 19:28:19.392516 systemd[1]: Stopped target nss-lookup.target. Oct 2 19:28:19.407384 systemd[1]: Stopped target remote-cryptsetup.target. Oct 2 19:28:20.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.434407 systemd[1]: Stopped target timers.target. Oct 2 19:28:20.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:20.537000 audit: BPF prog-id=6 op=UNLOAD Oct 2 19:28:19.480330 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. 
Oct 2 19:28:19.480592 systemd[1]: Stopped dracut-pre-pivot.service. Oct 2 19:28:19.517598 systemd[1]: Stopped target initrd.target. Oct 2 19:28:20.582000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.573332 systemd[1]: Stopped target basic.target. Oct 2 19:28:20.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.586391 systemd[1]: Stopped target ignition-complete.target. Oct 2 19:28:20.612000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.603396 systemd[1]: Stopped target ignition-diskful.target. Oct 2 19:28:19.621376 systemd[1]: Stopped target initrd-root-device.target. Oct 2 19:28:20.636000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.639454 systemd[1]: Stopped target remote-fs.target. Oct 2 19:28:19.676370 systemd[1]: Stopped target remote-fs-pre.target. Oct 2 19:28:19.693366 systemd[1]: Stopped target sysinit.target. Oct 2 19:28:20.689000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.737441 systemd[1]: Stopped target local-fs.target. Oct 2 19:28:20.704000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.768309 systemd[1]: Stopped target local-fs-pre.target. Oct 2 19:28:20.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.785353 systemd[1]: Stopped target swap.target. Oct 2 19:28:19.809315 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Oct 2 19:28:19.809499 systemd[1]: Stopped dracut-pre-mount.service. Oct 2 19:28:20.750000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.834499 systemd[1]: Stopped target cryptsetup.target. Oct 2 19:28:20.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:19.888357 systemd[1]: dracut-initqueue.service: Deactivated successfully. Oct 2 19:28:20.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:20.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:19.888572 systemd[1]: Stopped dracut-initqueue.service. Oct 2 19:28:19.901622 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Oct 2 19:28:19.901979 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Oct 2 19:28:19.960449 systemd[1]: ignition-files.service: Deactivated successfully. Oct 2 19:28:19.960628 systemd[1]: Stopped ignition-files.service. Oct 2 19:28:20.861972 systemd-journald[190]: Received SIGTERM from PID 1 (n/a). Oct 2 19:28:19.976064 systemd[1]: Stopping ignition-mount.service... Oct 2 19:28:20.015452 systemd[1]: Stopping iscsid.service... Oct 2 19:28:20.045033 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Oct 2 19:28:20.045436 systemd[1]: Stopped kmod-static-nodes.service. Oct 2 19:28:20.083009 systemd[1]: Stopping sysroot-boot.service... Oct 2 19:28:20.095031 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Oct 2 19:28:20.095384 systemd[1]: Stopped systemd-udev-trigger.service. Oct 2 19:28:20.125424 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Oct 2 19:28:20.125604 systemd[1]: Stopped dracut-pre-trigger.service. Oct 2 19:28:20.144669 systemd[1]: sysroot-boot.mount: Deactivated successfully. Oct 2 19:28:20.145727 systemd[1]: iscsid.service: Deactivated successfully. Oct 2 19:28:20.145913 systemd[1]: Stopped iscsid.service. Oct 2 19:28:20.157000 systemd[1]: ignition-mount.service: Deactivated successfully. Oct 2 19:28:20.157142 systemd[1]: Stopped ignition-mount.service. Oct 2 19:28:20.177046 systemd[1]: sysroot-boot.service: Deactivated successfully. Oct 2 19:28:20.177204 systemd[1]: Stopped sysroot-boot.service. Oct 2 19:28:20.200788 systemd[1]: ignition-disks.service: Deactivated successfully. Oct 2 19:28:20.200977 systemd[1]: Stopped ignition-disks.service. Oct 2 19:28:20.216104 systemd[1]: ignition-kargs.service: Deactivated successfully. Oct 2 19:28:20.216231 systemd[1]: Stopped ignition-kargs.service. Oct 2 19:28:20.232156 systemd[1]: ignition-fetch.service: Deactivated successfully. Oct 2 19:28:20.232232 systemd[1]: Stopped ignition-fetch.service. Oct 2 19:28:20.248137 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Oct 2 19:28:20.248216 systemd[1]: Stopped ignition-fetch-offline.service. Oct 2 19:28:20.263080 systemd[1]: Stopped target paths.target. Oct 2 19:28:20.277002 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Oct 2 19:28:20.280930 systemd[1]: Stopped systemd-ask-password-console.path. Oct 2 19:28:20.293014 systemd[1]: Stopped target slices.target. Oct 2 19:28:20.306002 systemd[1]: Stopped target sockets.target. Oct 2 19:28:20.323056 systemd[1]: iscsid.socket: Deactivated successfully. Oct 2 19:28:20.323134 systemd[1]: Closed iscsid.socket. Oct 2 19:28:20.343230 systemd[1]: ignition-setup.service: Deactivated successfully. Oct 2 19:28:20.343306 systemd[1]: Stopped ignition-setup.service. Oct 2 19:28:20.372259 systemd[1]: initrd-setup-root.service: Deactivated successfully. Oct 2 19:28:20.372338 systemd[1]: Stopped initrd-setup-root.service. Oct 2 19:28:20.388399 systemd[1]: Stopping iscsiuio.service... Oct 2 19:28:20.403612 systemd[1]: iscsiuio.service: Deactivated successfully. Oct 2 19:28:20.403741 systemd[1]: Stopped iscsiuio.service. Oct 2 19:28:20.418414 systemd[1]: initrd-cleanup.service: Deactivated successfully. Oct 2 19:28:20.418555 systemd[1]: Finished initrd-cleanup.service. Oct 2 19:28:20.435217 systemd[1]: Stopped target network.target. 
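Nearly every audit record in this initrd teardown carries auid=4294967295 and ses=4294967295. That number is just the unsigned 32-bit form of -1, which the audit subsystem prints when no login UID or session is attached (as is the case for PID 1); a one-line check of the arithmetic:

    # 4294967295 == 2**32 - 1 == 0xFFFFFFFF: the audit "unset" login UID / session ID.
    assert 2**32 - 1 == 4294967295 == 0xFFFFFFFF
    print((-1) & 0xFFFFFFFF)  # prints 4294967295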
Oct 2 19:28:20.450015 systemd[1]: iscsiuio.socket: Deactivated successfully. Oct 2 19:28:20.450097 systemd[1]: Closed iscsiuio.socket. Oct 2 19:28:20.464242 systemd[1]: Stopping systemd-networkd.service... Oct 2 19:28:20.467907 systemd-networkd[683]: eth0: DHCPv6 lease lost Oct 2 19:28:20.870000 audit: BPF prog-id=9 op=UNLOAD Oct 2 19:28:20.488295 systemd[1]: Stopping systemd-resolved.service... Oct 2 19:28:20.502427 systemd[1]: systemd-resolved.service: Deactivated successfully. Oct 2 19:28:20.502547 systemd[1]: Stopped systemd-resolved.service. Oct 2 19:28:20.523752 systemd[1]: systemd-networkd.service: Deactivated successfully. Oct 2 19:28:20.523906 systemd[1]: Stopped systemd-networkd.service. Oct 2 19:28:20.539875 systemd[1]: systemd-networkd.socket: Deactivated successfully. Oct 2 19:28:20.539924 systemd[1]: Closed systemd-networkd.socket. Oct 2 19:28:20.555154 systemd[1]: Stopping network-cleanup.service... Oct 2 19:28:20.568957 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Oct 2 19:28:20.569085 systemd[1]: Stopped parse-ip-for-networkd.service. Oct 2 19:28:20.584102 systemd[1]: systemd-sysctl.service: Deactivated successfully. Oct 2 19:28:20.584203 systemd[1]: Stopped systemd-sysctl.service. Oct 2 19:28:20.599284 systemd[1]: systemd-modules-load.service: Deactivated successfully. Oct 2 19:28:20.599350 systemd[1]: Stopped systemd-modules-load.service. Oct 2 19:28:20.614252 systemd[1]: Stopping systemd-udevd.service... Oct 2 19:28:20.631573 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Oct 2 19:28:20.632301 systemd[1]: systemd-udevd.service: Deactivated successfully. Oct 2 19:28:20.632450 systemd[1]: Stopped systemd-udevd.service. Oct 2 19:28:20.639697 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Oct 2 19:28:20.639798 systemd[1]: Closed systemd-udevd-control.socket. Oct 2 19:28:20.660100 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Oct 2 19:28:20.660169 systemd[1]: Closed systemd-udevd-kernel.socket. Oct 2 19:28:20.675029 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Oct 2 19:28:20.675123 systemd[1]: Stopped dracut-pre-udev.service. Oct 2 19:28:20.691235 systemd[1]: dracut-cmdline.service: Deactivated successfully. Oct 2 19:28:20.691310 systemd[1]: Stopped dracut-cmdline.service. Oct 2 19:28:20.706212 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Oct 2 19:28:20.706282 systemd[1]: Stopped dracut-cmdline-ask.service. Oct 2 19:28:20.722331 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Oct 2 19:28:20.742946 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Oct 2 19:28:20.743073 systemd[1]: Stopped systemd-vconsole-setup.service. Oct 2 19:28:20.752791 systemd[1]: network-cleanup.service: Deactivated successfully. Oct 2 19:28:20.752985 systemd[1]: Stopped network-cleanup.service. Oct 2 19:28:20.775473 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Oct 2 19:28:20.775586 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Oct 2 19:28:20.790356 systemd[1]: Reached target initrd-switch-root.target. Oct 2 19:28:20.808096 systemd[1]: Starting initrd-switch-root.service... Oct 2 19:28:20.828981 systemd[1]: Switching root. Oct 2 19:28:20.873626 systemd-journald[190]: Journal stopped Oct 2 19:28:25.610250 kernel: SELinux: Class mctp_socket not defined in policy. Oct 2 19:28:25.610366 kernel: SELinux: Class anon_inode not defined in policy. 
Oct 2 19:28:25.610403 kernel: SELinux: the above unknown classes and permissions will be allowed Oct 2 19:28:25.610427 kernel: SELinux: policy capability network_peer_controls=1 Oct 2 19:28:25.610462 kernel: SELinux: policy capability open_perms=1 Oct 2 19:28:25.610484 kernel: SELinux: policy capability extended_socket_class=1 Oct 2 19:28:25.610507 kernel: SELinux: policy capability always_check_network=0 Oct 2 19:28:25.610536 kernel: SELinux: policy capability cgroup_seclabel=1 Oct 2 19:28:25.610557 kernel: SELinux: policy capability nnp_nosuid_transition=1 Oct 2 19:28:25.610580 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Oct 2 19:28:25.610603 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Oct 2 19:28:25.610628 systemd[1]: Successfully loaded SELinux policy in 113.663ms. Oct 2 19:28:25.610670 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 10.809ms. Oct 2 19:28:25.610699 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Oct 2 19:28:25.610725 systemd[1]: Detected virtualization kvm. Oct 2 19:28:25.610758 systemd[1]: Detected architecture x86-64. Oct 2 19:28:25.610781 systemd[1]: Detected first boot. Oct 2 19:28:25.610804 systemd[1]: Initializing machine ID from VM UUID. Oct 2 19:28:25.612310 systemd[1]: Populated /etc with preset unit settings. Oct 2 19:28:25.612341 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:28:25.612367 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:28:25.612399 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:28:25.612435 kernel: kauditd_printk_skb: 39 callbacks suppressed Oct 2 19:28:25.612460 kernel: audit: type=1334 audit(1696274904.720:86): prog-id=12 op=LOAD Oct 2 19:28:25.612481 kernel: audit: type=1334 audit(1696274904.720:87): prog-id=3 op=UNLOAD Oct 2 19:28:25.612501 kernel: audit: type=1334 audit(1696274904.726:88): prog-id=13 op=LOAD Oct 2 19:28:25.613460 kernel: audit: type=1334 audit(1696274904.733:89): prog-id=14 op=LOAD Oct 2 19:28:25.613488 kernel: audit: type=1334 audit(1696274904.733:90): prog-id=4 op=UNLOAD Oct 2 19:28:25.613516 kernel: audit: type=1334 audit(1696274904.733:91): prog-id=5 op=UNLOAD Oct 2 19:28:25.613544 kernel: audit: type=1334 audit(1696274904.740:92): prog-id=15 op=LOAD Oct 2 19:28:25.613566 kernel: audit: type=1334 audit(1696274904.740:93): prog-id=12 op=UNLOAD Oct 2 19:28:25.613588 kernel: audit: type=1334 audit(1696274904.768:94): prog-id=16 op=LOAD Oct 2 19:28:25.613610 kernel: audit: type=1334 audit(1696274904.774:95): prog-id=17 op=LOAD Oct 2 19:28:25.613633 systemd[1]: initrd-switch-root.service: Deactivated successfully. Oct 2 19:28:25.613676 systemd[1]: Stopped initrd-switch-root.service. Oct 2 19:28:25.613708 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Oct 2 19:28:25.613733 systemd[1]: Created slice system-addon\x2dconfig.slice. 
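While loading unit files, systemd flags two deprecated directives in locksmithd.service (CPUShares= superseded by CPUWeight=, MemoryLimit= by MemoryMax=) and a legacy /var/run path in docker.socket. A rough sketch of a scanner that reports the same two directive substitutions named in those messages; the unit path is taken from the warning itself.

    # Flag the deprecated directives systemd warns about above.
    # The old -> new mapping comes straight from the journal messages.
    import pathlib

    DEPRECATED = {"CPUShares=": "CPUWeight=", "MemoryLimit=": "MemoryMax="}

    def scan_unit(path: str) -> None:
        for lineno, line in enumerate(pathlib.Path(path).read_text().splitlines(), 1):
            for old, new in DEPRECATED.items():
                if line.lstrip().startswith(old):
                    print(f"{path}:{lineno}: {old} is deprecated, use {new}")

    if __name__ == "__main__":
        scan_unit("/usr/lib/systemd/system/locksmithd.service")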
Oct 2 19:28:25.613760 systemd[1]: Created slice system-addon\x2drun.slice. Oct 2 19:28:25.613784 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Oct 2 19:28:25.613822 systemd[1]: Created slice system-getty.slice. Oct 2 19:28:25.613846 systemd[1]: Created slice system-modprobe.slice. Oct 2 19:28:25.613871 systemd[1]: Created slice system-serial\x2dgetty.slice. Oct 2 19:28:25.613895 systemd[1]: Created slice system-system\x2dcloudinit.slice. Oct 2 19:28:25.613920 systemd[1]: Created slice system-systemd\x2dfsck.slice. Oct 2 19:28:25.613954 systemd[1]: Created slice user.slice. Oct 2 19:28:25.613978 systemd[1]: Started systemd-ask-password-console.path. Oct 2 19:28:25.614002 systemd[1]: Started systemd-ask-password-wall.path. Oct 2 19:28:25.614025 systemd[1]: Set up automount boot.automount. Oct 2 19:28:25.614048 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Oct 2 19:28:25.614072 systemd[1]: Stopped target initrd-switch-root.target. Oct 2 19:28:25.614096 systemd[1]: Stopped target initrd-fs.target. Oct 2 19:28:25.614119 systemd[1]: Stopped target initrd-root-fs.target. Oct 2 19:28:25.614144 systemd[1]: Reached target integritysetup.target. Oct 2 19:28:25.614170 systemd[1]: Reached target remote-cryptsetup.target. Oct 2 19:28:25.614194 systemd[1]: Reached target remote-fs.target. Oct 2 19:28:25.614217 systemd[1]: Reached target slices.target. Oct 2 19:28:25.614239 systemd[1]: Reached target swap.target. Oct 2 19:28:25.614269 systemd[1]: Reached target torcx.target. Oct 2 19:28:25.614293 systemd[1]: Reached target veritysetup.target. Oct 2 19:28:25.614317 systemd[1]: Listening on systemd-coredump.socket. Oct 2 19:28:25.614339 systemd[1]: Listening on systemd-initctl.socket. Oct 2 19:28:25.614364 systemd[1]: Listening on systemd-networkd.socket. Oct 2 19:28:25.614388 systemd[1]: Listening on systemd-udevd-control.socket. Oct 2 19:28:25.614416 systemd[1]: Listening on systemd-udevd-kernel.socket. Oct 2 19:28:25.614440 systemd[1]: Listening on systemd-userdbd.socket. Oct 2 19:28:25.614464 systemd[1]: Mounting dev-hugepages.mount... Oct 2 19:28:25.614488 systemd[1]: Mounting dev-mqueue.mount... Oct 2 19:28:25.614510 systemd[1]: Mounting media.mount... Oct 2 19:28:25.614535 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:28:25.614558 systemd[1]: Mounting sys-kernel-debug.mount... Oct 2 19:28:25.614581 systemd[1]: Mounting sys-kernel-tracing.mount... Oct 2 19:28:25.614603 systemd[1]: Mounting tmp.mount... Oct 2 19:28:25.614629 systemd[1]: Starting flatcar-tmpfiles.service... Oct 2 19:28:25.614653 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Oct 2 19:28:25.614677 systemd[1]: Starting kmod-static-nodes.service... Oct 2 19:28:25.614701 systemd[1]: Starting modprobe@configfs.service... Oct 2 19:28:25.614724 systemd[1]: Starting modprobe@dm_mod.service... Oct 2 19:28:25.614747 systemd[1]: Starting modprobe@drm.service... Oct 2 19:28:25.614770 systemd[1]: Starting modprobe@efi_pstore.service... Oct 2 19:28:25.614792 systemd[1]: Starting modprobe@fuse.service... Oct 2 19:28:25.614827 systemd[1]: Starting modprobe@loop.service... Oct 2 19:28:25.614854 kernel: fuse: init (API version 7.34) Oct 2 19:28:25.614878 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Oct 2 19:28:25.614902 systemd[1]: systemd-fsck-root.service: Deactivated successfully. 
Oct 2 19:28:25.614925 systemd[1]: Stopped systemd-fsck-root.service. Oct 2 19:28:25.614950 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Oct 2 19:28:25.614972 kernel: loop: module loaded Oct 2 19:28:25.614995 systemd[1]: Stopped systemd-fsck-usr.service. Oct 2 19:28:25.615017 systemd[1]: Stopped systemd-journald.service. Oct 2 19:28:25.615042 systemd[1]: Starting systemd-journald.service... Oct 2 19:28:25.615068 systemd[1]: Starting systemd-modules-load.service... Oct 2 19:28:25.615093 systemd[1]: Starting systemd-network-generator.service... Oct 2 19:28:25.615117 systemd[1]: Starting systemd-remount-fs.service... Oct 2 19:28:25.615144 systemd-journald[990]: Journal started Oct 2 19:28:25.615230 systemd-journald[990]: Runtime Journal (/run/log/journal/e652fbe180bc2201784a1b29740206da) is 8.0M, max 148.8M, 140.8M free. Oct 2 19:28:21.176000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Oct 2 19:28:21.326000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:28:21.326000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Oct 2 19:28:21.326000 audit: BPF prog-id=10 op=LOAD Oct 2 19:28:21.326000 audit: BPF prog-id=10 op=UNLOAD Oct 2 19:28:21.326000 audit: BPF prog-id=11 op=LOAD Oct 2 19:28:21.327000 audit: BPF prog-id=11 op=UNLOAD Oct 2 19:28:24.720000 audit: BPF prog-id=12 op=LOAD Oct 2 19:28:24.720000 audit: BPF prog-id=3 op=UNLOAD Oct 2 19:28:24.726000 audit: BPF prog-id=13 op=LOAD Oct 2 19:28:24.733000 audit: BPF prog-id=14 op=LOAD Oct 2 19:28:24.733000 audit: BPF prog-id=4 op=UNLOAD Oct 2 19:28:24.733000 audit: BPF prog-id=5 op=UNLOAD Oct 2 19:28:24.740000 audit: BPF prog-id=15 op=LOAD Oct 2 19:28:24.740000 audit: BPF prog-id=12 op=UNLOAD Oct 2 19:28:24.768000 audit: BPF prog-id=16 op=LOAD Oct 2 19:28:24.774000 audit: BPF prog-id=17 op=LOAD Oct 2 19:28:24.775000 audit: BPF prog-id=13 op=UNLOAD Oct 2 19:28:24.775000 audit: BPF prog-id=14 op=UNLOAD Oct 2 19:28:24.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:24.804000 audit: BPF prog-id=15 op=UNLOAD Oct 2 19:28:24.820000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:24.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:25.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.562000 audit: BPF prog-id=18 op=LOAD Oct 2 19:28:25.562000 audit: BPF prog-id=19 op=LOAD Oct 2 19:28:25.562000 audit: BPF prog-id=20 op=LOAD Oct 2 19:28:25.562000 audit: BPF prog-id=16 op=UNLOAD Oct 2 19:28:25.562000 audit: BPF prog-id=17 op=UNLOAD Oct 2 19:28:25.606000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Oct 2 19:28:25.606000 audit[990]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffcc31a3930 a2=4000 a3=7ffcc31a39cc items=0 ppid=1 pid=990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:25.606000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Oct 2 19:28:24.720471 systemd[1]: Queued start job for default target multi-user.target. Oct 2 19:28:21.505978 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:28:24.778088 systemd[1]: systemd-journald.service: Deactivated successfully. 
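The raw SYSCALL record above (arch=c000003e syscall=46 success=yes exit=60 a0=5 … comm="systemd-journal") decodes to journald being audited on an x86-64 sendmsg() call: 0xc000003e is AUDIT_ARCH_X86_64 (EM_X86_64 = 62 with the 64-bit and little-endian flag bits set), syscall 46 is sendmsg on that architecture, a0=5 is the socket file descriptor, and exit=60 is the byte count the call returned. A short decode of the numeric fields (constants are standard Linux definitions, stated here from memory):

    # Decode the numeric fields of the audit SYSCALL record above.
    arch = 0xC000003E                 # AUDIT_ARCH_X86_64 as printed by audit
    print(hex(arch & 0xFFFF))         # 0x3e == 62 == EM_X86_64
    print(bool(arch & 0x80000000))    # True: 64-bit ABI flag
    print(bool(arch & 0x40000000))    # True: little-endian flag
    # syscall 46 on x86-64 is sendmsg(2); a0=5 is the socket fd and exit=60
    # the number of bytes journald sent in that call.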
Oct 2 19:28:21.507084 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:28:21.507121 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:28:21.507176 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Oct 2 19:28:21.507196 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=debug msg="skipped missing lower profile" missing profile=oem Oct 2 19:28:21.507256 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Oct 2 19:28:21.507281 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Oct 2 19:28:21.507635 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Oct 2 19:28:21.507711 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Oct 2 19:28:21.507738 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Oct 2 19:28:21.508856 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Oct 2 19:28:21.508923 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Oct 2 19:28:21.508958 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.0: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.0 Oct 2 19:28:21.508985 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Oct 2 19:28:21.509017 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.0: no such file or directory" path=/var/lib/torcx/store/3510.3.0 Oct 2 19:28:21.509045 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:21Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Oct 2 19:28:24.113158 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:24Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:28:24.113482 /usr/lib/systemd/system-generators/torcx-generator[899]: 
time="2023-10-02T19:28:24Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:28:24.113631 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:24Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:28:24.113900 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:24Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Oct 2 19:28:24.113963 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:24Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Oct 2 19:28:24.114037 /usr/lib/systemd/system-generators/torcx-generator[899]: time="2023-10-02T19:28:24Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Oct 2 19:28:25.633862 systemd[1]: Starting systemd-udev-trigger.service... Oct 2 19:28:25.647838 systemd[1]: verity-setup.service: Deactivated successfully. Oct 2 19:28:25.654442 systemd[1]: Stopped verity-setup.service. Oct 2 19:28:25.658000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.672839 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Oct 2 19:28:25.681960 systemd[1]: Started systemd-journald.service. Oct 2 19:28:25.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.691264 systemd[1]: Mounted dev-hugepages.mount. Oct 2 19:28:25.698209 systemd[1]: Mounted dev-mqueue.mount. Oct 2 19:28:25.705215 systemd[1]: Mounted media.mount. Oct 2 19:28:25.713172 systemd[1]: Mounted sys-kernel-debug.mount. Oct 2 19:28:25.723153 systemd[1]: Mounted sys-kernel-tracing.mount. Oct 2 19:28:25.732151 systemd[1]: Mounted tmp.mount. Oct 2 19:28:25.739303 systemd[1]: Finished flatcar-tmpfiles.service. Oct 2 19:28:25.746000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.748447 systemd[1]: Finished kmod-static-nodes.service. Oct 2 19:28:25.755000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:25.757422 systemd[1]: modprobe@configfs.service: Deactivated successfully. Oct 2 19:28:25.757644 systemd[1]: Finished modprobe@configfs.service. Oct 2 19:28:25.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.766435 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Oct 2 19:28:25.766647 systemd[1]: Finished modprobe@dm_mod.service. Oct 2 19:28:25.773000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.775414 systemd[1]: modprobe@drm.service: Deactivated successfully. Oct 2 19:28:25.775631 systemd[1]: Finished modprobe@drm.service. Oct 2 19:28:25.782000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.784450 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Oct 2 19:28:25.784661 systemd[1]: Finished modprobe@efi_pstore.service. Oct 2 19:28:25.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.791000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.793423 systemd[1]: modprobe@fuse.service: Deactivated successfully. Oct 2 19:28:25.793637 systemd[1]: Finished modprobe@fuse.service. Oct 2 19:28:25.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.802409 systemd[1]: modprobe@loop.service: Deactivated successfully. Oct 2 19:28:25.802623 systemd[1]: Finished modprobe@loop.service. Oct 2 19:28:25.809000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:25.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.811443 systemd[1]: Finished systemd-modules-load.service. Oct 2 19:28:25.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.820422 systemd[1]: Finished systemd-network-generator.service. Oct 2 19:28:25.827000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.829435 systemd[1]: Finished systemd-remount-fs.service. Oct 2 19:28:25.836000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.838420 systemd[1]: Finished systemd-udev-trigger.service. Oct 2 19:28:25.845000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.847719 systemd[1]: Reached target network-pre.target. Oct 2 19:28:25.857445 systemd[1]: Mounting sys-fs-fuse-connections.mount... Oct 2 19:28:25.867405 systemd[1]: Mounting sys-kernel-config.mount... Oct 2 19:28:25.874984 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Oct 2 19:28:25.878055 systemd[1]: Starting systemd-hwdb-update.service... Oct 2 19:28:25.887721 systemd[1]: Starting systemd-journal-flush.service... Oct 2 19:28:25.896020 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Oct 2 19:28:25.897872 systemd[1]: Starting systemd-random-seed.service... Oct 2 19:28:25.905020 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Oct 2 19:28:25.906885 systemd[1]: Starting systemd-sysctl.service... Oct 2 19:28:25.909452 systemd-journald[990]: Time spent on flushing to /var/log/journal/e652fbe180bc2201784a1b29740206da is 64.973ms for 1150 entries. Oct 2 19:28:25.909452 systemd-journald[990]: System Journal (/var/log/journal/e652fbe180bc2201784a1b29740206da) is 8.0M, max 584.8M, 576.8M free. Oct 2 19:28:26.022503 systemd-journald[990]: Received client request to flush runtime journal. Oct 2 19:28:25.969000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:25.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:26.000000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:25.923889 systemd[1]: Starting systemd-sysusers.service... Oct 2 19:28:25.932782 systemd[1]: Starting systemd-udev-settle.service... Oct 2 19:28:25.943489 systemd[1]: Mounted sys-fs-fuse-connections.mount. Oct 2 19:28:26.025363 udevadm[1004]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Oct 2 19:28:25.952129 systemd[1]: Mounted sys-kernel-config.mount. Oct 2 19:28:25.962356 systemd[1]: Finished systemd-random-seed.service. Oct 2 19:28:25.971411 systemd[1]: Finished systemd-sysctl.service. Oct 2 19:28:25.983754 systemd[1]: Reached target first-boot-complete.target. Oct 2 19:28:25.993839 systemd[1]: Finished systemd-sysusers.service. Oct 2 19:28:26.023914 systemd[1]: Finished systemd-journal-flush.service. Oct 2 19:28:26.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:26.598230 systemd[1]: Finished systemd-hwdb-update.service. Oct 2 19:28:26.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:26.606000 audit: BPF prog-id=21 op=LOAD Oct 2 19:28:26.606000 audit: BPF prog-id=22 op=LOAD Oct 2 19:28:26.606000 audit: BPF prog-id=7 op=UNLOAD Oct 2 19:28:26.606000 audit: BPF prog-id=8 op=UNLOAD Oct 2 19:28:26.608960 systemd[1]: Starting systemd-udevd.service... Oct 2 19:28:26.631996 systemd-udevd[1007]: Using default interface naming scheme 'v252'. Oct 2 19:28:26.683312 systemd[1]: Started systemd-udevd.service. Oct 2 19:28:26.690000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:26.693000 audit: BPF prog-id=23 op=LOAD Oct 2 19:28:26.696032 systemd[1]: Starting systemd-networkd.service... Oct 2 19:28:26.708000 audit: BPF prog-id=24 op=LOAD Oct 2 19:28:26.709000 audit: BPF prog-id=25 op=LOAD Oct 2 19:28:26.709000 audit: BPF prog-id=26 op=LOAD Oct 2 19:28:26.712007 systemd[1]: Starting systemd-userdbd.service... Oct 2 19:28:26.768978 systemd[1]: Started systemd-userdbd.service. Oct 2 19:28:26.776000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:26.778204 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Oct 2 19:28:26.907179 systemd-networkd[1020]: lo: Link UP Oct 2 19:28:26.907192 systemd-networkd[1020]: lo: Gained carrier Oct 2 19:28:26.907919 systemd-networkd[1020]: Enumeration completed Oct 2 19:28:26.908055 systemd[1]: Started systemd-networkd.service. Oct 2 19:28:26.910177 systemd-networkd[1020]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
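The systemd-networkd lines above show eth0 being configured from the catch-all /usr/lib/systemd/network/zz-default.network unit just before the DHCPv4 lease below. As a minimal sketch of what parsing such a unit involves, the Python below reads an INI-style .network body with configparser; the [Match]/[Network] contents are assumed for illustration (the log does not print the real file), a match-everything Name with DHCP enabled being the conventional shape of a default unit.

    # Minimal sketch: parse a systemd-networkd .network unit of the kind
    # referenced by "Configuring with .../zz-default.network".  The body
    # below is an ASSUMED example, not the real Flatcar file.
    import configparser
    import io

    EXAMPLE_NETWORK_UNIT = """\
    [Match]
    Name=*

    [Network]
    DHCP=yes
    """

    def parse_network_unit(text: str) -> dict:
        # systemd unit files are INI-like; strict=False tolerates repeated sections.
        parser = configparser.ConfigParser(strict=False)
        parser.read_file(io.StringIO(text))
        return {section: dict(parser[section]) for section in parser.sections()}

    if __name__ == "__main__":
        unit = parse_network_unit(EXAMPLE_NETWORK_UNIT)
        # configparser lowercases option names by default.
        print("matches interfaces:", unit.get("Match", {}).get("name", "?"))
        print("DHCP enabled:", unit.get("Network", {}).get("dhcp", "no"))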
Oct 2 19:28:26.912530 systemd-networkd[1020]: eth0: Link UP Oct 2 19:28:26.912542 systemd-networkd[1020]: eth0: Gained carrier Oct 2 19:28:26.922858 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Oct 2 19:28:26.921000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:26.931135 systemd-networkd[1020]: eth0: DHCPv4 address 10.128.0.55/32, gateway 10.128.0.1 acquired from 169.254.169.254 Oct 2 19:28:26.945106 kernel: ACPI: button: Power Button [PWRF] Oct 2 19:28:26.945233 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input3 Oct 2 19:28:26.949907 kernel: ACPI: button: Sleep Button [SLPF] Oct 2 19:28:26.938000 audit[1023]: AVC avc: denied { confidentiality } for pid=1023 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Oct 2 19:28:26.938000 audit[1023]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=56498928fb70 a1=32194 a2=7f0137e06bc5 a3=5 items=106 ppid=1007 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:26.938000 audit: CWD cwd="/" Oct 2 19:28:26.938000 audit: PATH item=0 name=(null) inode=14046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=1 name=(null) inode=14047 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=2 name=(null) inode=14046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=3 name=(null) inode=14048 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=4 name=(null) inode=14046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=5 name=(null) inode=14049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=6 name=(null) inode=14049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=7 name=(null) inode=14050 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=8 name=(null) inode=14049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=9 name=(null) inode=14051 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=10 name=(null) inode=14049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=11 name=(null) inode=14052 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=12 name=(null) inode=14049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=13 name=(null) inode=14053 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=14 name=(null) inode=14049 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=15 name=(null) inode=14054 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=16 name=(null) inode=14046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=17 name=(null) inode=14055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=18 name=(null) inode=14055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=19 name=(null) inode=14056 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=20 name=(null) inode=14055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=21 name=(null) inode=14057 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=22 name=(null) inode=14055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=23 name=(null) inode=14058 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=24 name=(null) inode=14055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=25 name=(null) inode=14059 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 
audit: PATH item=26 name=(null) inode=14055 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=27 name=(null) inode=14060 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=28 name=(null) inode=14046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=29 name=(null) inode=14061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=30 name=(null) inode=14061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=31 name=(null) inode=14062 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=32 name=(null) inode=14061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=33 name=(null) inode=14063 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=34 name=(null) inode=14061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=35 name=(null) inode=14064 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=36 name=(null) inode=14061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=37 name=(null) inode=14065 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=38 name=(null) inode=14061 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=39 name=(null) inode=14066 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=40 name=(null) inode=14046 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=41 name=(null) inode=14067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=42 name=(null) inode=14067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=43 name=(null) inode=14068 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=44 name=(null) inode=14067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=45 name=(null) inode=14069 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=46 name=(null) inode=14067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=47 name=(null) inode=14070 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=48 name=(null) inode=14067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=49 name=(null) inode=14071 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=50 name=(null) inode=14067 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=51 name=(null) inode=14072 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=52 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=53 name=(null) inode=14073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=54 name=(null) inode=14073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=55 name=(null) inode=14074 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=56 name=(null) inode=14073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=57 name=(null) inode=14075 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=58 name=(null) inode=14073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 
cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=59 name=(null) inode=14076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=60 name=(null) inode=14076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=61 name=(null) inode=14077 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=62 name=(null) inode=14076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=63 name=(null) inode=14078 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=64 name=(null) inode=14076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=65 name=(null) inode=14079 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=66 name=(null) inode=14076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=67 name=(null) inode=14080 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=68 name=(null) inode=14076 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=69 name=(null) inode=14081 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=70 name=(null) inode=14073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=71 name=(null) inode=14082 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=72 name=(null) inode=14082 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=73 name=(null) inode=14083 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=74 name=(null) inode=14082 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=75 name=(null) inode=14084 dev=00:0b 
mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=76 name=(null) inode=14082 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=77 name=(null) inode=14085 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=78 name=(null) inode=14082 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=79 name=(null) inode=14086 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:27.000870 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1022) Oct 2 19:28:26.938000 audit: PATH item=80 name=(null) inode=14082 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=81 name=(null) inode=14087 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=82 name=(null) inode=14073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=83 name=(null) inode=14088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=84 name=(null) inode=14088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=85 name=(null) inode=14089 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=86 name=(null) inode=14088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=87 name=(null) inode=14090 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=88 name=(null) inode=14088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=89 name=(null) inode=14091 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=90 name=(null) inode=14088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: 
PATH item=91 name=(null) inode=14092 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=92 name=(null) inode=14088 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=93 name=(null) inode=14093 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=94 name=(null) inode=14073 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=95 name=(null) inode=14094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=96 name=(null) inode=14094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=97 name=(null) inode=14095 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=98 name=(null) inode=14094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=99 name=(null) inode=14096 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=100 name=(null) inode=14094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=101 name=(null) inode=14097 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=102 name=(null) inode=14094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=103 name=(null) inode=14098 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=104 name=(null) inode=14094 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PATH item=105 name=(null) inode=14099 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Oct 2 19:28:26.938000 audit: PROCTITLE proctitle="(udev-worker)" Oct 2 19:28:27.021840 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Oct 2 19:28:27.026865 kernel: EDAC MC: Ver: 3.0.0 Oct 2 19:28:27.072875 kernel: input: ImExPS/2 Generic Explorer Mouse as 
/devices/platform/i8042/serio1/input/input4 Oct 2 19:28:27.100210 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Oct 2 19:28:27.104015 kernel: mousedev: PS/2 mouse device common for all mice Oct 2 19:28:27.119465 systemd[1]: Finished systemd-udev-settle.service. Oct 2 19:28:27.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:27.129721 systemd[1]: Starting lvm2-activation-early.service... Oct 2 19:28:27.160497 lvm[1044]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:28:27.189171 systemd[1]: Finished lvm2-activation-early.service. Oct 2 19:28:27.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:27.198156 systemd[1]: Reached target cryptsetup.target. Oct 2 19:28:27.208550 systemd[1]: Starting lvm2-activation.service... Oct 2 19:28:27.214506 lvm[1045]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Oct 2 19:28:27.244340 systemd[1]: Finished lvm2-activation.service. Oct 2 19:28:27.251000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:27.253190 systemd[1]: Reached target local-fs-pre.target. Oct 2 19:28:27.262013 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Oct 2 19:28:27.262066 systemd[1]: Reached target local-fs.target. Oct 2 19:28:27.270040 systemd[1]: Reached target machines.target. Oct 2 19:28:27.280633 systemd[1]: Starting ldconfig.service... Oct 2 19:28:27.288921 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Oct 2 19:28:27.289013 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:28:27.290567 systemd[1]: Starting systemd-boot-update.service... Oct 2 19:28:27.300704 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Oct 2 19:28:27.312685 systemd[1]: Starting systemd-machine-id-commit.service... Oct 2 19:28:27.313171 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:28:27.313282 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Oct 2 19:28:27.315496 systemd[1]: Starting systemd-tmpfiles-setup.service... Oct 2 19:28:27.316523 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1047 (bootctl) Oct 2 19:28:27.320014 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Oct 2 19:28:27.341858 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Oct 2 19:28:27.350000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:27.366496 systemd-tmpfiles[1051]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. 
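The systemd-tmpfiles warnings here (and the two that follow) mean more than one tmpfiles.d fragment declares the same path, and the later declaration is ignored. A minimal sketch of how such duplicates can be spotted: it scans tmpfiles.d-style lines (Type Path Mode User Group Age Argument) and reports paths claimed by more than one fragment; the two fragment bodies are invented for illustration, only the column layout is standard.

    # Minimal sketch: find duplicate tmpfiles.d paths, mirroring the
    # "Duplicate line for path ..., ignoring." warnings above.
    from collections import defaultdict

    FRAGMENTS = {  # ASSUMED example fragments
        "legacy.conf": "d /run/lock 0755 root root -\n",
        "example.conf": "d /run/lock 1777 root root -\nd /run/example 0755 root root -\n",
    }

    def duplicate_paths(fragments: dict) -> dict:
        seen = defaultdict(list)
        for name, body in fragments.items():
            for line in body.splitlines():
                line = line.strip()
                if not line or line.startswith("#"):
                    continue
                fields = line.split()
                if len(fields) >= 2:
                    seen[fields[1]].append(name)   # fields[1] is the path column
        return {path: files for path, files in seen.items() if len(files) > 1}

    if __name__ == "__main__":
        for path, files in duplicate_paths(FRAGMENTS).items():
            print(f"Duplicate line for path {path!r} in: {', '.join(files)}")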
Oct 2 19:28:27.379861 systemd-tmpfiles[1051]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Oct 2 19:28:27.395229 systemd-tmpfiles[1051]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Oct 2 19:28:27.504880 systemd-fsck[1056]: fsck.fat 4.2 (2021-01-31) Oct 2 19:28:27.504880 systemd-fsck[1056]: /dev/sda1: 789 files, 115069/258078 clusters Oct 2 19:28:27.509480 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Oct 2 19:28:27.517000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:27.520991 systemd[1]: Mounting boot.mount... Oct 2 19:28:27.587206 systemd[1]: Mounted boot.mount. Oct 2 19:28:27.612006 systemd[1]: Finished systemd-boot-update.service. Oct 2 19:28:27.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:27.761042 systemd[1]: Finished systemd-tmpfiles-setup.service. Oct 2 19:28:27.768000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:27.772067 systemd[1]: Starting audit-rules.service... Oct 2 19:28:27.781525 systemd[1]: Starting clean-ca-certificates.service... Oct 2 19:28:27.791903 systemd[1]: Starting oem-gce-enable-oslogin.service... Oct 2 19:28:27.802231 systemd[1]: Starting systemd-journal-catalog-update.service... Oct 2 19:28:27.812000 audit: BPF prog-id=27 op=LOAD Oct 2 19:28:27.816136 systemd[1]: Starting systemd-resolved.service... Oct 2 19:28:27.822000 audit: BPF prog-id=28 op=LOAD Oct 2 19:28:27.826030 systemd[1]: Starting systemd-timesyncd.service... Oct 2 19:28:27.841000 audit[1082]: SYSTEM_BOOT pid=1082 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Oct 2 19:28:27.835000 systemd[1]: Starting systemd-update-utmp.service... Oct 2 19:28:27.844503 systemd[1]: Finished clean-ca-certificates.service. Oct 2 19:28:27.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:27.853432 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully. Oct 2 19:28:27.853705 systemd[1]: Finished oem-gce-enable-oslogin.service. Oct 2 19:28:27.860000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:27.860000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=oem-gce-enable-oslogin comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:27.864950 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Oct 2 19:28:27.868575 systemd[1]: Finished systemd-update-utmp.service. Oct 2 19:28:27.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:27.962999 systemd-resolved[1075]: Positive Trust Anchors: Oct 2 19:28:27.963025 systemd-resolved[1075]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Oct 2 19:28:27.963089 systemd-resolved[1075]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Oct 2 19:28:27.986000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Oct 2 19:28:27.986000 audit[1090]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff60298300 a2=420 a3=0 items=0 ppid=1060 pid=1090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:27.986000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Oct 2 19:28:27.989087 augenrules[1090]: No rules Oct 2 19:28:27.989771 systemd[1]: Finished audit-rules.service. Oct 2 19:28:28.006613 systemd[1]: Finished systemd-journal-catalog-update.service. Oct 2 19:28:28.023161 systemd-resolved[1075]: Defaulting to hostname 'linux'. Oct 2 19:28:28.026210 systemd[1]: Started systemd-resolved.service. Oct 2 19:28:28.035119 systemd[1]: Reached target network.target. Oct 2 19:28:28.043968 systemd[1]: Reached target nss-lookup.target. Oct 2 19:28:28.103961 systemd[1]: Started systemd-timesyncd.service. Oct 2 19:28:28.105779 systemd-timesyncd[1078]: Contacted time server 169.254.169.254:123 (169.254.169.254). Oct 2 19:28:28.105879 systemd-timesyncd[1078]: Initial clock synchronization to Mon 2023-10-02 19:28:28.148695 UTC. Oct 2 19:28:28.114830 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Oct 2 19:28:28.115771 systemd[1]: Finished systemd-machine-id-commit.service. Oct 2 19:28:28.124106 systemd[1]: Reached target time-set.target. Oct 2 19:28:28.193744 ldconfig[1046]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Oct 2 19:28:28.200156 systemd[1]: Finished ldconfig.service. Oct 2 19:28:28.208779 systemd[1]: Starting systemd-update-done.service... Oct 2 19:28:28.218922 systemd[1]: Finished systemd-update-done.service. Oct 2 19:28:28.228152 systemd[1]: Reached target sysinit.target. Oct 2 19:28:28.237107 systemd[1]: Started motdgen.path. Oct 2 19:28:28.244068 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Oct 2 19:28:28.254224 systemd[1]: Started logrotate.timer. Oct 2 19:28:28.261189 systemd[1]: Started mdadm.timer. 
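systemd-timesyncd reports contacting 169.254.169.254:123, the GCE metadata server acting as the instance's NTP source. Below is a minimal SNTP client sketch along the same lines: it sends a 48-byte mode-3 (client) request and decodes the transmit timestamp from the reply. The metadata address only answers from inside a GCE guest, so substitute another NTP server elsewhere.

    # Minimal SNTP client sketch, after the systemd-timesyncd lines above.
    import socket
    import struct
    import time

    NTP_SERVER = "169.254.169.254"   # GCE metadata server, as seen in the log
    NTP_EPOCH_OFFSET = 2208988800    # seconds between 1900-01-01 and 1970-01-01

    def sntp_time(server: str = NTP_SERVER, timeout: float = 2.0) -> float:
        request = b"\x1b" + 47 * b"\0"          # LI=0, VN=3, Mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.settimeout(timeout)
            sock.sendto(request, (server, 123))
            reply, _ = sock.recvfrom(48)
        seconds = struct.unpack("!I", reply[40:44])[0]   # transmit timestamp, integer part
        return seconds - NTP_EPOCH_OFFSET

    if __name__ == "__main__":
        print("server time:", time.ctime(sntp_time()))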
Oct 2 19:28:28.268004 systemd[1]: Started systemd-tmpfiles-clean.timer. Oct 2 19:28:28.276011 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Oct 2 19:28:28.276078 systemd[1]: Reached target paths.target. Oct 2 19:28:28.282991 systemd[1]: Reached target timers.target. Oct 2 19:28:28.290403 systemd[1]: Listening on dbus.socket. Oct 2 19:28:28.299370 systemd[1]: Starting docker.socket... Oct 2 19:28:28.309878 systemd[1]: Listening on sshd.socket. Oct 2 19:28:28.317176 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:28:28.317984 systemd[1]: Listening on docker.socket. Oct 2 19:28:28.325158 systemd[1]: Reached target sockets.target. Oct 2 19:28:28.333988 systemd[1]: Reached target basic.target. Oct 2 19:28:28.341032 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:28:28.341087 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Oct 2 19:28:28.342679 systemd[1]: Starting containerd.service... Oct 2 19:28:28.351433 systemd[1]: Starting coreos-metadata-sshkeys@core.service... Oct 2 19:28:28.361938 systemd[1]: Starting dbus.service... Oct 2 19:28:28.370181 systemd[1]: Starting enable-oem-cloudinit.service... Oct 2 19:28:28.378832 systemd[1]: Starting extend-filesystems.service... Oct 2 19:28:28.391152 jq[1102]: false Oct 2 19:28:28.385978 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Oct 2 19:28:28.387898 systemd[1]: Starting motdgen.service... Oct 2 19:28:28.396781 systemd[1]: Starting oem-gce.service... Oct 2 19:28:28.405693 systemd[1]: Starting prepare-cni-plugins.service... Oct 2 19:28:28.414770 systemd[1]: Starting prepare-critools.service... Oct 2 19:28:28.423922 systemd[1]: Starting ssh-key-proc-cmdline.service... Oct 2 19:28:28.436651 systemd[1]: Starting sshd-keygen.service... Oct 2 19:28:28.448983 systemd[1]: Starting systemd-logind.service... Oct 2 19:28:28.455994 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Oct 2 19:28:28.456119 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2). Oct 2 19:28:28.456914 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Oct 2 19:28:28.457578 extend-filesystems[1103]: Found sda Oct 2 19:28:28.458207 systemd[1]: Starting update-engine.service... Oct 2 19:28:28.472877 systemd[1]: Starting update-ssh-keys-after-ignition.service... 
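Several of the units above (dbus.socket, docker.socket, sshd.socket) are socket units: systemd binds the listening socket itself and hands it to the service when traffic arrives. The sketch below shows the receiving side of that hand-off in generic form, not Docker's or sshd's actual code: systemd exports LISTEN_PID and LISTEN_FDS and passes the pre-bound sockets as file descriptors starting at 3.

    # Minimal sketch of the receiving end of systemd socket activation.
    import os
    import socket

    SD_LISTEN_FDS_START = 3

    def listen_fds() -> list:
        if os.environ.get("LISTEN_PID") != str(os.getpid()):
            return []                                 # not socket-activated
        count = int(os.environ.get("LISTEN_FDS", "0"))
        return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]

    if __name__ == "__main__":
        sockets = listen_fds()
        if not sockets:
            print("not started by a systemd .socket unit; nothing to accept on")
        else:
            conn, peer = sockets[0].accept()          # assumes a listening stream socket
            conn.sendall(b"hello from a socket-activated sketch\n")
            conn.close()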
Oct 2 19:28:28.475280 extend-filesystems[1103]: Found sda1 Oct 2 19:28:28.475280 extend-filesystems[1103]: Found sda2 Oct 2 19:28:28.475280 extend-filesystems[1103]: Found sda3 Oct 2 19:28:28.475280 extend-filesystems[1103]: Found usr Oct 2 19:28:28.475280 extend-filesystems[1103]: Found sda4 Oct 2 19:28:28.475280 extend-filesystems[1103]: Found sda6 Oct 2 19:28:28.475280 extend-filesystems[1103]: Found sda7 Oct 2 19:28:28.475280 extend-filesystems[1103]: Found sda9 Oct 2 19:28:28.475280 extend-filesystems[1103]: Checking size of /dev/sda9 Oct 2 19:28:28.539978 dbus-daemon[1101]: [system] SELinux support is enabled Oct 2 19:28:28.563542 jq[1125]: true Oct 2 19:28:28.484432 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Oct 2 19:28:28.563774 extend-filesystems[1103]: Resized partition /dev/sda9 Oct 2 19:28:28.555335 dbus-daemon[1101]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1020 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Oct 2 19:28:28.577929 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks Oct 2 19:28:28.484713 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Oct 2 19:28:28.578453 extend-filesystems[1140]: resize2fs 1.46.5 (30-Dec-2021) Oct 2 19:28:28.486011 systemd[1]: motdgen.service: Deactivated successfully. Oct 2 19:28:28.486284 systemd[1]: Finished motdgen.service. Oct 2 19:28:28.593537 tar[1130]: ./ Oct 2 19:28:28.593537 tar[1130]: ./macvlan Oct 2 19:28:28.506131 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Oct 2 19:28:28.594143 mkfs.ext4[1137]: mke2fs 1.46.5 (30-Dec-2021) Oct 2 19:28:28.594143 mkfs.ext4[1137]: Discarding device blocks: 0/262144\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008 \u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008\u0008done Oct 2 19:28:28.594143 mkfs.ext4[1137]: Creating filesystem with 262144 4k blocks and 65536 inodes Oct 2 19:28:28.594143 mkfs.ext4[1137]: Filesystem UUID: 30c4cc21-deb7-42c4-899f-d61a2b2295f7 Oct 2 19:28:28.594143 mkfs.ext4[1137]: Superblock backups stored on blocks: Oct 2 19:28:28.594143 mkfs.ext4[1137]: 32768, 98304, 163840, 229376 Oct 2 19:28:28.594143 mkfs.ext4[1137]: Allocating group tables: 0/8\u0008\u0008\u0008 \u0008\u0008\u0008done Oct 2 19:28:28.594143 mkfs.ext4[1137]: Writing inode tables: 0/8\u0008\u0008\u0008 \u0008\u0008\u0008done Oct 2 19:28:28.594143 mkfs.ext4[1137]: Creating journal (8192 blocks): done Oct 2 19:28:28.594143 mkfs.ext4[1137]: Writing superblocks and filesystem accounting information: 0/8\u0008\u0008\u0008 \u0008\u0008\u0008done Oct 2 19:28:28.506395 systemd[1]: Finished ssh-key-proc-cmdline.service. Oct 2 19:28:28.595038 jq[1134]: true Oct 2 19:28:28.540239 systemd[1]: Started dbus.service. Oct 2 19:28:28.594578 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Oct 2 19:28:28.594643 systemd[1]: Reached target system-config.target. Oct 2 19:28:28.597215 dbus-daemon[1101]: [system] Successfully activated service 'org.freedesktop.systemd1' Oct 2 19:28:28.603042 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). 
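The EXT4 messages above show the root filesystem on /dev/sda9 being grown online from 1617920 to 2538491 4 KiB blocks, which is what extend-filesystems.service is for. The short calculation below converts those block counts from the log into sizes, just to make the numbers concrete.

    # Worked numbers for the EXT4 resize above: block counts from the log
    # times the 4 KiB block size give the before/after root filesystem size.
    BLOCK_SIZE = 4096                 # bytes, "(4k) blocks" per the log
    OLD_BLOCKS = 1_617_920
    NEW_BLOCKS = 2_538_491

    def gib(blocks: int) -> float:
        return blocks * BLOCK_SIZE / 2**30

    if __name__ == "__main__":
        print(f"before: {gib(OLD_BLOCKS):.2f} GiB")   # ~6.17 GiB
        print(f"after:  {gib(NEW_BLOCKS):.2f} GiB")   # ~9.68 GiB
        print(f"growth: {gib(NEW_BLOCKS - OLD_BLOCKS):.2f} GiB")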
Oct 2 19:28:28.603086 systemd[1]: Reached target user-config.target. Oct 2 19:28:28.623555 systemd[1]: Starting systemd-hostnamed.service... Oct 2 19:28:28.631365 tar[1132]: crictl Oct 2 19:28:28.638855 kernel: EXT4-fs (sda9): resized filesystem to 2538491 Oct 2 19:28:28.654581 umount[1150]: umount: /var/lib/flatcar-oem-gce.img: not mounted. Oct 2 19:28:28.657880 extend-filesystems[1140]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Oct 2 19:28:28.657880 extend-filesystems[1140]: old_desc_blocks = 1, new_desc_blocks = 2 Oct 2 19:28:28.657880 extend-filesystems[1140]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long. Oct 2 19:28:28.711002 kernel: loop0: detected capacity change from 0 to 2097152 Oct 2 19:28:28.711054 extend-filesystems[1103]: Resized filesystem in /dev/sda9 Oct 2 19:28:28.660200 systemd[1]: extend-filesystems.service: Deactivated successfully. Oct 2 19:28:28.721147 update_engine[1124]: I1002 19:28:28.719322 1124 main.cc:92] Flatcar Update Engine starting Oct 2 19:28:28.660456 systemd[1]: Finished extend-filesystems.service. Oct 2 19:28:28.735851 kernel: EXT4-fs (loop0): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Oct 2 19:28:28.737701 systemd[1]: Started update-engine.service. Oct 2 19:28:28.738538 update_engine[1124]: I1002 19:28:28.738374 1124 update_check_scheduler.cc:74] Next update check in 2m25s Oct 2 19:28:28.749420 systemd[1]: Started locksmithd.service. Oct 2 19:28:28.798489 env[1136]: time="2023-10-02T19:28:28.798420084Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Oct 2 19:28:28.821939 bash[1169]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:28:28.823576 systemd[1]: Finished update-ssh-keys-after-ignition.service. Oct 2 19:28:28.839652 dbus-daemon[1101]: [system] Successfully activated service 'org.freedesktop.hostname1' Oct 2 19:28:28.839915 systemd[1]: Started systemd-hostnamed.service. Oct 2 19:28:28.840916 dbus-daemon[1101]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1151 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Oct 2 19:28:28.854331 systemd[1]: Starting polkit.service... Oct 2 19:28:28.936876 env[1136]: time="2023-10-02T19:28:28.907240195Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Oct 2 19:28:28.937316 env[1136]: time="2023-10-02T19:28:28.937275042Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:28:28.942625 env[1136]: time="2023-10-02T19:28:28.942547143Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.132-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:28:28.951985 systemd-networkd[1020]: eth0: Gained IPv6LL Oct 2 19:28:28.962377 coreos-metadata[1100]: Oct 02 19:28:28.962 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1 Oct 2 19:28:28.966836 coreos-metadata[1100]: Oct 02 19:28:28.966 INFO Fetch failed with 404: resource not found Oct 2 19:28:28.966836 coreos-metadata[1100]: Oct 02 19:28:28.966 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1 Oct 2 19:28:28.967155 env[1136]: time="2023-10-02T19:28:28.967107612Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:28:28.967680 coreos-metadata[1100]: Oct 02 19:28:28.967 INFO Fetch successful Oct 2 19:28:28.967680 coreos-metadata[1100]: Oct 02 19:28:28.967 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1 Oct 2 19:28:28.968471 coreos-metadata[1100]: Oct 02 19:28:28.968 INFO Fetch failed with 404: resource not found Oct 2 19:28:28.968471 coreos-metadata[1100]: Oct 02 19:28:28.968 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1 Oct 2 19:28:28.969041 env[1136]: time="2023-10-02T19:28:28.968976871Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:28:28.969347 coreos-metadata[1100]: Oct 02 19:28:28.969 INFO Fetch failed with 404: resource not found Oct 2 19:28:28.969347 coreos-metadata[1100]: Oct 02 19:28:28.969 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1 Oct 2 19:28:28.970551 coreos-metadata[1100]: Oct 02 19:28:28.970 INFO Fetch successful Oct 2 19:28:28.972697 unknown[1100]: wrote ssh authorized keys file for user: core Oct 2 19:28:28.979843 env[1136]: time="2023-10-02T19:28:28.979769498Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Oct 2 19:28:28.990626 env[1136]: time="2023-10-02T19:28:28.990546176Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Oct 2 19:28:28.990795 env[1136]: time="2023-10-02T19:28:28.990772495Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Oct 2 19:28:28.991428 tar[1130]: ./static Oct 2 19:28:28.992029 env[1136]: time="2023-10-02T19:28:28.991970382Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:28:28.992613 env[1136]: time="2023-10-02T19:28:28.992584073Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Oct 2 19:28:28.993144 env[1136]: time="2023-10-02T19:28:28.993098914Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Oct 2 19:28:28.993288 env[1136]: time="2023-10-02T19:28:28.993263909Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Oct 2 19:28:28.993518 env[1136]: time="2023-10-02T19:28:28.993490099Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Oct 2 19:28:28.993685 env[1136]: time="2023-10-02T19:28:28.993661851Z" level=info msg="metadata content store policy set" policy=shared Oct 2 19:28:28.998185 systemd-logind[1121]: Watching system buttons on /dev/input/event1 (Power Button) Oct 2 19:28:28.998840 update-ssh-keys[1178]: Updated "/home/core/.ssh/authorized_keys" Oct 2 19:28:28.999745 systemd[1]: Finished coreos-metadata-sshkeys@core.service. Oct 2 19:28:29.000279 systemd-logind[1121]: Watching system buttons on /dev/input/event2 (Sleep Button) Oct 2 19:28:29.000452 systemd-logind[1121]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Oct 2 19:28:29.001119 systemd-logind[1121]: New seat seat0. Oct 2 19:28:29.003693 env[1136]: time="2023-10-02T19:28:29.003633898Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Oct 2 19:28:29.004043 env[1136]: time="2023-10-02T19:28:29.003990804Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Oct 2 19:28:29.004187 env[1136]: time="2023-10-02T19:28:29.004163460Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Oct 2 19:28:29.004386 env[1136]: time="2023-10-02T19:28:29.004359230Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Oct 2 19:28:29.004592 env[1136]: time="2023-10-02T19:28:29.004570596Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Oct 2 19:28:29.004742 env[1136]: time="2023-10-02T19:28:29.004721266Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Oct 2 19:28:29.004877 env[1136]: time="2023-10-02T19:28:29.004852631Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Oct 2 19:28:29.005042 env[1136]: time="2023-10-02T19:28:29.005018670Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Oct 2 19:28:29.005153 env[1136]: time="2023-10-02T19:28:29.005132027Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Oct 2 19:28:29.005265 env[1136]: time="2023-10-02T19:28:29.005244753Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Oct 2 19:28:29.005394 env[1136]: time="2023-10-02T19:28:29.005351224Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Oct 2 19:28:29.005500 env[1136]: time="2023-10-02T19:28:29.005478541Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Oct 2 19:28:29.005748 env[1136]: time="2023-10-02T19:28:29.005724555Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." 
type=io.containerd.runtime.v2 Oct 2 19:28:29.006029 env[1136]: time="2023-10-02T19:28:29.005994012Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Oct 2 19:28:29.006687 env[1136]: time="2023-10-02T19:28:29.006655509Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Oct 2 19:28:29.008934 env[1136]: time="2023-10-02T19:28:29.008897955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.009170 env[1136]: time="2023-10-02T19:28:29.009139248Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Oct 2 19:28:29.009465 env[1136]: time="2023-10-02T19:28:29.009439905Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.010218 env[1136]: time="2023-10-02T19:28:29.009808979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.012714 env[1136]: time="2023-10-02T19:28:29.012683453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.012908 env[1136]: time="2023-10-02T19:28:29.012881708Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.013068 env[1136]: time="2023-10-02T19:28:29.013044342Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.013263 env[1136]: time="2023-10-02T19:28:29.013238686Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.013447 env[1136]: time="2023-10-02T19:28:29.013423245Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.013566 env[1136]: time="2023-10-02T19:28:29.013545515Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.013901 env[1136]: time="2023-10-02T19:28:29.013873517Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Oct 2 19:28:29.014194 env[1136]: time="2023-10-02T19:28:29.014171907Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.015099 env[1136]: time="2023-10-02T19:28:29.015069377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.015226 env[1136]: time="2023-10-02T19:28:29.015204714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.015326 env[1136]: time="2023-10-02T19:28:29.015306814Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Oct 2 19:28:29.015587 env[1136]: time="2023-10-02T19:28:29.015560617Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Oct 2 19:28:29.016063 env[1136]: time="2023-10-02T19:28:29.016034536Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Oct 2 19:28:29.017067 systemd[1]: Started systemd-logind.service. 
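The containerd messages above record the snapshotter probe: aufs is skipped because no aufs module exists for 5.15.132-flatcar, and the btrfs/zfs snapshotters are skipped because /var/lib/containerd sits on ext4, leaving overlayfs as the active snapshotter. A minimal shell sketch of the same checks, run on the host (paths as logged):
    modprobe aufs || echo "aufs module unavailable"    # fails here, matching the skip-plugin message
    findmnt -T /var/lib/containerd -no FSTYPE          # prints ext4, so the btrfs/zfs snapshotters are skipped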
Oct 2 19:28:29.017541 env[1136]: time="2023-10-02T19:28:29.017511710Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Oct 2 19:28:29.017934 env[1136]: time="2023-10-02T19:28:29.017903160Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Oct 2 19:28:29.022057 env[1136]: time="2023-10-02T19:28:29.021793949Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Oct 2 19:28:29.026777 env[1136]: time="2023-10-02T19:28:29.022292644Z" level=info msg="Connect containerd service" Oct 2 19:28:29.026777 env[1136]: time="2023-10-02T19:28:29.022361771Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Oct 2 19:28:29.026777 env[1136]: time="2023-10-02T19:28:29.024443834Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Oct 2 19:28:29.026777 env[1136]: time="2023-10-02T19:28:29.024897608Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Oct 2 19:28:29.026777 env[1136]: time="2023-10-02T19:28:29.024966075Z" level=info msg=serving... address=/run/containerd/containerd.sock Oct 2 19:28:29.026777 env[1136]: time="2023-10-02T19:28:29.025035523Z" level=info msg="containerd successfully booted in 0.227888s" Oct 2 19:28:29.027425 systemd[1]: Started containerd.service. 
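With containerd now serving on /run/containerd/containerd.sock and the CRI plugin loaded (runc with SystemdCgroup, pause image registry.k8s.io/pause:3.6), the crictl binary unpacked earlier by prepare-critools can be pointed at the same socket. A minimal sketch, assuming crictl is on PATH:
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock version   # confirms the CRI endpoint answers
    crictl --runtime-endpoint unix:///run/containerd/containerd.sock info      # dumps the runtime status and config echoed above
The "failed to load cni during init" error is expected at this stage; it clears once a network configuration lands in /etc/cni/net.d.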
Oct 2 19:28:29.037977 env[1136]: time="2023-10-02T19:28:29.037915541Z" level=info msg="Start subscribing containerd event" Oct 2 19:28:29.038195 env[1136]: time="2023-10-02T19:28:29.038172447Z" level=info msg="Start recovering state" Oct 2 19:28:29.038377 env[1136]: time="2023-10-02T19:28:29.038359301Z" level=info msg="Start event monitor" Oct 2 19:28:29.038479 env[1136]: time="2023-10-02T19:28:29.038461460Z" level=info msg="Start snapshots syncer" Oct 2 19:28:29.038571 env[1136]: time="2023-10-02T19:28:29.038554669Z" level=info msg="Start cni network conf syncer for default" Oct 2 19:28:29.038675 env[1136]: time="2023-10-02T19:28:29.038656354Z" level=info msg="Start streaming server" Oct 2 19:28:29.046179 polkitd[1174]: Started polkitd version 121 Oct 2 19:28:29.077587 polkitd[1174]: Loading rules from directory /etc/polkit-1/rules.d Oct 2 19:28:29.077683 polkitd[1174]: Loading rules from directory /usr/share/polkit-1/rules.d Oct 2 19:28:29.079772 polkitd[1174]: Finished loading, compiling and executing 2 rules Oct 2 19:28:29.080423 dbus-daemon[1101]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Oct 2 19:28:29.080644 systemd[1]: Started polkit.service. Oct 2 19:28:29.081383 polkitd[1174]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Oct 2 19:28:29.110675 systemd-hostnamed[1151]: Hostname set to (transient) Oct 2 19:28:29.113745 systemd-resolved[1075]: System hostname changed to 'ci-3510-3-0-6b84b3caaf9723f6fd5e.c.flatcar-212911.internal'. Oct 2 19:28:29.159746 tar[1130]: ./vlan Oct 2 19:28:29.277325 tar[1130]: ./portmap Oct 2 19:28:29.386102 tar[1130]: ./host-local Oct 2 19:28:29.474850 tar[1130]: ./vrf Oct 2 19:28:29.566427 tar[1130]: ./bridge Oct 2 19:28:29.683068 tar[1130]: ./tuning Oct 2 19:28:29.780658 tar[1130]: ./firewall Oct 2 19:28:29.864306 systemd[1]: Finished prepare-critools.service. Oct 2 19:28:29.898713 tar[1130]: ./host-device Oct 2 19:28:30.002322 tar[1130]: ./sbr Oct 2 19:28:30.082185 tar[1130]: ./loopback Oct 2 19:28:30.127777 tar[1130]: ./dhcp Oct 2 19:28:30.277006 tar[1130]: ./ptp Oct 2 19:28:30.333246 tar[1130]: ./ipvlan Oct 2 19:28:30.388009 tar[1130]: ./bandwidth Oct 2 19:28:30.428422 sshd_keygen[1131]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Oct 2 19:28:30.470769 systemd[1]: Finished prepare-cni-plugins.service. Oct 2 19:28:30.496934 systemd[1]: Finished sshd-keygen.service. Oct 2 19:28:30.506570 systemd[1]: Starting issuegen.service... Oct 2 19:28:30.516968 systemd[1]: issuegen.service: Deactivated successfully. Oct 2 19:28:30.517217 systemd[1]: Finished issuegen.service. Oct 2 19:28:30.526574 systemd[1]: Starting systemd-user-sessions.service... Oct 2 19:28:30.542073 systemd[1]: Finished systemd-user-sessions.service. Oct 2 19:28:30.554429 systemd[1]: Started getty@tty1.service. Oct 2 19:28:30.564470 systemd[1]: Started serial-getty@ttyS0.service. Oct 2 19:28:30.574382 systemd[1]: Reached target getty.target. Oct 2 19:28:30.691143 locksmithd[1170]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Oct 2 19:28:33.549478 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully. Oct 2 19:28:35.857852 kernel: loop0: detected capacity change from 0 to 2097152 Oct 2 19:28:35.879214 systemd-nspawn[1209]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img. Oct 2 19:28:35.879214 systemd-nspawn[1209]: Press ^] three times within 1s to kill container. Oct 2 19:28:35.900877 kernel: EXT4-fs (loop0): mounted filesystem without journal. Opts: norecovery. 
Quota mode: none. Oct 2 19:28:36.002532 systemd[1]: Started oem-gce.service. Oct 2 19:28:36.003045 systemd[1]: Reached target multi-user.target. Oct 2 19:28:36.005285 systemd[1]: Starting systemd-update-utmp-runlevel.service... Oct 2 19:28:36.016592 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Oct 2 19:28:36.016884 systemd[1]: Finished systemd-update-utmp-runlevel.service. Oct 2 19:28:36.017169 systemd[1]: Startup finished in 1.021s (kernel) + 8.232s (initrd) + 14.972s (userspace) = 24.226s. Oct 2 19:28:36.084102 systemd-nspawn[1209]: + '[' -e /etc/default/instance_configs.cfg.template ']' Oct 2 19:28:36.084102 systemd-nspawn[1209]: + echo -e '[InstanceSetup]\nset_host_keys = false' Oct 2 19:28:36.084384 systemd-nspawn[1209]: + /usr/bin/google_instance_setup Oct 2 19:28:36.752132 instance-setup[1215]: INFO Running google_set_multiqueue. Oct 2 19:28:36.766984 instance-setup[1215]: INFO Set channels for eth0 to 2. Oct 2 19:28:36.770731 instance-setup[1215]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1. Oct 2 19:28:36.772177 instance-setup[1215]: INFO /proc/irq/31/smp_affinity_list: real affinity 0 Oct 2 19:28:36.772667 instance-setup[1215]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1. Oct 2 19:28:36.774168 instance-setup[1215]: INFO /proc/irq/32/smp_affinity_list: real affinity 0 Oct 2 19:28:36.774538 instance-setup[1215]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1. Oct 2 19:28:36.775962 instance-setup[1215]: INFO /proc/irq/33/smp_affinity_list: real affinity 1 Oct 2 19:28:36.776361 instance-setup[1215]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1. Oct 2 19:28:36.777839 instance-setup[1215]: INFO /proc/irq/34/smp_affinity_list: real affinity 1 Oct 2 19:28:36.789306 instance-setup[1215]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus Oct 2 19:28:36.789672 instance-setup[1215]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus Oct 2 19:28:36.830855 systemd-nspawn[1209]: + /usr/bin/google_metadata_script_runner --script-type startup Oct 2 19:28:37.166456 startup-script[1246]: INFO Starting startup scripts. Oct 2 19:28:37.179635 startup-script[1246]: INFO No startup scripts found in metadata. Oct 2 19:28:37.179848 startup-script[1246]: INFO Finished running startup scripts. Oct 2 19:28:37.214975 systemd-nspawn[1209]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM Oct 2 19:28:37.215583 systemd-nspawn[1209]: + daemon_pids=() Oct 2 19:28:37.215583 systemd-nspawn[1209]: + for d in accounts clock_skew network Oct 2 19:28:37.215583 systemd-nspawn[1209]: + daemon_pids+=($!) Oct 2 19:28:37.215583 systemd-nspawn[1209]: + for d in accounts clock_skew network Oct 2 19:28:37.215878 systemd-nspawn[1209]: + daemon_pids+=($!) Oct 2 19:28:37.215878 systemd-nspawn[1209]: + for d in accounts clock_skew network Oct 2 19:28:37.216034 systemd-nspawn[1209]: + daemon_pids+=($!) Oct 2 19:28:37.216034 systemd-nspawn[1209]: + NOTIFY_SOCKET=/run/systemd/notify Oct 2 19:28:37.216034 systemd-nspawn[1209]: + /usr/bin/systemd-notify --ready Oct 2 19:28:37.216549 systemd-nspawn[1209]: + /usr/bin/google_clock_skew_daemon Oct 2 19:28:37.216840 systemd-nspawn[1209]: + /usr/bin/google_network_daemon Oct 2 19:28:37.217290 systemd-nspawn[1209]: + /usr/bin/google_accounts_daemon Oct 2 19:28:37.272651 systemd-nspawn[1209]: + wait -n 36 37 38 Oct 2 19:28:37.684757 systemd[1]: Created slice system-sshd.slice. 
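The google_set_multiqueue steps logged by instance-setup above pin each virtio-net queue's IRQs to a single vCPU and set a matching XPS mask per TX queue. A rough shell equivalent of what it reports (IRQ numbers 31-34 and the two-queue layout are specific to this instance):
    echo 0 > /proc/irq/31/smp_affinity_list             # queue 0 IRQs -> CPU 0
    echo 0 > /proc/irq/32/smp_affinity_list
    echo 1 > /proc/irq/33/smp_affinity_list             # queue 1 IRQs -> CPU 1
    echo 1 > /proc/irq/34/smp_affinity_list
    echo 1 > /sys/class/net/eth0/queues/tx-0/xps_cpus   # CPU mask 0x1 transmits on tx-0
    echo 2 > /sys/class/net/eth0/queues/tx-1/xps_cpus   # CPU mask 0x2 transmits on tx-1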
Oct 2 19:28:37.687542 systemd[1]: Started sshd@0-10.128.0.55:22-147.75.109.163:49584.service. Oct 2 19:28:37.896396 google-clock-skew[1250]: INFO Starting Google Clock Skew daemon. Oct 2 19:28:37.912660 google-networking[1251]: INFO Starting Google Networking daemon. Oct 2 19:28:37.940599 google-clock-skew[1250]: INFO Clock drift token has changed: 0. Oct 2 19:28:37.947804 systemd-nspawn[1209]: hwclock: Cannot access the Hardware Clock via any known method. Oct 2 19:28:37.948008 systemd-nspawn[1209]: hwclock: Use the --verbose option to see the details of our search for an access method. Oct 2 19:28:37.948683 google-clock-skew[1250]: WARNING Failed to sync system time with hardware clock. Oct 2 19:28:38.033840 sshd[1256]: Accepted publickey for core from 147.75.109.163 port 49584 ssh2: RSA SHA256:uTzGMmjknFouvM49fa8EYeiBAz5hxIwhUzRriwXlGUg Oct 2 19:28:38.037912 sshd[1256]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:28:38.044442 groupadd[1264]: group added to /etc/group: name=google-sudoers, GID=1000 Oct 2 19:28:38.050646 groupadd[1264]: group added to /etc/gshadow: name=google-sudoers Oct 2 19:28:38.057376 systemd[1]: Created slice user-500.slice. Oct 2 19:28:38.059391 systemd[1]: Starting user-runtime-dir@500.service... Oct 2 19:28:38.063666 systemd-logind[1121]: New session 1 of user core. Oct 2 19:28:38.066082 groupadd[1264]: new group: name=google-sudoers, GID=1000 Oct 2 19:28:38.077507 systemd[1]: Finished user-runtime-dir@500.service. Oct 2 19:28:38.081086 systemd[1]: Starting user@500.service... Oct 2 19:28:38.088699 google-accounts[1249]: INFO Starting Google Accounts daemon. Oct 2 19:28:38.099635 (systemd)[1272]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:28:38.124749 google-accounts[1249]: WARNING OS Login not installed. Oct 2 19:28:38.125944 google-accounts[1249]: INFO Creating a new user account for 0. Oct 2 19:28:38.132449 systemd-nspawn[1209]: useradd: invalid user name '0': use --badname to ignore Oct 2 19:28:38.133272 google-accounts[1249]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3.. Oct 2 19:28:38.212375 systemd[1272]: Queued start job for default target default.target. Oct 2 19:28:38.213322 systemd[1272]: Reached target paths.target. Oct 2 19:28:38.213363 systemd[1272]: Reached target sockets.target. Oct 2 19:28:38.213384 systemd[1272]: Reached target timers.target. Oct 2 19:28:38.213404 systemd[1272]: Reached target basic.target. Oct 2 19:28:38.213485 systemd[1272]: Reached target default.target. Oct 2 19:28:38.213540 systemd[1272]: Startup finished in 102ms. Oct 2 19:28:38.214456 systemd[1]: Started user@500.service. Oct 2 19:28:38.216001 systemd[1]: Started session-1.scope. Oct 2 19:28:38.438690 systemd[1]: Started sshd@1-10.128.0.55:22-147.75.109.163:49600.service. Oct 2 19:28:38.725950 sshd[1284]: Accepted publickey for core from 147.75.109.163 port 49600 ssh2: RSA SHA256:uTzGMmjknFouvM49fa8EYeiBAz5hxIwhUzRriwXlGUg Oct 2 19:28:38.727879 sshd[1284]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:28:38.734317 systemd-logind[1121]: New session 2 of user core. Oct 2 19:28:38.735160 systemd[1]: Started session-2.scope. Oct 2 19:28:38.945086 sshd[1284]: pam_unix(sshd:session): session closed for user core Oct 2 19:28:38.949147 systemd[1]: sshd@1-10.128.0.55:22-147.75.109.163:49600.service: Deactivated successfully. 
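The google-accounts warning above appears to come from the daemon parsing "0" as a username out of the ssh-keys metadata (likely a malformed entry) and then running the useradd command quoted in its message; shadow-utils rejects "0" as a login name, which is the exit status 3 the guest agent reports. The failure can be reproduced directly inside the oem-gce container:
    useradd -m -s /bin/bash -p '*' 0    # "useradd: invalid user name '0'", exit status 3, matching the warning above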
Oct 2 19:28:38.950245 systemd[1]: session-2.scope: Deactivated successfully. Oct 2 19:28:38.951099 systemd-logind[1121]: Session 2 logged out. Waiting for processes to exit. Oct 2 19:28:38.952325 systemd-logind[1121]: Removed session 2. Oct 2 19:28:38.992981 systemd[1]: Started sshd@2-10.128.0.55:22-147.75.109.163:49610.service. Oct 2 19:28:39.286255 sshd[1291]: Accepted publickey for core from 147.75.109.163 port 49610 ssh2: RSA SHA256:uTzGMmjknFouvM49fa8EYeiBAz5hxIwhUzRriwXlGUg Oct 2 19:28:39.287922 sshd[1291]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:28:39.294585 systemd[1]: Started session-3.scope. Oct 2 19:28:39.295233 systemd-logind[1121]: New session 3 of user core. Oct 2 19:28:39.498249 sshd[1291]: pam_unix(sshd:session): session closed for user core Oct 2 19:28:39.502532 systemd[1]: sshd@2-10.128.0.55:22-147.75.109.163:49610.service: Deactivated successfully. Oct 2 19:28:39.503571 systemd[1]: session-3.scope: Deactivated successfully. Oct 2 19:28:39.504451 systemd-logind[1121]: Session 3 logged out. Waiting for processes to exit. Oct 2 19:28:39.506127 systemd-logind[1121]: Removed session 3. Oct 2 19:28:39.544378 systemd[1]: Started sshd@3-10.128.0.55:22-147.75.109.163:49626.service. Oct 2 19:28:39.834229 sshd[1297]: Accepted publickey for core from 147.75.109.163 port 49626 ssh2: RSA SHA256:uTzGMmjknFouvM49fa8EYeiBAz5hxIwhUzRriwXlGUg Oct 2 19:28:39.836122 sshd[1297]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:28:39.842753 systemd[1]: Started session-4.scope. Oct 2 19:28:39.843427 systemd-logind[1121]: New session 4 of user core. Oct 2 19:28:40.049853 sshd[1297]: pam_unix(sshd:session): session closed for user core Oct 2 19:28:40.053973 systemd[1]: sshd@3-10.128.0.55:22-147.75.109.163:49626.service: Deactivated successfully. Oct 2 19:28:40.055086 systemd[1]: session-4.scope: Deactivated successfully. Oct 2 19:28:40.055992 systemd-logind[1121]: Session 4 logged out. Waiting for processes to exit. Oct 2 19:28:40.057290 systemd-logind[1121]: Removed session 4. Oct 2 19:28:40.096262 systemd[1]: Started sshd@4-10.128.0.55:22-147.75.109.163:49628.service. Oct 2 19:28:40.387504 sshd[1303]: Accepted publickey for core from 147.75.109.163 port 49628 ssh2: RSA SHA256:uTzGMmjknFouvM49fa8EYeiBAz5hxIwhUzRriwXlGUg Oct 2 19:28:40.389187 sshd[1303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:28:40.395765 systemd[1]: Started session-5.scope. Oct 2 19:28:40.396405 systemd-logind[1121]: New session 5 of user core. Oct 2 19:28:40.589200 sudo[1306]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Oct 2 19:28:40.589592 sudo[1306]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:28:40.599078 dbus-daemon[1101]: \xd0}\x8c\x8fpU: received setenforce notice (enforcing=2018511680) Oct 2 19:28:40.601307 sudo[1306]: pam_unix(sudo:session): session closed for user root Oct 2 19:28:40.646329 sshd[1303]: pam_unix(sshd:session): session closed for user core Oct 2 19:28:40.651439 systemd[1]: sshd@4-10.128.0.55:22-147.75.109.163:49628.service: Deactivated successfully. Oct 2 19:28:40.652703 systemd[1]: session-5.scope: Deactivated successfully. Oct 2 19:28:40.653571 systemd-logind[1121]: Session 5 logged out. Waiting for processes to exit. Oct 2 19:28:40.655119 systemd-logind[1121]: Removed session 5. Oct 2 19:28:40.693172 systemd[1]: Started sshd@5-10.128.0.55:22-147.75.109.163:49640.service. 
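Session 5's only command above is sudo setenforce 1, which switches SELinux to enforcing mode for the current boot; the mangled dbus-daemon line that follows it looks like dbus logging the resulting setenforce notice with a garbled sender name. To check or repeat the step interactively:
    getenforce            # prints the current mode (Enforcing / Permissive)
    sudo setenforce 1     # enforce until the next boot, as session 5 did above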
Oct 2 19:28:40.986318 sshd[1310]: Accepted publickey for core from 147.75.109.163 port 49640 ssh2: RSA SHA256:uTzGMmjknFouvM49fa8EYeiBAz5hxIwhUzRriwXlGUg Oct 2 19:28:40.987905 sshd[1310]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:28:40.994733 systemd[1]: Started session-6.scope. Oct 2 19:28:40.995727 systemd-logind[1121]: New session 6 of user core. Oct 2 19:28:41.167175 sudo[1314]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Oct 2 19:28:41.167566 sudo[1314]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:28:41.172001 sudo[1314]: pam_unix(sudo:session): session closed for user root Oct 2 19:28:41.184561 sudo[1313]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Oct 2 19:28:41.184966 sudo[1313]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:28:41.198429 systemd[1]: Stopping audit-rules.service... Oct 2 19:28:41.221042 kernel: kauditd_printk_skb: 181 callbacks suppressed Oct 2 19:28:41.221210 kernel: audit: type=1305 audit(1696274921.199:164): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:28:41.199000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Oct 2 19:28:41.201357 systemd[1]: audit-rules.service: Deactivated successfully. Oct 2 19:28:41.221521 auditctl[1317]: No rules Oct 2 19:28:41.201609 systemd[1]: Stopped audit-rules.service. Oct 2 19:28:41.207970 systemd[1]: Starting audit-rules.service... Oct 2 19:28:41.199000 audit[1317]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe21015110 a2=420 a3=0 items=0 ppid=1 pid=1317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:41.199000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:28:41.242476 systemd[1]: Finished audit-rules.service. Oct 2 19:28:41.254664 augenrules[1334]: No rules Oct 2 19:28:41.256778 sudo[1313]: pam_unix(sudo:session): session closed for user root Oct 2 19:28:41.262398 kernel: audit: type=1300 audit(1696274921.199:164): arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffe21015110 a2=420 a3=0 items=0 ppid=1 pid=1317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:41.262523 kernel: audit: type=1327 audit(1696274921.199:164): proctitle=2F7362696E2F617564697463746C002D44 Oct 2 19:28:41.262563 kernel: audit: type=1131 audit(1696274921.200:165): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:41.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:41.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:41.305934 kernel: audit: type=1130 audit(1696274921.242:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:41.306086 kernel: audit: type=1106 audit(1696274921.256:167): pid=1313 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:28:41.256000 audit[1313]: USER_END pid=1313 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:28:41.307267 sshd[1310]: pam_unix(sshd:session): session closed for user core Oct 2 19:28:41.314466 systemd-logind[1121]: Session 6 logged out. Waiting for processes to exit. Oct 2 19:28:41.316853 systemd[1]: sshd@5-10.128.0.55:22-147.75.109.163:49640.service: Deactivated successfully. Oct 2 19:28:41.317946 systemd[1]: session-6.scope: Deactivated successfully. Oct 2 19:28:41.320046 systemd-logind[1121]: Removed session 6. Oct 2 19:28:41.256000 audit[1313]: CRED_DISP pid=1313 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:28:41.353912 kernel: audit: type=1104 audit(1696274921.256:168): pid=1313 uid=500 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:28:41.355480 systemd[1]: Started sshd@6-10.128.0.55:22-147.75.109.163:49646.service. Oct 2 19:28:41.310000 audit[1310]: USER_END pid=1310 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:28:41.310000 audit[1310]: CRED_DISP pid=1310 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:28:41.391075 kernel: audit: type=1106 audit(1696274921.310:169): pid=1310 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:28:41.391159 kernel: audit: type=1104 audit(1696274921.310:170): pid=1310 uid=0 auid=500 ses=6 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:28:41.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.55:22-147.75.109.163:49640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:41.439061 kernel: audit: type=1131 audit(1696274921.316:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-10.128.0.55:22-147.75.109.163:49640 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:41.355000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.55:22-147.75.109.163:49646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:41.645000 audit[1340]: USER_ACCT pid=1340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_oslogin_admin,pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:28:41.648068 sshd[1340]: Accepted publickey for core from 147.75.109.163 port 49646 ssh2: RSA SHA256:uTzGMmjknFouvM49fa8EYeiBAz5hxIwhUzRriwXlGUg Oct 2 19:28:41.647000 audit[1340]: CRED_ACQ pid=1340 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:28:41.647000 audit[1340]: SYSCALL arch=c000003e syscall=1 success=yes exit=3 a0=5 a1=7ffc9703a1a0 a2=3 a3=0 items=0 ppid=1 pid=1340 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=7 comm="sshd" exe="/usr/sbin/sshd" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:41.647000 audit: PROCTITLE proctitle=737368643A20636F7265205B707269765D Oct 2 19:28:41.650020 sshd[1340]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Oct 2 19:28:41.656886 systemd[1]: Started session-7.scope. Oct 2 19:28:41.657678 systemd-logind[1121]: New session 7 of user core. Oct 2 19:28:41.664000 audit[1340]: USER_START pid=1340 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:28:41.666000 audit[1342]: CRED_ACQ pid=1342 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:28:41.824000 audit[1343]: USER_ACCT pid=1343 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:28:41.825078 sudo[1343]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Oct 2 19:28:41.824000 audit[1343]: CRED_REFR pid=1343 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:28:41.825482 sudo[1343]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Oct 2 19:28:41.827000 audit[1343]: USER_START pid=1343 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:42.416380 systemd[1]: Reloading. Oct 2 19:28:42.527106 /usr/lib/systemd/system-generators/torcx-generator[1373]: time="2023-10-02T19:28:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]" Oct 2 19:28:42.527670 /usr/lib/systemd/system-generators/torcx-generator[1373]: time="2023-10-02T19:28:42Z" level=info msg="torcx already run" Oct 2 19:28:42.630191 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Oct 2 19:28:42.630219 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:28:42.654090 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:28:42.744000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.744000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.744000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.744000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.744000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.744000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.744000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.744000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.744000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.745000 audit: BPF prog-id=37 op=LOAD Oct 2 19:28:42.745000 audit: BPF prog-id=28 op=UNLOAD Oct 2 19:28:42.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:28:42.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.746000 audit: BPF prog-id=38 op=LOAD Oct 2 19:28:42.746000 audit: BPF prog-id=27 op=UNLOAD Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit: BPF prog-id=39 op=LOAD Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.777000 audit: BPF prog-id=40 op=LOAD Oct 2 19:28:42.777000 audit: BPF prog-id=21 op=UNLOAD Oct 2 19:28:42.777000 audit: BPF prog-id=22 op=UNLOAD Oct 2 19:28:42.778000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.778000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.778000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.778000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.778000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.778000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.778000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.778000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.778000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.778000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.778000 audit: BPF prog-id=41 op=LOAD Oct 2 19:28:42.778000 audit: BPF prog-id=23 op=UNLOAD Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit: BPF prog-id=42 op=LOAD Oct 2 19:28:42.780000 audit: BPF prog-id=24 op=UNLOAD Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit: BPF prog-id=43 op=LOAD Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.780000 audit: BPF prog-id=44 op=LOAD Oct 2 19:28:42.780000 audit: BPF prog-id=25 op=UNLOAD Oct 2 
19:28:42.780000 audit: BPF prog-id=26 op=UNLOAD Oct 2 19:28:42.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.781000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.781000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit: BPF prog-id=45 op=LOAD Oct 2 19:28:42.782000 audit: BPF prog-id=18 op=UNLOAD Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" 
capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit: BPF prog-id=46 op=LOAD Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.782000 audit: BPF prog-id=47 op=LOAD Oct 2 19:28:42.782000 audit: BPF prog-id=19 op=UNLOAD Oct 2 19:28:42.782000 audit: BPF prog-id=20 op=UNLOAD Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit: BPF prog-id=48 op=LOAD Oct 2 19:28:42.785000 audit: BPF prog-id=29 op=UNLOAD Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit: BPF prog-id=49 op=LOAD Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.785000 audit: BPF prog-id=50 op=LOAD Oct 2 19:28:42.786000 audit: BPF prog-id=30 op=UNLOAD Oct 2 19:28:42.786000 audit: BPF prog-id=31 op=UNLOAD Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit: BPF prog-id=51 op=LOAD Oct 2 19:28:42.787000 audit: BPF prog-id=32 op=UNLOAD Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit: BPF prog-id=52 op=LOAD Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.787000 audit: BPF prog-id=53 op=LOAD Oct 2 19:28:42.787000 audit: BPF prog-id=33 op=UNLOAD Oct 2 19:28:42.787000 audit: BPF prog-id=34 op=UNLOAD Oct 2 19:28:42.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.789000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.789000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.789000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.789000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.789000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.789000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:42.789000 audit: BPF prog-id=54 op=LOAD Oct 2 19:28:42.789000 audit: BPF prog-id=35 op=UNLOAD Oct 2 19:28:42.805475 systemd[1]: Starting systemd-networkd-wait-online.service... Oct 2 19:28:42.813844 systemd[1]: Finished systemd-networkd-wait-online.service. Oct 2 19:28:42.813000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:42.814685 systemd[1]: Reached target network-online.target. Oct 2 19:28:42.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success'
Oct 2 19:28:42.817309 systemd[1]: Started kubelet.service.
Oct 2 19:28:42.837726 systemd[1]: Starting coreos-metadata.service...
Oct 2 19:28:42.918542 coreos-metadata[1424]: Oct 02 19:28:42.918 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/hostname: Attempt #1
Oct 2 19:28:42.923057 coreos-metadata[1424]: Oct 02 19:28:42.923 INFO Fetch successful
Oct 2 19:28:42.923239 coreos-metadata[1424]: Oct 02 19:28:42.923 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/access-configs/0/external-ip: Attempt #1
Oct 2 19:28:42.924007 coreos-metadata[1424]: Oct 02 19:28:42.923 INFO Fetch successful
Oct 2 19:28:42.924111 coreos-metadata[1424]: Oct 02 19:28:42.924 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/network-interfaces/0/ip: Attempt #1
Oct 2 19:28:42.924788 coreos-metadata[1424]: Oct 02 19:28:42.924 INFO Fetch successful
Oct 2 19:28:42.924920 coreos-metadata[1424]: Oct 02 19:28:42.924 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/machine-type: Attempt #1
Oct 2 19:28:42.925616 coreos-metadata[1424]: Oct 02 19:28:42.925 INFO Fetch successful
Oct 2 19:28:42.926104 kubelet[1416]: E1002 19:28:42.926049 1416 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml"
Oct 2 19:28:42.930292 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Oct 2 19:28:42.930000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed'
Oct 2 19:28:42.930516 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Oct 2 19:28:42.937884 systemd[1]: Finished coreos-metadata.service.
Oct 2 19:28:42.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=coreos-metadata comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:43.389046 systemd[1]: Stopped kubelet.service.
Oct 2 19:28:43.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:43.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Oct 2 19:28:43.411625 systemd[1]: Reloading.
Oct 2 19:28:43.521849 /usr/lib/systemd/system-generators/torcx-generator[1479]: time="2023-10-02T19:28:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.0 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.0 /var/lib/torcx/store]"
Oct 2 19:28:43.522425 /usr/lib/systemd/system-generators/torcx-generator[1479]: time="2023-10-02T19:28:43Z" level=info msg="torcx already run"
Oct 2 19:28:43.616610 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
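The coreos-metadata entries above walk the GCE instance metadata service at 169.254.169.254. A minimal sketch of the same lookups, assuming only the standard requirement that metadata requests carry a "Metadata-Flavor: Google" header; the paths are the ones shown in the log, and the snippet only returns data when run on a GCE instance:

    # Sketch of the GCE metadata lookups seen in the coreos-metadata entries above.
    # Assumes it runs on a GCE instance; the "Metadata-Flavor: Google" header is
    # required by the metadata server, and the paths are the ones the log shows.
    import urllib.request

    METADATA_BASE = "http://169.254.169.254/computeMetadata/v1/"
    PATHS = [
        "instance/hostname",
        "instance/network-interfaces/0/access-configs/0/external-ip",
        "instance/network-interfaces/0/ip",
        "instance/machine-type",
    ]

    def fetch(path):
        # Every request must carry the Metadata-Flavor header or the server rejects it.
        req = urllib.request.Request(METADATA_BASE + path,
                                     headers={"Metadata-Flavor": "Google"})
        with urllib.request.urlopen(req, timeout=5) as resp:
            return resp.read().decode()

    if __name__ == "__main__":
        for p in PATHS:
            print(p, "=>", fetch(p))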
Oct 2 19:28:43.616638 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Oct 2 19:28:43.640554 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Oct 2 19:28:43.733000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.733000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.733000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.733000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.733000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.733000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.733000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.733000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.733000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.734000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.734000 audit: BPF prog-id=55 op=LOAD Oct 2 19:28:43.734000 audit: BPF prog-id=37 op=UNLOAD Oct 2 19:28:43.735000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.735000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.735000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.735000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.735000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.735000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.735000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.735000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.735000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.735000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.735000 audit: BPF prog-id=56 op=LOAD Oct 2 19:28:43.735000 audit: BPF prog-id=38 op=UNLOAD Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit: BPF prog-id=57 op=LOAD Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 
19:28:43.738000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.738000 audit: BPF prog-id=58 op=LOAD Oct 2 19:28:43.738000 audit: BPF prog-id=39 op=UNLOAD Oct 2 19:28:43.738000 audit: BPF prog-id=40 op=UNLOAD Oct 2 19:28:43.739000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.739000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.739000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.739000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.739000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.739000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.739000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.739000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.739000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.739000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.739000 audit: BPF prog-id=59 op=LOAD Oct 2 19:28:43.739000 audit: BPF prog-id=41 op=UNLOAD Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit: BPF prog-id=60 op=LOAD Oct 2 19:28:43.741000 audit: BPF prog-id=42 op=UNLOAD Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit: BPF prog-id=61 op=LOAD Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.741000 audit: BPF prog-id=62 op=LOAD Oct 2 19:28:43.741000 audit: BPF prog-id=43 op=UNLOAD Oct 2 19:28:43.741000 audit: BPF prog-id=44 op=UNLOAD Oct 2 19:28:43.742000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.742000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.742000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.742000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.742000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.742000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.742000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.742000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.742000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.742000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.742000 audit: BPF prog-id=63 op=LOAD Oct 2 19:28:43.743000 audit: BPF prog-id=45 op=UNLOAD Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit: BPF prog-id=64 op=LOAD Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.743000 audit: BPF prog-id=65 op=LOAD Oct 2 19:28:43.743000 audit: BPF prog-id=46 op=UNLOAD Oct 2 19:28:43.743000 audit: BPF prog-id=47 op=UNLOAD Oct 2 19:28:43.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.745000 audit[1]: AVC avc: 
denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit: BPF prog-id=66 op=LOAD Oct 2 19:28:43.746000 audit: BPF prog-id=48 op=UNLOAD Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit: BPF prog-id=67 op=LOAD Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.746000 audit: BPF prog-id=68 op=LOAD Oct 2 19:28:43.746000 audit: BPF prog-id=49 op=UNLOAD Oct 2 19:28:43.746000 audit: BPF prog-id=50 op=UNLOAD Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit: BPF prog-id=69 op=LOAD Oct 2 19:28:43.748000 audit: BPF prog-id=51 op=UNLOAD Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 
comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit: BPF prog-id=70 op=LOAD Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.748000 audit: BPF prog-id=71 op=LOAD Oct 2 19:28:43.748000 audit: BPF prog-id=52 op=UNLOAD Oct 2 19:28:43.748000 audit: BPF prog-id=53 op=UNLOAD Oct 2 19:28:43.749000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" 
capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.749000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.749000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.749000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.749000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.749000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.749000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.749000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.749000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.750000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:43.750000 audit: BPF prog-id=72 op=LOAD Oct 2 19:28:43.750000 audit: BPF prog-id=54 op=UNLOAD Oct 2 19:28:43.772336 systemd[1]: Started kubelet.service. Oct 2 19:28:43.772000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:43.835160 kubelet[1523]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:28:43.835589 kubelet[1523]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Oct 2 19:28:43.835652 kubelet[1523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:28:43.835795 kubelet[1523]: I1002 19:28:43.835763 1523 server.go:200] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Oct 2 19:28:43.837361 kubelet[1523]: Flag --container-runtime has been deprecated, will be removed in 1.27 as the only valid value is 'remote' Oct 2 19:28:43.837486 kubelet[1523]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. 
Oct 2 19:28:43.837566 kubelet[1523]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Oct 2 19:28:44.831723 kubelet[1523]: I1002 19:28:44.831679 1523 server.go:413] "Kubelet version" kubeletVersion="v1.25.10" Oct 2 19:28:44.831723 kubelet[1523]: I1002 19:28:44.831715 1523 server.go:415] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Oct 2 19:28:44.832121 kubelet[1523]: I1002 19:28:44.832076 1523 server.go:825] "Client rotation is on, will bootstrap in background" Oct 2 19:28:44.839088 kubelet[1523]: I1002 19:28:44.839054 1523 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Oct 2 19:28:44.844934 kubelet[1523]: I1002 19:28:44.844884 1523 server.go:660] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Oct 2 19:28:44.845253 kubelet[1523]: I1002 19:28:44.845196 1523 container_manager_linux.go:262] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Oct 2 19:28:44.845356 kubelet[1523]: I1002 19:28:44.845314 1523 container_manager_linux.go:267] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container ExperimentalCPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none} Oct 2 19:28:44.845356 kubelet[1523]: I1002 19:28:44.845344 1523 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Oct 2 19:28:44.845596 kubelet[1523]: I1002 19:28:44.845365 1523 container_manager_linux.go:302] "Creating device plugin manager" devicePluginEnabled=true Oct 2 19:28:44.845596 kubelet[1523]: I1002 19:28:44.845568 1523 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:28:44.849118 kubelet[1523]: I1002 19:28:44.849069 1523 kubelet.go:381] "Attempting to sync node with API server" Oct 2 19:28:44.849118 kubelet[1523]: I1002 19:28:44.849102 1523 kubelet.go:270] "Adding static pod path" path="/etc/kubernetes/manifests" Oct 2 19:28:44.849315 kubelet[1523]: I1002 19:28:44.849132 1523 kubelet.go:281] "Adding apiserver pod source" Oct 2 19:28:44.849315 kubelet[1523]: I1002 19:28:44.849149 1523 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Oct 2 19:28:44.849989 kubelet[1523]: E1002 
19:28:44.849947 1523 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:44.850092 kubelet[1523]: E1002 19:28:44.850046 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:44.851114 kubelet[1523]: I1002 19:28:44.851090 1523 kuberuntime_manager.go:240] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Oct 2 19:28:44.854986 kubelet[1523]: W1002 19:28:44.854946 1523 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Oct 2 19:28:44.855475 kubelet[1523]: I1002 19:28:44.855433 1523 server.go:1175] "Started kubelet" Oct 2 19:28:44.860454 kubelet[1523]: E1002 19:28:44.860426 1523 cri_stats_provider.go:452] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Oct 2 19:28:44.860454 kubelet[1523]: E1002 19:28:44.860464 1523 kubelet.go:1317] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Oct 2 19:28:44.860960 kubelet[1523]: I1002 19:28:44.860936 1523 server.go:155] "Starting to listen" address="0.0.0.0" port=10250 Oct 2 19:28:44.860000 audit[1523]: AVC avc: denied { mac_admin } for pid=1523 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:44.860000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:28:44.860000 audit[1523]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c0000cf8f0 a1=c0007e5da0 a2=c0000cf8c0 a3=25 items=0 ppid=1 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:44.860000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:28:44.861912 kubelet[1523]: I1002 19:28:44.861880 1523 server.go:438] "Adding debug handlers to kubelet server" Oct 2 19:28:44.862239 kubelet[1523]: I1002 19:28:44.862221 1523 kubelet.go:1274] "Unprivileged containerized plugins might not work, could not set selinux context on plugin registration dir" path="/var/lib/kubelet/plugins_registry" err="setxattr /var/lib/kubelet/plugins_registry: invalid argument" Oct 2 19:28:44.861000 audit[1523]: AVC avc: denied { mac_admin } for pid=1523 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:44.861000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:28:44.861000 audit[1523]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000474fa0 a1=c0007e5db8 a2=c0000cf980 a3=25 items=0 ppid=1 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:44.861000 audit: PROCTITLE 
proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:28:44.864109 kubelet[1523]: I1002 19:28:44.863995 1523 kubelet.go:1278] "Unprivileged containerized plugins might not work, could not set selinux context on plugins dir" path="/var/lib/kubelet/plugins" err="setxattr /var/lib/kubelet/plugins: invalid argument" Oct 2 19:28:44.864109 kubelet[1523]: I1002 19:28:44.864093 1523 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Oct 2 19:28:44.865651 kubelet[1523]: I1002 19:28:44.865623 1523 volume_manager.go:293] "Starting Kubelet Volume Manager" Oct 2 19:28:44.866006 kubelet[1523]: I1002 19:28:44.865931 1523 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Oct 2 19:28:44.869562 kubelet[1523]: W1002 19:28:44.866558 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:28:44.869562 kubelet[1523]: E1002 19:28:44.866631 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:28:44.869562 kubelet[1523]: W1002 19:28:44.866711 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.128.0.55" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:28:44.869562 kubelet[1523]: E1002 19:28:44.866726 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.55" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:28:44.869849 kubelet[1523]: E1002 19:28:44.866756 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e9168b903", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 855408899, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 855408899, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
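[Editor's note] The deprecation warning at the top of this kubelet start-up says --volume-plugin-dir should move into the file passed via --config, and the NodeConfig dump lists the hard-eviction thresholds in effect (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%). A minimal sketch of an equivalent KubeletConfiguration file, assuming the kubelet.config.k8s.io/v1beta1 field names volumePluginDir and evictionHard and that the kubelet accepts a JSON (or YAML) config file; the directory is the Flexvolume path the kubelet recreates later in this log:

    import json

    # Sketch of a KubeletConfiguration matching the flags/NodeConfig logged above.
    # Field names assume the kubelet.config.k8s.io/v1beta1 schema.
    kubelet_config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "volumePluginDir": "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/",
        "evictionHard": {  # mirrors HardEvictionThresholds in the NodeConfig dump
            "memory.available": "100Mi",
            "nodefs.available": "10%",
            "nodefs.inodesFree": "5%",
            "imagefs.available": "15%",
        },
    }

    with open("kubelet-config.json", "w") as f:
        json.dump(kubelet_config, f, indent=2)  # point the kubelet at this file via --config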
Oct 2 19:28:44.869849 kubelet[1523]: E1002 19:28:44.867151 1523 controller.go:144] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.128.0.55" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:28:44.869849 kubelet[1523]: W1002 19:28:44.867206 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:28:44.870103 kubelet[1523]: E1002 19:28:44.867222 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:28:44.870103 kubelet[1523]: E1002 19:28:44.868277 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e91b5a470", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 860449904, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 860449904, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
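[Editor's note] Every list/get/create above is rejected for User "system:anonymous", which is what the API server reports when a request carries no usable credentials; here that is expected while client-certificate bootstrap ("Client rotation is on, will bootstrap in background") has not finished. One way to check from the node whether a given kubeconfig is actually authorized for what the kubelet needs is a SelfSubjectAccessReview. A sketch using the official kubernetes Python client; the kubeconfig path is the bootstrap file visible (hex-encoded) in the kubelet's audit PROCTITLE records, and the lease attributes mirror the error above:

    from kubernetes import client, config

    # Ask the API server whether these credentials may read the node's lease,
    # mirroring the "leases.coordination.k8s.io ... is forbidden" error above.
    config.load_kube_config(config_file="/etc/kubernetes/bootstrap-kubelet.conf")

    review = client.V1SelfSubjectAccessReview(
        spec=client.V1SelfSubjectAccessReviewSpec(
            resource_attributes=client.V1ResourceAttributes(
                group="coordination.k8s.io",
                resource="leases",
                verb="get",
                namespace="kube-node-lease",
                name="10.128.0.55",
            )
        )
    )
    resp = client.AuthorizationV1Api().create_self_subject_access_review(review)
    print("allowed:", resp.status.allowed, "| reason:", resp.status.reason)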
Oct 2 19:28:44.870103 kubelet[1523]: E1002 19:28:44.868485 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:44.920000 kubelet[1523]: I1002 19:28:44.919965 1523 cpu_manager.go:213] "Starting CPU manager" policy="none" Oct 2 19:28:44.920000 kubelet[1523]: I1002 19:28:44.919994 1523 cpu_manager.go:214] "Reconciling" reconcilePeriod="10s" Oct 2 19:28:44.920205 kubelet[1523]: I1002 19:28:44.920022 1523 state_mem.go:36] "Initialized new in-memory state store" Oct 2 19:28:44.920542 kubelet[1523]: E1002 19:28:44.920437 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e952814df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.55 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918281439, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918281439, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:44.921676 kubelet[1523]: E1002 19:28:44.921542 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95284a47", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.55 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918295111, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918295111, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
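[Editor's note] The "Container runtime network not ready ... cni plugin not initialized" status above just means containerd has not found a CNI network config yet; it normally clears once a CNI plugin (or the DaemonSet that ships one) drops a config file on the node. Purely as an illustrative sketch, this writes a minimal bridge/host-local conflist of the kind containerd loads from /etc/cni/net.d by default; the network name and subnet are placeholders, not values from this node:

    import json

    # Illustrative CNI network config (name and subnet are made up for the example).
    conflist = {
        "cniVersion": "0.3.1",
        "name": "examplenet",
        "plugins": [
            {
                "type": "bridge",
                "bridge": "cni0",
                "isGateway": True,
                "ipMasq": True,
                "ipam": {
                    "type": "host-local",
                    "ranges": [[{"subnet": "10.85.0.0/16"}]],
                    "routes": [{"dst": "0.0.0.0/0"}],
                },
            },
            {"type": "portmap", "capabilities": {"portMappings": True}},
        ],
    }

    with open("/etc/cni/net.d/10-examplenet.conflist", "w") as f:
        json.dump(conflist, f, indent=2)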
Oct 2 19:28:44.923547 kubelet[1523]: E1002 19:28:44.923450 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95285ebf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.55 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918300351, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918300351, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:44.925955 kubelet[1523]: I1002 19:28:44.925929 1523 policy_none.go:49] "None policy: Start" Oct 2 19:28:44.930248 kubelet[1523]: I1002 19:28:44.930227 1523 memory_manager.go:168] "Starting memorymanager" policy="None" Oct 2 19:28:44.930413 kubelet[1523]: I1002 19:28:44.930398 1523 state_mem.go:35] "Initializing new in-memory state store" Oct 2 19:28:44.939614 systemd[1]: Created slice kubepods.slice. Oct 2 19:28:44.945000 audit[1540]: NETFILTER_CFG table=mangle:2 family=2 entries=2 op=nft_register_chain pid=1540 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:44.945000 audit[1540]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffe82101d10 a2=0 a3=7ffe82101cfc items=0 ppid=1523 pid=1540 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:44.945000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:28:44.947000 audit[1543]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1543 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:44.947000 audit[1543]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7ffe24b26480 a2=0 a3=7ffe24b2646c items=0 ppid=1523 pid=1543 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:44.947000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:28:44.949718 systemd[1]: Created slice kubepods-besteffort.slice. Oct 2 19:28:44.959711 systemd[1]: Created slice kubepods-burstable.slice. 
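[Editor's note] The audit PROCTITLE fields in this section are the command lines of the kubelet and its iptables/ip6tables children, hex-encoded with NUL-separated arguments. A small decoder makes them readable; the sample below is the pid-1540 record above (note the kubelet's own PROCTITLE entries are truncated in this log, so they only decode up to the cut):

    # Decode an audit PROCTITLE value: hex-encoded argv with NUL separators.
    def decode_proctitle(hex_args: str) -> str:
        return " ".join(a.decode() for a in bytes.fromhex(hex_args).split(b"\x00") if a)

    sample = ("69707461626C6573002D770035002D5700313030303030"
              "002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65")
    print(decode_proctitle(sample))
    # -> iptables -w 5 -W 100000 -N KUBE-IPTABLES-HINT -t mangle

Applied to the records that follow, these decode to the kubelet's usual chain setup: KUBE-FIREWALL created in the filter table and inserted into INPUT and OUTPUT, KUBE-MARK-DROP, KUBE-MARK-MASQ and KUBE-POSTROUTING in nat, plus the KUBE-KUBELET-CANARY chains, done first with iptables (IPv4) and then repeated with ip6tables (IPv6).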
Oct 2 19:28:44.962085 kubelet[1523]: I1002 19:28:44.962054 1523 manager.go:447] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Oct 2 19:28:44.961000 audit[1523]: AVC avc: denied { mac_admin } for pid=1523 comm="kubelet" capability=33 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:28:44.961000 audit: SELINUX_ERR op=setxattr invalid_context="system_u:object_r:container_file_t:s0" Oct 2 19:28:44.961000 audit[1523]: SYSCALL arch=c000003e syscall=188 success=no exit=-22 a0=c000bb9110 a1=c0008c3d28 a2=c000bb90e0 a3=25 items=0 ppid=1 pid=1523 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="kubelet" exe="/opt/bin/kubelet" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:44.961000 audit: PROCTITLE proctitle=2F6F70742F62696E2F6B7562656C6574002D2D626F6F7473747261702D6B756265636F6E6669673D2F6574632F6B756265726E657465732F626F6F7473747261702D6B7562656C65742E636F6E66002D2D6B756265636F6E6669673D2F6574632F6B756265726E657465732F6B7562656C65742E636F6E66002D2D636F6E6669 Oct 2 19:28:44.963267 kubelet[1523]: I1002 19:28:44.963226 1523 server.go:86] "Unprivileged containerized plugins might not work. Could not set selinux context on socket dir" path="/var/lib/kubelet/device-plugins/" err="setxattr /var/lib/kubelet/device-plugins/: invalid argument" Oct 2 19:28:44.964052 kubelet[1523]: I1002 19:28:44.963629 1523 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Oct 2 19:28:44.965029 kubelet[1523]: E1002 19:28:44.965003 1523 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.55\" not found" Oct 2 19:28:44.966655 kubelet[1523]: E1002 19:28:44.966630 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:44.967246 kubelet[1523]: I1002 19:28:44.967221 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.55" Oct 2 19:28:44.968920 kubelet[1523]: E1002 19:28:44.968777 1523 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.55" Oct 2 19:28:44.970194 kubelet[1523]: E1002 19:28:44.969885 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e952814df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.55 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918281439, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 967161240, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), 
Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e952814df" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:44.971309 kubelet[1523]: E1002 19:28:44.971196 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95284a47", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.55 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918295111, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 967168831, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e95284a47" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:44.974054 kubelet[1523]: E1002 19:28:44.973932 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95285ebf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.55 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918300351, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 967173688, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e95285ebf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:28:44.953000 audit[1545]: NETFILTER_CFG table=filter:4 family=2 entries=2 op=nft_register_chain pid=1545 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:44.953000 audit[1545]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7ffe74651da0 a2=0 a3=7ffe74651d8c items=0 ppid=1523 pid=1545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:44.953000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:28:44.975044 kubelet[1523]: E1002 19:28:44.974952 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e982ea4c6", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 969043142, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 969043142, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:28:44.977000 audit[1550]: NETFILTER_CFG table=filter:5 family=2 entries=2 op=nft_register_chain pid=1550 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:44.977000 audit[1550]: SYSCALL arch=c000003e syscall=46 success=yes exit=312 a0=3 a1=7fffb6a57f20 a2=0 a3=7fffb6a57f0c items=0 ppid=1523 pid=1550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:44.977000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Oct 2 19:28:45.035000 audit[1555]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_rule pid=1555 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.035000 audit[1555]: SYSCALL arch=c000003e syscall=46 success=yes exit=924 a0=3 a1=7ffe28f2a630 a2=0 a3=7ffe28f2a61c items=0 ppid=1523 pid=1555 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.035000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Oct 2 19:28:45.038000 audit[1556]: NETFILTER_CFG table=nat:7 family=2 entries=2 op=nft_register_chain pid=1556 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.038000 audit[1556]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7fff96720f80 a2=0 a3=7fff96720f6c items=0 ppid=1523 pid=1556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.038000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:28:45.045000 audit[1559]: NETFILTER_CFG table=nat:8 family=2 entries=1 op=nft_register_rule pid=1559 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.045000 audit[1559]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc232f6d60 a2=0 a3=7ffc232f6d4c items=0 ppid=1523 pid=1559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.045000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:28:45.051000 audit[1562]: NETFILTER_CFG table=filter:9 family=2 entries=1 op=nft_register_rule pid=1562 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.051000 audit[1562]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7ffe17f44b80 a2=0 a3=7ffe17f44b6c items=0 ppid=1523 pid=1562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.051000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:28:45.053000 audit[1563]: NETFILTER_CFG table=nat:10 family=2 entries=1 op=nft_register_chain pid=1563 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.053000 audit[1563]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7fff51b2ab60 a2=0 a3=7fff51b2ab4c items=0 ppid=1523 pid=1563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.053000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:28:45.055000 audit[1564]: NETFILTER_CFG table=nat:11 family=2 entries=1 op=nft_register_chain pid=1564 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.055000 audit[1564]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffb4d3bb80 a2=0 a3=7fffb4d3bb6c items=0 ppid=1523 pid=1564 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.055000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:28:45.058000 audit[1566]: NETFILTER_CFG table=nat:12 family=2 entries=1 op=nft_register_rule pid=1566 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.058000 audit[1566]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffd2713e070 a2=0 a3=7ffd2713e05c items=0 ppid=1523 pid=1566 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.058000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:28:45.067134 kubelet[1523]: E1002 19:28:45.067092 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:45.068689 kubelet[1523]: E1002 19:28:45.068636 1523 controller.go:144] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.128.0.55" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:28:45.061000 audit[1568]: NETFILTER_CFG table=nat:13 family=2 entries=2 op=nft_register_chain pid=1568 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.061000 audit[1568]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffed3940ec0 a2=0 a3=7ffed3940eac items=0 ppid=1523 pid=1568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.061000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:28:45.092000 audit[1571]: NETFILTER_CFG table=nat:14 family=2 entries=1 op=nft_register_rule pid=1571 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.092000 audit[1571]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7ffe8f6f0460 a2=0 a3=7ffe8f6f044c items=0 ppid=1523 pid=1571 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.092000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:28:45.096000 audit[1573]: NETFILTER_CFG table=nat:15 family=2 entries=1 op=nft_register_rule pid=1573 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.096000 audit[1573]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7fff1811f6c0 a2=0 a3=7fff1811f6ac items=0 ppid=1523 pid=1573 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.096000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:28:45.107000 audit[1576]: NETFILTER_CFG table=nat:16 family=2 entries=1 op=nft_register_rule pid=1576 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.107000 audit[1576]: SYSCALL arch=c000003e syscall=46 success=yes exit=540 a0=3 a1=7ffdb51d7ec0 a2=0 a3=7ffdb51d7eac items=0 ppid=1523 pid=1576 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.107000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:28:45.109468 kubelet[1523]: I1002 19:28:45.109425 1523 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv4 Oct 2 19:28:45.110000 audit[1577]: NETFILTER_CFG table=mangle:17 family=10 entries=2 op=nft_register_chain pid=1577 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.110000 audit[1577]: SYSCALL arch=c000003e syscall=46 success=yes exit=136 a0=3 a1=7ffc87457d40 a2=0 a3=7ffc87457d2c items=0 ppid=1523 pid=1577 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.110000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Oct 2 19:28:45.111000 audit[1578]: NETFILTER_CFG table=mangle:18 family=2 entries=1 op=nft_register_chain pid=1578 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.111000 audit[1578]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffcc5ab4aa0 a2=0 a3=7ffcc5ab4a8c items=0 ppid=1523 pid=1578 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.111000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:28:45.112000 audit[1579]: NETFILTER_CFG table=nat:19 family=10 entries=2 op=nft_register_chain pid=1579 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.112000 audit[1579]: SYSCALL arch=c000003e syscall=46 success=yes exit=124 a0=3 a1=7ffeeceb7bf0 a2=0 a3=7ffeeceb7bdc items=0 ppid=1523 pid=1579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.112000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D44524F50002D74006E6174 Oct 2 19:28:45.113000 audit[1580]: NETFILTER_CFG table=nat:20 family=2 entries=1 op=nft_register_chain pid=1580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.113000 audit[1580]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe5c5d2250 a2=0 a3=7ffe5c5d223c items=0 ppid=1523 pid=1580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.113000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:28:45.115000 audit[1582]: NETFILTER_CFG table=filter:21 family=2 entries=1 op=nft_register_chain pid=1582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:28:45.115000 audit[1582]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fffa02ee9d0 a2=0 a3=7fffa02ee9bc items=0 ppid=1523 pid=1582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.115000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:28:45.116000 audit[1583]: NETFILTER_CFG table=nat:22 family=10 entries=1 op=nft_register_rule pid=1583 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 2 19:28:45.116000 audit[1583]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffec25c7c60 a2=0 a3=7ffec25c7c4c items=0 ppid=1523 pid=1583 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.116000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D44524F50002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303038303030 Oct 2 19:28:45.117000 audit[1584]: NETFILTER_CFG table=filter:23 family=10 entries=2 op=nft_register_chain pid=1584 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.117000 audit[1584]: SYSCALL arch=c000003e syscall=46 success=yes exit=132 a0=3 a1=7fff24643310 a2=0 a3=7fff246432fc items=0 ppid=1523 pid=1584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.117000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Oct 2 19:28:45.121000 audit[1586]: NETFILTER_CFG table=filter:24 family=10 entries=1 op=nft_register_rule pid=1586 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.121000 audit[1586]: SYSCALL arch=c000003e syscall=46 success=yes exit=664 a0=3 a1=7fff628dd070 a2=0 a3=7fff628dd05c items=0 ppid=1523 pid=1586 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.121000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206669726577616C6C20666F722064726F7070696E67206D61726B6564207061636B657473002D6D006D61726B Oct 2 19:28:45.122000 audit[1587]: NETFILTER_CFG table=nat:25 family=10 entries=1 op=nft_register_chain pid=1587 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.122000 audit[1587]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffc91b3e620 a2=0 a3=7ffc91b3e60c items=0 ppid=1523 pid=1587 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.122000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4D41524B2D4D415351002D74006E6174 Oct 2 19:28:45.124000 audit[1588]: NETFILTER_CFG table=nat:26 family=10 entries=1 op=nft_register_chain pid=1588 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.124000 audit[1588]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffcf279d130 a2=0 a3=7ffcf279d11c items=0 ppid=1523 pid=1588 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.124000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Oct 2 19:28:45.127000 audit[1590]: NETFILTER_CFG table=nat:27 family=10 entries=1 op=nft_register_rule pid=1590 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.127000 audit[1590]: SYSCALL arch=c000003e syscall=46 success=yes exit=216 a0=3 a1=7ffc499c68e0 a2=0 a3=7ffc499c68cc items=0 ppid=1523 pid=1590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.127000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D4D41524B2D4D415351002D74006E6174002D6A004D41524B002D2D6F722D6D61726B0030783030303034303030 Oct 2 19:28:45.130000 audit[1592]: NETFILTER_CFG table=nat:28 family=10 entries=2 op=nft_register_chain pid=1592 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.130000 audit[1592]: SYSCALL arch=c000003e syscall=46 success=yes exit=612 a0=3 a1=7ffd92fd4ad0 a2=0 a3=7ffd92fd4abc items=0 ppid=1523 pid=1592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.130000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Oct 2 19:28:45.134000 audit[1594]: NETFILTER_CFG table=nat:29 family=10 entries=1 op=nft_register_rule pid=1594 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.134000 audit[1594]: SYSCALL arch=c000003e syscall=46 success=yes exit=364 a0=3 a1=7fff6c26a380 a2=0 a3=7fff6c26a36c items=0 ppid=1523 pid=1594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.134000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D006D61726B0000002D2D6D61726B00307830303030343030302F30783030303034303030002D6A0052455455524E Oct 2 19:28:45.137000 audit[1596]: NETFILTER_CFG table=nat:30 family=10 entries=1 op=nft_register_rule pid=1596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.137000 audit[1596]: SYSCALL arch=c000003e syscall=46 success=yes exit=220 a0=3 a1=7ffc6a036290 a2=0 a3=7ffc6a03627c items=0 ppid=1523 pid=1596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.137000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6A004D41524B002D2D786F722D6D61726B0030783030303034303030 Oct 2 19:28:45.142000 audit[1598]: NETFILTER_CFG table=nat:31 family=10 entries=1 op=nft_register_rule pid=1598 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.142000 audit[1598]: SYSCALL arch=c000003e syscall=46 success=yes exit=556 a0=3 a1=7ffdad6d3810 a2=0 a3=7ffdad6d37fc items=0 ppid=1523 pid=1598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.142000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D41004B5542452D504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732073657276696365207472616666696320726571756972696E6720534E4154002D6A004D415351554552414445 Oct 2 19:28:45.143752 kubelet[1523]: I1002 19:28:45.143723 1523 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Oct 2 19:28:45.143893 kubelet[1523]: I1002 19:28:45.143763 1523 status_manager.go:161] "Starting to sync pod status with apiserver" Oct 2 19:28:45.143893 kubelet[1523]: I1002 19:28:45.143796 1523 kubelet.go:2010] "Starting kubelet main sync loop" Oct 2 19:28:45.144007 kubelet[1523]: E1002 19:28:45.143898 1523 kubelet.go:2034] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Oct 2 19:28:45.144000 audit[1599]: NETFILTER_CFG table=mangle:32 family=10 entries=1 op=nft_register_chain pid=1599 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.144000 audit[1599]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc2904b9c0 a2=0 a3=7ffc2904b9ac items=0 ppid=1523 pid=1599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.144000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Oct 2 19:28:45.146645 kubelet[1523]: W1002 19:28:45.146619 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:28:45.146799 kubelet[1523]: E1002 19:28:45.146783 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:28:45.146000 audit[1600]: NETFILTER_CFG table=nat:33 family=10 entries=1 op=nft_register_chain pid=1600 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.146000 audit[1600]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc431e3850 a2=0 a3=7ffc431e383c items=0 ppid=1523 pid=1600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.146000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Oct 2 19:28:45.148000 audit[1601]: NETFILTER_CFG table=filter:34 family=10 entries=1 op=nft_register_chain pid=1601 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:28:45.148000 audit[1601]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffc436c34b0 a2=0 a3=7ffc436c349c items=0 ppid=1523 pid=1601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:28:45.148000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Oct 2 19:28:45.167857 
kubelet[1523]: E1002 19:28:45.167792 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:45.170241 kubelet[1523]: I1002 19:28:45.170211 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.55" Oct 2 19:28:45.172023 kubelet[1523]: E1002 19:28:45.171990 1523 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.55" Oct 2 19:28:45.172317 kubelet[1523]: E1002 19:28:45.172215 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e952814df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.55 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918281439, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 45, 170132419, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e952814df" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:45.173687 kubelet[1523]: E1002 19:28:45.173589 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95284a47", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.55 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918295111, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 45, 170152020, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e95284a47" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:28:45.260669 kubelet[1523]: E1002 19:28:45.260551 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95285ebf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.55 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918300351, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 45, 170177056, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e95285ebf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:45.268928 kubelet[1523]: E1002 19:28:45.268872 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:45.369112 kubelet[1523]: E1002 19:28:45.368973 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:45.469853 kubelet[1523]: E1002 19:28:45.469778 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:45.471351 kubelet[1523]: E1002 19:28:45.471296 1523 controller.go:144] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.128.0.55" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:28:45.569980 kubelet[1523]: E1002 19:28:45.569906 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:45.572988 kubelet[1523]: I1002 19:28:45.572930 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.55" Oct 2 19:28:45.574498 kubelet[1523]: E1002 19:28:45.574462 1523 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.55" Oct 2 19:28:45.574667 kubelet[1523]: E1002 19:28:45.574453 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e952814df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.55 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918281439, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 45, 572870187, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e952814df" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:45.660776 kubelet[1523]: E1002 19:28:45.660462 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95284a47", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.55 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918295111, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 45, 572889744, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e95284a47" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:28:45.671067 kubelet[1523]: E1002 19:28:45.671007 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:45.771914 kubelet[1523]: E1002 19:28:45.771840 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:45.817715 kubelet[1523]: W1002 19:28:45.817663 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:28:45.817715 kubelet[1523]: E1002 19:28:45.817714 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:28:45.851323 kubelet[1523]: E1002 19:28:45.851247 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:45.857231 kubelet[1523]: W1002 19:28:45.857176 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.128.0.55" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:28:45.857231 kubelet[1523]: E1002 19:28:45.857228 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.55" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:28:45.860910 kubelet[1523]: E1002 19:28:45.860741 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95285ebf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.55 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918300351, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 45, 572894913, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e95285ebf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
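Every list/watch above is rejected because the kubelet is still reaching the API server as `system:anonymous`, that is, without usable client credentials, so RBAC denies services, nodes, events, and leases alike until TLS bootstrap hands it a node identity. Purely as an illustrative sketch (not part of this log), the same authorization question can be asked from outside the kubelet with a SelfSubjectAccessReview; the kubeconfig path, package layout, and program below are assumptions, not the node's actual tooling:

```go
// Hypothetical diagnostic (not from the log): ask the API server whether the
// credentials in a given kubeconfig may list Services cluster-wide, which is
// exactly the call the kubelet's informers are being denied above.
package main

import (
	"context"
	"fmt"

	authv1 "k8s.io/api/authorization/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Path is an assumption for illustration; the node's real kubeconfig may differ.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/kubelet/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	review := &authv1.SelfSubjectAccessReview{
		Spec: authv1.SelfSubjectAccessReviewSpec{
			ResourceAttributes: &authv1.ResourceAttributes{
				Verb:     "list",
				Resource: "services",
			},
		},
	}
	resp, err := cs.AuthorizationV1().SelfSubjectAccessReviews().Create(context.TODO(), review, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	// With the anonymous credentials in use above, this reports allowed=false.
	fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
}
```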
Oct 2 19:28:45.872014 kubelet[1523]: E1002 19:28:45.871941 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:45.951314 kubelet[1523]: W1002 19:28:45.951152 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:28:45.951314 kubelet[1523]: E1002 19:28:45.951205 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:28:45.972991 kubelet[1523]: E1002 19:28:45.972926 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:46.073656 kubelet[1523]: E1002 19:28:46.073591 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:46.173241 kubelet[1523]: W1002 19:28:46.173189 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:28:46.173435 kubelet[1523]: E1002 19:28:46.173354 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:28:46.174232 kubelet[1523]: E1002 19:28:46.174164 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:46.273452 kubelet[1523]: E1002 19:28:46.273306 1523 controller.go:144] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.128.0.55" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:28:46.274385 kubelet[1523]: E1002 19:28:46.274336 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:46.375352 kubelet[1523]: E1002 19:28:46.375288 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:46.376519 kubelet[1523]: I1002 19:28:46.376481 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.55" Oct 2 19:28:46.378353 kubelet[1523]: E1002 19:28:46.378316 1523 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.55" Oct 2 19:28:46.378585 kubelet[1523]: E1002 19:28:46.378294 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e952814df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, 
InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.55 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918281439, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 46, 376380032, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e952814df" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:46.380187 kubelet[1523]: E1002 19:28:46.380070 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95284a47", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.55 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918295111, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 46, 376394898, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e95284a47" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Oct 2 19:28:46.460325 kubelet[1523]: E1002 19:28:46.460200 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95285ebf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.55 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918300351, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 46, 376442996, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e95285ebf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:46.476786 kubelet[1523]: E1002 19:28:46.476722 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:46.577454 kubelet[1523]: E1002 19:28:46.577387 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:46.678089 kubelet[1523]: E1002 19:28:46.678021 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:46.778838 kubelet[1523]: E1002 19:28:46.778729 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:46.852580 kubelet[1523]: E1002 19:28:46.852424 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:46.879735 kubelet[1523]: E1002 19:28:46.879660 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:46.980715 kubelet[1523]: E1002 19:28:46.980646 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:47.081428 kubelet[1523]: E1002 19:28:47.081369 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:47.182059 kubelet[1523]: E1002 19:28:47.181912 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:47.282470 kubelet[1523]: E1002 19:28:47.282410 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:47.383377 kubelet[1523]: E1002 19:28:47.383327 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:47.484333 kubelet[1523]: E1002 19:28:47.484016 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:47.490729 kubelet[1523]: W1002 19:28:47.490669 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:28:47.490729 kubelet[1523]: E1002 19:28:47.490734 1523 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:28:47.530340 kubelet[1523]: W1002 19:28:47.530273 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.128.0.55" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:28:47.530340 kubelet[1523]: E1002 19:28:47.530315 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.55" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:28:47.584912 kubelet[1523]: E1002 19:28:47.584821 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:47.685488 kubelet[1523]: E1002 19:28:47.685436 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:47.786333 kubelet[1523]: E1002 19:28:47.786176 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:47.852983 kubelet[1523]: E1002 19:28:47.852916 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:47.875080 kubelet[1523]: E1002 19:28:47.875024 1523 controller.go:144] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.128.0.55" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:28:47.887238 kubelet[1523]: E1002 19:28:47.887168 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:47.979801 kubelet[1523]: I1002 19:28:47.979760 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.55" Oct 2 19:28:47.981293 kubelet[1523]: E1002 19:28:47.981250 1523 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.55" Oct 2 19:28:47.981442 kubelet[1523]: E1002 19:28:47.981257 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e952814df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.55 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918281439, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 47, 979697793, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e952814df" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:47.982545 kubelet[1523]: E1002 19:28:47.982443 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95284a47", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.55 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918295111, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 47, 979719492, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e95284a47" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:47.984188 kubelet[1523]: E1002 19:28:47.984097 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95285ebf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.55 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918300351, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 47, 979724717, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e95285ebf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
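The rejected events are always the same node-condition trio (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID); only their Count and LastTimestamp grow with each registration attempt. As an illustration only, not taken from this log, the surviving copies of such events can later be read back with a field selector on the involved Node; the helper and its package name are hypothetical and assume a client-go clientset built as in the earlier sketch:

```go
// Hypothetical helper (not from the log): list the node-condition Events for
// 10.128.0.55, the object named in the rejected payloads above.
package kubeletdiag

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func listNodeEvents(cs kubernetes.Interface) error {
	events, err := cs.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{
		FieldSelector: "involvedObject.kind=Node,involvedObject.name=10.128.0.55",
	})
	if err != nil {
		return err
	}
	for _, e := range events.Items {
		// Reason and Count mirror the fields visible in the rejected payloads.
		fmt.Printf("%s reason=%s count=%d\n", e.Name, e.Reason, e.Count)
	}
	return nil
}
```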
Oct 2 19:28:47.988342 kubelet[1523]: E1002 19:28:47.988303 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:48.089056 kubelet[1523]: E1002 19:28:48.088994 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:48.190106 kubelet[1523]: E1002 19:28:48.190039 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:48.290895 kubelet[1523]: E1002 19:28:48.290835 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:48.338636 kubelet[1523]: W1002 19:28:48.338594 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:28:48.338636 kubelet[1523]: E1002 19:28:48.338638 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:28:48.391280 kubelet[1523]: E1002 19:28:48.391139 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:48.492052 kubelet[1523]: E1002 19:28:48.491988 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:48.592590 kubelet[1523]: E1002 19:28:48.592528 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:48.693297 kubelet[1523]: E1002 19:28:48.693116 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:48.740695 kubelet[1523]: W1002 19:28:48.740647 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:28:48.740695 kubelet[1523]: E1002 19:28:48.740694 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:28:48.794188 kubelet[1523]: E1002 19:28:48.794120 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:48.853139 kubelet[1523]: E1002 19:28:48.853068 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:48.894353 kubelet[1523]: E1002 19:28:48.894302 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:48.995176 kubelet[1523]: E1002 19:28:48.995024 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:49.095749 kubelet[1523]: E1002 19:28:49.095691 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:49.196147 kubelet[1523]: E1002 19:28:49.196080 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:49.296857 kubelet[1523]: E1002 19:28:49.296701 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:49.397571 kubelet[1523]: E1002 19:28:49.397515 1523 kubelet.go:2448] "Error getting node" 
err="node \"10.128.0.55\" not found" Oct 2 19:28:49.498357 kubelet[1523]: E1002 19:28:49.498292 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:49.599071 kubelet[1523]: E1002 19:28:49.599006 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:49.699724 kubelet[1523]: E1002 19:28:49.699657 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:49.800440 kubelet[1523]: E1002 19:28:49.800369 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:49.854025 kubelet[1523]: E1002 19:28:49.853876 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:49.900987 kubelet[1523]: E1002 19:28:49.900906 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:49.964797 kubelet[1523]: E1002 19:28:49.964749 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:50.001536 kubelet[1523]: E1002 19:28:50.001474 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:50.102180 kubelet[1523]: E1002 19:28:50.102113 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:50.202632 kubelet[1523]: E1002 19:28:50.202479 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:50.303207 kubelet[1523]: E1002 19:28:50.303126 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:50.403981 kubelet[1523]: E1002 19:28:50.403912 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:50.504733 kubelet[1523]: E1002 19:28:50.504582 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:50.605406 kubelet[1523]: E1002 19:28:50.605327 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:50.706015 kubelet[1523]: E1002 19:28:50.705940 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:50.806573 kubelet[1523]: E1002 19:28:50.806413 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:50.854308 kubelet[1523]: E1002 19:28:50.854240 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:50.906780 kubelet[1523]: E1002 19:28:50.906720 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:51.007577 kubelet[1523]: E1002 19:28:51.007489 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:51.077535 kubelet[1523]: E1002 19:28:51.077483 1523 controller.go:144] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.128.0.55" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease" Oct 2 19:28:51.107722 kubelet[1523]: E1002 19:28:51.107620 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:51.183082 kubelet[1523]: I1002 19:28:51.182804 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.55" Oct 2 19:28:51.184651 kubelet[1523]: E1002 19:28:51.184359 1523 
kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.128.0.55" Oct 2 19:28:51.184651 kubelet[1523]: E1002 19:28:51.184337 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e952814df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.128.0.55 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918281439, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 51, 182750383, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e952814df" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:51.185764 kubelet[1523]: E1002 19:28:51.185637 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95284a47", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.128.0.55 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918295111, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 51, 182762337, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e95284a47" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
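The lease controller's retry interval doubles on every failure: 800ms, 1.6s, 3.2s, and now 6.4s all appear above, the usual capped exponential backoff shape. The sketch below only reproduces that observed timing with the apimachinery wait helpers; it is not the kubelet's own implementation, and the function names are assumptions:

```go
// Illustrative sketch of the doubling retry pattern observed above
// (800ms -> 1.6s -> 3.2s -> 6.4s). Not the kubelet's actual code.
package kubeletdiag

import (
	"fmt"
	"time"

	"k8s.io/apimachinery/pkg/util/wait"
)

// retryEnsureLease retries the supplied ensureLease function with exponential
// backoff until it succeeds or the step budget runs out.
func retryEnsureLease(ensureLease func() error) error {
	backoff := wait.Backoff{
		Duration: 800 * time.Millisecond, // first retry delay seen in the log
		Factor:   2.0,                    // doubles each attempt: 1.6s, 3.2s, 6.4s, ...
		Steps:    5,
	}
	return wait.ExponentialBackoff(backoff, func() (bool, error) {
		if err := ensureLease(); err != nil {
			fmt.Printf("failed to ensure lease exists, will retry: %v\n", err)
			return false, nil // not done yet, keep backing off
		}
		return true, nil
	})
}
```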
Oct 2 19:28:51.186846 kubelet[1523]: E1002 19:28:51.186736 1523 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.128.0.55.178a610e95285ebf", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.128.0.55", UID:"10.128.0.55", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.128.0.55 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.128.0.55"}, FirstTimestamp:time.Date(2023, time.October, 2, 19, 28, 44, 918300351, time.Local), LastTimestamp:time.Date(2023, time.October, 2, 19, 28, 51, 182767707, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.128.0.55.178a610e95285ebf" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Oct 2 19:28:51.208052 kubelet[1523]: E1002 19:28:51.207984 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:51.308773 kubelet[1523]: E1002 19:28:51.308706 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:51.409757 kubelet[1523]: E1002 19:28:51.409603 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:51.510431 kubelet[1523]: E1002 19:28:51.510380 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:51.599566 kubelet[1523]: W1002 19:28:51.599521 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes "10.128.0.55" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:28:51.599566 kubelet[1523]: E1002 19:28:51.599565 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.128.0.55" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Oct 2 19:28:51.610822 kubelet[1523]: E1002 19:28:51.610758 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:51.711522 kubelet[1523]: E1002 19:28:51.711376 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:51.812014 kubelet[1523]: E1002 19:28:51.811946 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:51.854646 kubelet[1523]: E1002 19:28:51.854579 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:51.912522 kubelet[1523]: E1002 19:28:51.912470 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:52.007889 kubelet[1523]: W1002 19:28:52.007740 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User 
"system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:28:52.007889 kubelet[1523]: E1002 19:28:52.007837 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Oct 2 19:28:52.012929 kubelet[1523]: E1002 19:28:52.012872 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:52.113421 kubelet[1523]: E1002 19:28:52.113361 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:52.137117 kubelet[1523]: W1002 19:28:52.137064 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:28:52.137328 kubelet[1523]: E1002 19:28:52.137133 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Oct 2 19:28:52.214246 kubelet[1523]: E1002 19:28:52.214179 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:52.315001 kubelet[1523]: E1002 19:28:52.314858 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:52.415694 kubelet[1523]: E1002 19:28:52.415623 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:52.516164 kubelet[1523]: E1002 19:28:52.516103 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:52.616710 kubelet[1523]: E1002 19:28:52.616632 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:52.717354 kubelet[1523]: E1002 19:28:52.717302 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:52.817930 kubelet[1523]: E1002 19:28:52.817876 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:52.855864 kubelet[1523]: E1002 19:28:52.855756 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:52.918843 kubelet[1523]: E1002 19:28:52.918691 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:53.019386 kubelet[1523]: E1002 19:28:53.019302 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:53.120205 kubelet[1523]: E1002 19:28:53.120149 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:53.220450 kubelet[1523]: E1002 19:28:53.220287 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:53.320996 kubelet[1523]: E1002 19:28:53.320924 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:53.421849 kubelet[1523]: E1002 19:28:53.421773 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:53.522673 kubelet[1523]: E1002 19:28:53.522543 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:53.623230 kubelet[1523]: E1002 19:28:53.623177 1523 kubelet.go:2448] "Error getting node" 
err="node \"10.128.0.55\" not found" Oct 2 19:28:53.723903 kubelet[1523]: E1002 19:28:53.723842 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:53.824394 kubelet[1523]: E1002 19:28:53.824340 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:53.856599 kubelet[1523]: E1002 19:28:53.856534 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:53.925476 kubelet[1523]: E1002 19:28:53.925409 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:54.026295 kubelet[1523]: E1002 19:28:54.026207 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:54.127085 kubelet[1523]: E1002 19:28:54.126945 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:54.227437 kubelet[1523]: E1002 19:28:54.227366 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:54.317550 kubelet[1523]: W1002 19:28:54.317505 1523 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:134: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:28:54.317550 kubelet[1523]: E1002 19:28:54.317550 1523 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Oct 2 19:28:54.327672 kubelet[1523]: E1002 19:28:54.327620 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:54.428357 kubelet[1523]: E1002 19:28:54.428209 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:54.528928 kubelet[1523]: E1002 19:28:54.528865 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:54.629596 kubelet[1523]: E1002 19:28:54.629533 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:54.730333 kubelet[1523]: E1002 19:28:54.730202 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:54.830954 kubelet[1523]: E1002 19:28:54.830884 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:54.834269 kubelet[1523]: I1002 19:28:54.834207 1523 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials" Oct 2 19:28:54.856751 kubelet[1523]: E1002 19:28:54.856679 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:54.932159 kubelet[1523]: E1002 19:28:54.932092 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:54.965234 kubelet[1523]: E1002 19:28:54.965177 1523 eviction_manager.go:256] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.128.0.55\" not found" Oct 2 19:28:54.965975 kubelet[1523]: E1002 19:28:54.965928 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:55.032905 kubelet[1523]: E1002 19:28:55.032750 
1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:55.133493 kubelet[1523]: E1002 19:28:55.133441 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:55.214826 kubelet[1523]: E1002 19:28:55.214729 1523 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.128.0.55" not found Oct 2 19:28:55.234491 kubelet[1523]: E1002 19:28:55.234446 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:55.335455 kubelet[1523]: E1002 19:28:55.335393 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:55.436149 kubelet[1523]: E1002 19:28:55.436094 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:55.536939 kubelet[1523]: E1002 19:28:55.536832 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:55.637525 kubelet[1523]: E1002 19:28:55.637372 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:55.738043 kubelet[1523]: E1002 19:28:55.737985 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:55.838805 kubelet[1523]: E1002 19:28:55.838738 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:55.857203 kubelet[1523]: E1002 19:28:55.857138 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:55.939742 kubelet[1523]: E1002 19:28:55.939608 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:56.040469 kubelet[1523]: E1002 19:28:56.040393 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:56.141445 kubelet[1523]: E1002 19:28:56.141396 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:56.242455 kubelet[1523]: E1002 19:28:56.242325 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:56.273085 kubelet[1523]: E1002 19:28:56.273034 1523 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.128.0.55" not found Oct 2 19:28:56.342511 kubelet[1523]: E1002 19:28:56.342439 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:56.443226 kubelet[1523]: E1002 19:28:56.443176 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:56.543961 kubelet[1523]: E1002 19:28:56.543805 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:56.644715 kubelet[1523]: E1002 19:28:56.644640 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:56.745513 kubelet[1523]: E1002 19:28:56.745445 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:56.846331 kubelet[1523]: E1002 19:28:56.846261 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:56.857718 kubelet[1523]: E1002 19:28:56.857648 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:56.947336 kubelet[1523]: E1002 19:28:56.947287 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:57.048035 
kubelet[1523]: E1002 19:28:57.047968 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:57.149265 kubelet[1523]: E1002 19:28:57.149119 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:57.250169 kubelet[1523]: E1002 19:28:57.250117 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:57.350499 kubelet[1523]: E1002 19:28:57.350443 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:57.451377 kubelet[1523]: E1002 19:28:57.451092 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:57.482850 kubelet[1523]: E1002 19:28:57.482774 1523 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.128.0.55\" not found" node="10.128.0.55" Oct 2 19:28:57.552179 kubelet[1523]: E1002 19:28:57.552127 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:57.585795 kubelet[1523]: I1002 19:28:57.585744 1523 kubelet_node_status.go:70] "Attempting to register node" node="10.128.0.55" Oct 2 19:28:57.652941 kubelet[1523]: E1002 19:28:57.652878 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:57.675098 kubelet[1523]: I1002 19:28:57.675033 1523 kubelet_node_status.go:73] "Successfully registered node" node="10.128.0.55" Oct 2 19:28:57.753447 kubelet[1523]: E1002 19:28:57.753286 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:57.791114 sudo[1343]: pam_unix(sudo:session): session closed for user root Oct 2 19:28:57.790000 audit[1343]: USER_END pid=1343 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:28:57.796831 kernel: kauditd_printk_skb: 541 callbacks suppressed Oct 2 19:28:57.796964 kernel: audit: type=1106 audit(1696274937.790:636): pid=1343 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:28:57.790000 audit[1343]: CRED_DISP pid=1343 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:28:57.836287 sshd[1340]: pam_unix(sshd:session): session closed for user core Oct 2 19:28:57.841911 systemd[1]: sshd@6-10.128.0.55:22-147.75.109.163:49646.service: Deactivated successfully. Oct 2 19:28:57.843054 systemd[1]: session-7.scope: Deactivated successfully. Oct 2 19:28:57.844954 kernel: audit: type=1104 audit(1696274937.790:637): pid=1343 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Oct 2 19:28:57.837000 audit[1340]: USER_END pid=1340 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:28:57.845267 systemd-logind[1121]: Session 7 logged out. 
Waiting for processes to exit. Oct 2 19:28:57.847228 systemd-logind[1121]: Removed session 7. Oct 2 19:28:57.854368 kubelet[1523]: E1002 19:28:57.854322 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:57.857861 kubelet[1523]: E1002 19:28:57.857837 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:57.877374 kernel: audit: type=1106 audit(1696274937.837:638): pid=1340 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_mkhomedir,pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:28:57.877552 kernel: audit: type=1104 audit(1696274937.837:639): pid=1340 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:28:57.837000 audit[1340]: CRED_DISP pid=1340 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_oslogin_login acct="core" exe="/usr/sbin/sshd" hostname=147.75.109.163 addr=147.75.109.163 terminal=ssh res=success' Oct 2 19:28:57.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.55:22-147.75.109.163:49646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:57.926615 kernel: audit: type=1131 audit(1696274937.841:640): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-10.128.0.55:22-147.75.109.163:49646 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:57.955335 kubelet[1523]: E1002 19:28:57.955280 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:58.056460 kubelet[1523]: E1002 19:28:58.056284 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:58.156839 kubelet[1523]: E1002 19:28:58.156773 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:58.257697 kubelet[1523]: E1002 19:28:58.257644 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:58.358201 kubelet[1523]: E1002 19:28:58.358098 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:58.458783 kubelet[1523]: E1002 19:28:58.458599 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:58.559280 kubelet[1523]: E1002 19:28:58.559232 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:58.660151 kubelet[1523]: E1002 19:28:58.660012 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:58.760722 kubelet[1523]: E1002 19:28:58.760665 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:58.858897 kubelet[1523]: E1002 19:28:58.858839 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:58.861133 kubelet[1523]: E1002 19:28:58.861089 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:58.961885 kubelet[1523]: E1002 19:28:58.961720 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:59.062788 kubelet[1523]: E1002 19:28:59.062720 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:59.143439 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Oct 2 19:28:59.166866 kernel: audit: type=1131 audit(1696274939.142:641): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Oct 2 19:28:59.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hostnamed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Oct 2 19:28:59.167105 kubelet[1523]: E1002 19:28:59.167048 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:59.187000 audit: BPF prog-id=71 op=UNLOAD Oct 2 19:28:59.187000 audit: BPF prog-id=70 op=UNLOAD Oct 2 19:28:59.202537 kernel: audit: type=1334 audit(1696274939.187:642): prog-id=71 op=UNLOAD Oct 2 19:28:59.202637 kernel: audit: type=1334 audit(1696274939.187:643): prog-id=70 op=UNLOAD Oct 2 19:28:59.202671 kernel: audit: type=1334 audit(1696274939.187:644): prog-id=69 op=UNLOAD Oct 2 19:28:59.187000 audit: BPF prog-id=69 op=UNLOAD Oct 2 19:28:59.267736 kubelet[1523]: E1002 19:28:59.267588 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:59.368406 kubelet[1523]: E1002 19:28:59.368344 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:59.468917 kubelet[1523]: E1002 19:28:59.468864 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:59.569561 kubelet[1523]: E1002 19:28:59.569402 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:59.670043 kubelet[1523]: E1002 19:28:59.669981 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:59.770765 kubelet[1523]: E1002 19:28:59.770713 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:59.859534 kubelet[1523]: E1002 19:28:59.859462 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:28:59.870860 kubelet[1523]: E1002 19:28:59.870786 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:28:59.966849 kubelet[1523]: E1002 19:28:59.966799 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:28:59.971154 kubelet[1523]: E1002 19:28:59.971086 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:00.072024 kubelet[1523]: E1002 19:29:00.071961 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:00.172241 kubelet[1523]: E1002 19:29:00.172097 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:00.272844 kubelet[1523]: E1002 19:29:00.272758 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:00.373662 kubelet[1523]: E1002 19:29:00.373605 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:00.474330 kubelet[1523]: E1002 19:29:00.474186 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:00.574888 kubelet[1523]: E1002 19:29:00.574828 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:00.675392 kubelet[1523]: E1002 19:29:00.675337 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:00.776294 kubelet[1523]: E1002 19:29:00.776151 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:00.860004 kubelet[1523]: E1002 19:29:00.859932 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:00.876292 kubelet[1523]: E1002 19:29:00.876235 1523 kubelet.go:2448] "Error getting node" err="node 
\"10.128.0.55\" not found" Oct 2 19:29:00.977013 kubelet[1523]: E1002 19:29:00.976943 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:01.077559 kubelet[1523]: E1002 19:29:01.077488 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:01.177667 kubelet[1523]: E1002 19:29:01.177605 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:01.277928 kubelet[1523]: E1002 19:29:01.277880 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:01.378955 kubelet[1523]: E1002 19:29:01.378794 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:01.479422 kubelet[1523]: E1002 19:29:01.479377 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:01.580091 kubelet[1523]: E1002 19:29:01.580021 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:01.680857 kubelet[1523]: E1002 19:29:01.680694 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:01.781336 kubelet[1523]: E1002 19:29:01.781282 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:01.861074 kubelet[1523]: E1002 19:29:01.861005 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:01.881649 kubelet[1523]: E1002 19:29:01.881578 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:01.982558 kubelet[1523]: E1002 19:29:01.982418 1523 kubelet.go:2448] "Error getting node" err="node \"10.128.0.55\" not found" Oct 2 19:29:02.083019 kubelet[1523]: I1002 19:29:02.082967 1523 kuberuntime_manager.go:1050] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Oct 2 19:29:02.083618 env[1136]: time="2023-10-02T19:29:02.083554929Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Oct 2 19:29:02.084211 kubelet[1523]: I1002 19:29:02.083847 1523 kubelet_network.go:60] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Oct 2 19:29:02.084463 kubelet[1523]: E1002 19:29:02.084327 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:02.860859 kubelet[1523]: I1002 19:29:02.860793 1523 apiserver.go:52] "Watching apiserver" Oct 2 19:29:02.861164 kubelet[1523]: E1002 19:29:02.861136 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:02.864005 kubelet[1523]: I1002 19:29:02.863961 1523 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:29:02.864125 kubelet[1523]: I1002 19:29:02.864061 1523 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:29:02.873080 systemd[1]: Created slice kubepods-besteffort-pod2cb12e3e_9253_4b05_9ea0_9a4e7025cfa0.slice. 
Oct 2 19:29:02.887873 kubelet[1523]: I1002 19:29:02.887832 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f68jv\" (UniqueName: \"kubernetes.io/projected/2cb12e3e-9253-4b05-9ea0-9a4e7025cfa0-kube-api-access-f68jv\") pod \"kube-proxy-ngtls\" (UID: \"2cb12e3e-9253-4b05-9ea0-9a4e7025cfa0\") " pod="kube-system/kube-proxy-ngtls" Oct 2 19:29:02.888177 kubelet[1523]: I1002 19:29:02.888134 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-hostproc\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888291 kubelet[1523]: I1002 19:29:02.888209 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-lib-modules\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888291 kubelet[1523]: I1002 19:29:02.888268 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrllg\" (UniqueName: \"kubernetes.io/projected/3da7fc87-301b-4122-9373-183d45cbc169-kube-api-access-lrllg\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888449 kubelet[1523]: I1002 19:29:02.888315 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2cb12e3e-9253-4b05-9ea0-9a4e7025cfa0-kube-proxy\") pod \"kube-proxy-ngtls\" (UID: \"2cb12e3e-9253-4b05-9ea0-9a4e7025cfa0\") " pod="kube-system/kube-proxy-ngtls" Oct 2 19:29:02.888449 kubelet[1523]: I1002 19:29:02.888352 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2cb12e3e-9253-4b05-9ea0-9a4e7025cfa0-xtables-lock\") pod \"kube-proxy-ngtls\" (UID: \"2cb12e3e-9253-4b05-9ea0-9a4e7025cfa0\") " pod="kube-system/kube-proxy-ngtls" Oct 2 19:29:02.888449 kubelet[1523]: I1002 19:29:02.888408 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-bpf-maps\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888449 kubelet[1523]: I1002 19:29:02.888444 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3da7fc87-301b-4122-9373-183d45cbc169-hubble-tls\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888662 kubelet[1523]: I1002 19:29:02.888479 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-cilium-cgroup\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888662 kubelet[1523]: I1002 19:29:02.888524 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-cni-path\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888662 kubelet[1523]: I1002 19:29:02.888560 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-host-proc-sys-net\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888662 kubelet[1523]: I1002 19:29:02.888597 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3da7fc87-301b-4122-9373-183d45cbc169-clustermesh-secrets\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888662 kubelet[1523]: I1002 19:29:02.888633 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3da7fc87-301b-4122-9373-183d45cbc169-cilium-config-path\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888979 kubelet[1523]: I1002 19:29:02.888668 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-host-proc-sys-kernel\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888979 kubelet[1523]: I1002 19:29:02.888703 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2cb12e3e-9253-4b05-9ea0-9a4e7025cfa0-lib-modules\") pod \"kube-proxy-ngtls\" (UID: \"2cb12e3e-9253-4b05-9ea0-9a4e7025cfa0\") " pod="kube-system/kube-proxy-ngtls" Oct 2 19:29:02.888979 kubelet[1523]: I1002 19:29:02.888768 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-cilium-run\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888979 kubelet[1523]: I1002 19:29:02.888805 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-etc-cni-netd\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888979 kubelet[1523]: I1002 19:29:02.888874 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-xtables-lock\") pod \"cilium-5rfgw\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " pod="kube-system/cilium-5rfgw" Oct 2 19:29:02.888979 kubelet[1523]: I1002 19:29:02.888894 1523 reconciler.go:169] "Reconciler: start to sync state" Oct 2 19:29:02.894579 systemd[1]: Created slice kubepods-burstable-pod3da7fc87_301b_4122_9373_183d45cbc169.slice. 
Oct 2 19:29:03.194030 env[1136]: time="2023-10-02T19:29:03.193851768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ngtls,Uid:2cb12e3e-9253-4b05-9ea0-9a4e7025cfa0,Namespace:kube-system,Attempt:0,}" Oct 2 19:29:03.502511 env[1136]: time="2023-10-02T19:29:03.502277610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5rfgw,Uid:3da7fc87-301b-4122-9373-183d45cbc169,Namespace:kube-system,Attempt:0,}" Oct 2 19:29:03.771044 env[1136]: time="2023-10-02T19:29:03.770605931Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:03.775642 env[1136]: time="2023-10-02T19:29:03.775574439Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:03.777609 env[1136]: time="2023-10-02T19:29:03.777565369Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:03.779907 env[1136]: time="2023-10-02T19:29:03.779865770Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:03.781570 env[1136]: time="2023-10-02T19:29:03.781519622Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:03.782612 env[1136]: time="2023-10-02T19:29:03.782575031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:03.784999 env[1136]: time="2023-10-02T19:29:03.784949338Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:03.785876 env[1136]: time="2023-10-02T19:29:03.785839699Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:03.817486 env[1136]: time="2023-10-02T19:29:03.815163942Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:29:03.817486 env[1136]: time="2023-10-02T19:29:03.815205784Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:29:03.817486 env[1136]: time="2023-10-02T19:29:03.815224434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:29:03.817486 env[1136]: time="2023-10-02T19:29:03.815421213Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/115274754ff91fe3499128abd5cf0c609d042fb9825b521882e961c01574a1e9 pid=1627 runtime=io.containerd.runc.v2 Oct 2 19:29:03.817856 env[1136]: time="2023-10-02T19:29:03.813937225Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:29:03.817856 env[1136]: time="2023-10-02T19:29:03.813992073Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:29:03.817856 env[1136]: time="2023-10-02T19:29:03.814011227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:29:03.817856 env[1136]: time="2023-10-02T19:29:03.814220650Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097 pid=1624 runtime=io.containerd.runc.v2 Oct 2 19:29:03.839638 systemd[1]: Started cri-containerd-4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097.scope. Oct 2 19:29:03.862088 systemd[1]: Started cri-containerd-115274754ff91fe3499128abd5cf0c609d042fb9825b521882e961c01574a1e9.scope. Oct 2 19:29:03.866901 kubelet[1523]: E1002 19:29:03.866043 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:03.870000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892869 kernel: audit: type=1400 audit(1696274943.870:645): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.923962 kernel: audit: type=1400 audit(1696274943.870:646): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.870000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.870000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.951044 env[1136]: time="2023-10-02T19:29:03.940215940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-5rfgw,Uid:3da7fc87-301b-4122-9373-183d45cbc169,Namespace:kube-system,Attempt:0,} returns sandbox id \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\"" Oct 2 19:29:03.952915 kernel: audit: type=1400 audit(1696274943.870:647): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.953626 kubelet[1523]: E1002 19:29:03.953338 1523 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url 
http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Oct 2 19:29:03.954276 env[1136]: time="2023-10-02T19:29:03.954231301Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\"" Oct 2 19:29:03.870000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.003045 kernel: audit: type=1400 audit(1696274943.870:648): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.003205 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:29:04.003242 kernel: audit: type=1400 audit(1696274943.870:649): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.003275 kernel: audit: audit_lost=1 audit_rate_limit=0 audit_backlog_limit=64 Oct 2 19:29:03.870000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.014148 kernel: audit: backlog limit exceeded Oct 2 19:29:04.035112 kernel: audit: type=1400 audit(1696274943.870:650): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.870000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.036776 kernel: audit: audit_backlog=65 > audit_backlog_limit=64 Oct 2 19:29:03.870000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.870000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.870000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.891000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.891000 audit: BPF prog-id=73 op=LOAD Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000145c48 a2=10 a3=1c items=0 ppid=1624 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:03.892000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333034616430653435663661313962613036343836633362356164 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001456b0 a2=3c a3=c items=0 ppid=1624 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:03.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333034616430653435663661313962613036343836633362356164 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit: BPF prog-id=74 op=LOAD Oct 2 19:29:03.892000 audit[1644]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001459d8 a2=78 a3=c000380bb0 items=0 ppid=1624 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:03.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333034616430653435663661313962613036343836633362356164 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit: BPF prog-id=75 op=LOAD Oct 2 19:29:03.892000 audit[1644]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000145770 a2=78 a3=c000380bf8 items=0 ppid=1624 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:03.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333034616430653435663661313962613036343836633362356164 Oct 2 19:29:03.892000 audit: BPF prog-id=75 op=UNLOAD Oct 2 19:29:03.892000 audit: BPF prog-id=74 op=UNLOAD Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: 
AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { perfmon } for pid=1644 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit[1644]: AVC avc: denied { bpf } for pid=1644 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.892000 audit: BPF prog-id=76 op=LOAD Oct 2 19:29:03.892000 audit[1644]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000145c30 a2=78 a3=c000381008 items=0 ppid=1624 pid=1644 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:03.892000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3464333034616430653435663661313962613036343836633362356164 Oct 2 19:29:03.967000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.967000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.967000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.967000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.967000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.967000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:03.967000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.013000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.034000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.034000 audit: BPF prog-id=77 op=LOAD Oct 2 19:29:04.048714 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount594232941.mount: Deactivated successfully. Oct 2 19:29:04.050000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.050000 audit[1645]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001c56b0 a2=3c a3=c items=0 ppid=1627 pid=1645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:04.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353237343735346666393166653334393931323861626435636630 Oct 2 19:29:04.051000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.051000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.051000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.051000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.051000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.051000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.051000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.051000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.051000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.051000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.051000 audit: BPF prog-id=78 op=LOAD Oct 2 19:29:04.051000 audit[1645]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001c59d8 a2=78 a3=c00025b6f0 items=0 ppid=1627 pid=1645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:04.051000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353237343735346666393166653334393931323861626435636630 Oct 2 19:29:04.052000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.052000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.052000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.052000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.052000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.052000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.052000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.052000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.052000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.052000 audit: BPF prog-id=79 op=LOAD Oct 2 19:29:04.052000 audit[1645]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001c5770 a2=78 a3=c00025b738 items=0 ppid=1627 pid=1645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:04.052000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353237343735346666393166653334393931323861626435636630 Oct 2 19:29:04.053000 audit: BPF prog-id=79 op=UNLOAD Oct 2 19:29:04.053000 audit: BPF prog-id=78 op=UNLOAD Oct 2 19:29:04.053000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.053000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.053000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.053000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.053000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.053000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.053000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.053000 audit[1645]: AVC avc: denied { perfmon } for pid=1645 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.053000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.053000 audit[1645]: AVC avc: denied { bpf } for pid=1645 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:04.053000 audit: BPF prog-id=80 op=LOAD Oct 2 19:29:04.053000 audit[1645]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001c5c30 a2=78 a3=c00025bb48 items=0 ppid=1627 pid=1645 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:04.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3131353237343735346666393166653334393931323861626435636630 Oct 2 19:29:04.072344 env[1136]: time="2023-10-02T19:29:04.072298855Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ngtls,Uid:2cb12e3e-9253-4b05-9ea0-9a4e7025cfa0,Namespace:kube-system,Attempt:0,} returns sandbox id \"115274754ff91fe3499128abd5cf0c609d042fb9825b521882e961c01574a1e9\"" Oct 2 19:29:04.849714 kubelet[1523]: E1002 19:29:04.849674 
1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:04.866864 kubelet[1523]: E1002 19:29:04.866781 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:04.979393 kubelet[1523]: E1002 19:29:04.979344 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:05.867312 kubelet[1523]: E1002 19:29:05.867243 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:06.867504 kubelet[1523]: E1002 19:29:06.867450 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:07.867778 kubelet[1523]: E1002 19:29:07.867706 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:08.868582 kubelet[1523]: E1002 19:29:08.868527 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:09.208342 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3421675860.mount: Deactivated successfully. Oct 2 19:29:09.869406 kubelet[1523]: E1002 19:29:09.869315 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:09.980579 kubelet[1523]: E1002 19:29:09.980537 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:10.870153 kubelet[1523]: E1002 19:29:10.870063 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:11.871108 kubelet[1523]: E1002 19:29:11.871019 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:12.459080 env[1136]: time="2023-10-02T19:29:12.459005453Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:12.462211 env[1136]: time="2023-10-02T19:29:12.462154143Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:12.474257 env[1136]: time="2023-10-02T19:29:12.474186035Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b\" returns image reference \"sha256:526bd4754c9cd45a9602873f814648239ebf8405ea2b401f5e7a3546f7310d88\"" Oct 2 19:29:12.475786 env[1136]: time="2023-10-02T19:29:12.474685093Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:12.476419 env[1136]: time="2023-10-02T19:29:12.476362745Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\"" Oct 2 19:29:12.477973 env[1136]: 
time="2023-10-02T19:29:12.477928636Z" level=info msg="CreateContainer within sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:29:12.493221 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3616537592.mount: Deactivated successfully. Oct 2 19:29:12.505679 env[1136]: time="2023-10-02T19:29:12.505600324Z" level=info msg="CreateContainer within sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b\"" Oct 2 19:29:12.507019 env[1136]: time="2023-10-02T19:29:12.506948326Z" level=info msg="StartContainer for \"5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b\"" Oct 2 19:29:12.544098 systemd[1]: Started cri-containerd-5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b.scope. Oct 2 19:29:12.562174 systemd[1]: cri-containerd-5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b.scope: Deactivated successfully. Oct 2 19:29:12.871617 kubelet[1523]: E1002 19:29:12.871551 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:13.489791 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b-rootfs.mount: Deactivated successfully. Oct 2 19:29:13.792828 update_engine[1124]: I1002 19:29:13.792625 1124 update_attempter.cc:505] Updating boot flags... Oct 2 19:29:13.872584 kubelet[1523]: E1002 19:29:13.872490 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:14.383201 env[1136]: time="2023-10-02T19:29:14.383127862Z" level=info msg="shim disconnected" id=5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b Oct 2 19:29:14.383739 env[1136]: time="2023-10-02T19:29:14.383204309Z" level=warning msg="cleaning up after shim disconnected" id=5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b namespace=k8s.io Oct 2 19:29:14.383739 env[1136]: time="2023-10-02T19:29:14.383219056Z" level=info msg="cleaning up dead shim" Oct 2 19:29:14.433097 env[1136]: time="2023-10-02T19:29:14.433021088Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:29:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1738 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:29:14Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:29:14Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:29:14.433845 env[1136]: time="2023-10-02T19:29:14.433678983Z" level=error msg="copy shim log" error="read /proc/self/fd/43: file already closed" Oct 2 19:29:14.434397 env[1136]: time="2023-10-02T19:29:14.434340460Z" level=error msg="Failed to pipe stdout of container \"5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b\"" error="reading from a closed fifo" Oct 2 19:29:14.434577 env[1136]: time="2023-10-02T19:29:14.434412035Z" level=error msg="Failed to pipe stderr of container 
\"5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b\"" error="reading from a closed fifo" Oct 2 19:29:14.437709 env[1136]: time="2023-10-02T19:29:14.437638488Z" level=error msg="StartContainer for \"5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:29:14.439056 kubelet[1523]: E1002 19:29:14.439009 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b" Oct 2 19:29:14.439241 kubelet[1523]: E1002 19:29:14.439219 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:29:14.439241 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:29:14.439241 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 19:29:14.439241 kubelet[1523]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lrllg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:29:14.439649 kubelet[1523]: E1002 19:29:14.439310 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:29:14.872835 kubelet[1523]: E1002 19:29:14.872763 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:14.981296 kubelet[1523]: E1002 19:29:14.981260 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:15.078943 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1629673184.mount: Deactivated successfully. Oct 2 19:29:15.214059 env[1136]: time="2023-10-02T19:29:15.213907116Z" level=info msg="CreateContainer within sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:29:15.240652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3498881569.mount: Deactivated successfully. Oct 2 19:29:15.259116 env[1136]: time="2023-10-02T19:29:15.259029711Z" level=info msg="CreateContainer within sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586\"" Oct 2 19:29:15.260326 env[1136]: time="2023-10-02T19:29:15.260267928Z" level=info msg="StartContainer for \"18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586\"" Oct 2 19:29:15.309184 systemd[1]: Started cri-containerd-18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586.scope. Oct 2 19:29:15.332190 systemd[1]: cri-containerd-18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586.scope: Deactivated successfully. 
Oct 2 19:29:15.445188 env[1136]: time="2023-10-02T19:29:15.445119210Z" level=info msg="shim disconnected" id=18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586 Oct 2 19:29:15.446006 env[1136]: time="2023-10-02T19:29:15.445963211Z" level=warning msg="cleaning up after shim disconnected" id=18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586 namespace=k8s.io Oct 2 19:29:15.446153 env[1136]: time="2023-10-02T19:29:15.446129891Z" level=info msg="cleaning up dead shim" Oct 2 19:29:15.474072 env[1136]: time="2023-10-02T19:29:15.473924586Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:29:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1779 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:29:15Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:29:15.474653 env[1136]: time="2023-10-02T19:29:15.474574584Z" level=error msg="copy shim log" error="read /proc/self/fd/45: file already closed" Oct 2 19:29:15.475104 env[1136]: time="2023-10-02T19:29:15.475056409Z" level=error msg="Failed to pipe stdout of container \"18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586\"" error="reading from a closed fifo" Oct 2 19:29:15.475589 env[1136]: time="2023-10-02T19:29:15.475254902Z" level=error msg="Failed to pipe stderr of container \"18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586\"" error="reading from a closed fifo" Oct 2 19:29:15.477971 env[1136]: time="2023-10-02T19:29:15.477921171Z" level=error msg="StartContainer for \"18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:29:15.478363 kubelet[1523]: E1002 19:29:15.478325 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586" Oct 2 19:29:15.478939 kubelet[1523]: E1002 19:29:15.478909 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:29:15.478939 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:29:15.478939 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 19:29:15.478939 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lrllg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:29:15.479378 kubelet[1523]: E1002 19:29:15.478977 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:29:15.628927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount273590768.mount: Deactivated successfully. 
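Both mount-cgroup attempts above fail in the same place: runc cannot write the container's SELinux label before exec, so the shim never gets an init pid and the StartContainer RPC returns RunContainerError. The write that fails is the /proc/self/attr/keycreate one quoted in the error. A tiny Go probe of that kernel interface is sketched below; the label string is assembled from the spc_t/s0 options in the container spec above and the system_u:system_r context in the surrounding audit records, so treat it as an assumption rather than the exact string runc writes.

package main

import (
    "fmt"
    "os"
)

func main() {
    // Assumed full label; only the type (spc_t) and level (s0) appear in the log.
    label := "system_u:system_r:spc_t:s0"
    err := os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0o644)
    if err != nil {
        // On a host whose loaded policy rejects this label, the write fails
        // with EINVAL, matching the "invalid argument" in the log above.
        fmt.Println("keycreate write failed:", err)
        return
    }
    fmt.Println("keycreate label accepted:", label)
}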
Oct 2 19:29:15.746060 env[1136]: time="2023-10-02T19:29:15.745246443Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:15.748744 env[1136]: time="2023-10-02T19:29:15.748691646Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:15.751237 env[1136]: time="2023-10-02T19:29:15.751185704Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.25.14,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:15.753297 env[1136]: time="2023-10-02T19:29:15.753257053Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:4a23f328943342be6a3eeda75cc7a01d175bcf8b096611c97d2aa14c843cf326,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:29:15.753920 env[1136]: time="2023-10-02T19:29:15.753879538Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.25.14\" returns image reference \"sha256:b2d7e01cd611a8a377680226224d6d7f70eea58e8e603b1874585a279866f6a2\"" Oct 2 19:29:15.756758 env[1136]: time="2023-10-02T19:29:15.756721757Z" level=info msg="CreateContainer within sandbox \"115274754ff91fe3499128abd5cf0c609d042fb9825b521882e961c01574a1e9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Oct 2 19:29:15.774727 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4054303030.mount: Deactivated successfully. Oct 2 19:29:15.787337 env[1136]: time="2023-10-02T19:29:15.787264870Z" level=info msg="CreateContainer within sandbox \"115274754ff91fe3499128abd5cf0c609d042fb9825b521882e961c01574a1e9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b9e8d60bcc91db25c669abb6e4de175060f079153a47981e09d35c51956c52f7\"" Oct 2 19:29:15.788107 env[1136]: time="2023-10-02T19:29:15.788058202Z" level=info msg="StartContainer for \"b9e8d60bcc91db25c669abb6e4de175060f079153a47981e09d35c51956c52f7\"" Oct 2 19:29:15.816712 systemd[1]: Started cri-containerd-b9e8d60bcc91db25c669abb6e4de175060f079153a47981e09d35c51956c52f7.scope. 
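The PullImage/CreateContainer/StartContainer sequence above is the ordinary CRI path for the kube-proxy container, and unlike the cilium init container it completes (the "returns successfully" line appears further down, after the audit noise). A hedged Go sketch for confirming the pulled image over the CRI image service follows; the socket path is an assumption, and the image reference registry.k8s.io/kube-proxy:v1.25.14 is taken from the PullImage line in the log.

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // Assumed containerd CRI socket path, as in the earlier sketch.
    conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    images := runtimeapi.NewImageServiceClient(conn)
    ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
    defer cancel()

    // Ask the runtime whether the image pulled in the log is present locally.
    resp, err := images.ImageStatus(ctx, &runtimeapi.ImageStatusRequest{
        Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.25.14"},
    })
    if err != nil {
        log.Fatal(err)
    }
    if resp.Image == nil {
        fmt.Println("image not present")
        return
    }
    fmt.Println("image id:", resp.Image.Id)
}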
Oct 2 19:29:15.869977 kernel: kauditd_printk_skb: 106 callbacks suppressed Oct 2 19:29:15.870172 kernel: audit: type=1400 audit(1696274955.841:679): avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.841000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.841000 audit[1800]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001176b0 a2=3c a3=8 items=0 ppid=1627 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:15.903170 kernel: audit: type=1300 audit(1696274955.841:679): arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001176b0 a2=3c a3=8 items=0 ppid=1627 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:15.903581 kubelet[1523]: E1002 19:29:15.903389 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:15.904093 kernel: audit: type=1327 audit(1696274955.841:679): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239653864363062636339316462323563363639616262366534646531 Oct 2 19:29:15.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239653864363062636339316462323563363639616262366534646531 Oct 2 19:29:15.841000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.953916 kernel: audit: type=1400 audit(1696274955.841:680): avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.954081 kernel: audit: type=1400 audit(1696274955.841:680): avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.841000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.841000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.998847 kernel: audit: type=1400 audit(1696274955.841:680): avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.841000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:16.003416 env[1136]: time="2023-10-02T19:29:16.003357042Z" level=info msg="StartContainer for \"b9e8d60bcc91db25c669abb6e4de175060f079153a47981e09d35c51956c52f7\" returns successfully" Oct 2 19:29:15.841000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:16.043436 kernel: audit: type=1400 audit(1696274955.841:680): avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:16.043605 kernel: audit: type=1400 audit(1696274955.841:680): avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:16.047883 kernel: audit: type=1400 audit(1696274955.841:680): avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.841000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.841000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:16.085669 kernel: audit: type=1400 audit(1696274955.841:680): avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.841000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.841000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.841000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.841000 audit: BPF prog-id=81 op=LOAD Oct 2 19:29:15.841000 audit[1800]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001179d8 a2=78 a3=c0002a0850 items=0 ppid=1627 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:15.841000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239653864363062636339316462323563363639616262366534646531 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { bpf } for pid=1800 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit: BPF prog-id=82 op=LOAD Oct 2 19:29:15.868000 audit[1800]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000117770 a2=78 a3=c0002a0898 items=0 ppid=1627 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:15.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239653864363062636339316462323563363639616262366534646531 Oct 2 19:29:15.868000 audit: BPF prog-id=82 op=UNLOAD Oct 2 19:29:15.868000 audit: BPF prog-id=81 op=UNLOAD Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 
comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { perfmon } for pid=1800 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit[1800]: AVC avc: denied { bpf } for pid=1800 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:29:15.868000 audit: BPF prog-id=83 op=LOAD Oct 2 19:29:15.868000 audit[1800]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000117c30 a2=78 a3=c0002a0928 items=0 ppid=1627 pid=1800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:15.868000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239653864363062636339316462323563363639616262366534646531 Oct 2 19:29:16.123028 kernel: IPVS: Registered protocols (TCP, UDP, SCTP, AH, ESP) Oct 2 19:29:16.123291 kernel: IPVS: Connection hash table configured (size=4096, memory=32Kbytes) Oct 2 19:29:16.123344 kernel: IPVS: ipvs loaded. Oct 2 19:29:16.140872 kernel: IPVS: [rr] scheduler registered. Oct 2 19:29:16.154853 kernel: IPVS: [wrr] scheduler registered. Oct 2 19:29:16.168159 kernel: IPVS: [sh] scheduler registered. 
Oct 2 19:29:16.213125 kubelet[1523]: I1002 19:29:16.213089 1523 scope.go:115] "RemoveContainer" containerID="5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b" Oct 2 19:29:16.213712 kubelet[1523]: I1002 19:29:16.213677 1523 scope.go:115] "RemoveContainer" containerID="5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b" Oct 2 19:29:16.216579 env[1136]: time="2023-10-02T19:29:16.216520812Z" level=info msg="RemoveContainer for \"5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b\"" Oct 2 19:29:16.218512 env[1136]: time="2023-10-02T19:29:16.218458326Z" level=info msg="RemoveContainer for \"5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b\"" Oct 2 19:29:16.218673 env[1136]: time="2023-10-02T19:29:16.218596197Z" level=error msg="RemoveContainer for \"5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b\" failed" error="failed to set removing state for container \"5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b\": container is already in removing state" Oct 2 19:29:16.219077 kubelet[1523]: E1002 19:29:16.219052 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b\": container is already in removing state" containerID="5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b" Oct 2 19:29:16.219216 kubelet[1523]: E1002 19:29:16.219111 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b": container is already in removing state; Skipping pod "cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)" Oct 2 19:29:16.219538 kubelet[1523]: E1002 19:29:16.219512 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:29:16.223599 env[1136]: time="2023-10-02T19:29:16.223549252Z" level=info msg="RemoveContainer for \"5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b\" returns successfully" Oct 2 19:29:16.229000 audit[1858]: NETFILTER_CFG table=mangle:35 family=2 entries=1 op=nft_register_chain pid=1858 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.229000 audit[1858]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff55115630 a2=0 a3=7fff5511561c items=0 ppid=1810 pid=1858 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.229000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:29:16.232000 audit[1859]: NETFILTER_CFG table=nat:36 family=2 entries=1 op=nft_register_chain pid=1859 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.232000 audit[1859]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffd95e0b630 a2=0 a3=7ffd95e0b61c items=0 ppid=1810 pid=1859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.232000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:29:16.234000 audit[1862]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_chain pid=1862 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.234000 audit[1862]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7fff370dedd0 a2=0 a3=7fff370dedbc items=0 ppid=1810 pid=1862 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.234000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:29:16.236000 audit[1860]: NETFILTER_CFG table=mangle:38 family=10 entries=1 op=nft_register_chain pid=1860 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.236000 audit[1860]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd77daabb0 a2=0 a3=7ffd77daab9c items=0 ppid=1810 pid=1860 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Oct 2 19:29:16.239000 audit[1863]: NETFILTER_CFG table=nat:39 family=10 entries=1 op=nft_register_chain pid=1863 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.239000 audit[1863]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc8d1c63c0 a2=0 a3=7ffc8d1c63ac items=0 ppid=1810 pid=1863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.239000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Oct 2 19:29:16.242000 audit[1864]: NETFILTER_CFG table=filter:40 family=10 entries=1 op=nft_register_chain pid=1864 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.242000 audit[1864]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffed15889b0 a2=0 a3=7ffed158899c items=0 ppid=1810 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.242000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Oct 2 19:29:16.334000 audit[1865]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_chain pid=1865 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.334000 audit[1865]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fff3684c930 a2=0 a3=7fff3684c91c items=0 ppid=1810 pid=1865 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.334000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:29:16.338000 audit[1867]: NETFILTER_CFG table=filter:42 family=2 entries=1 op=nft_register_rule pid=1867 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.338000 audit[1867]: SYSCALL arch=c000003e syscall=46 success=yes exit=752 a0=3 a1=7ffee529c1c0 a2=0 a3=7ffee529c1ac items=0 ppid=1810 pid=1867 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.338000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Oct 2 19:29:16.343000 audit[1870]: NETFILTER_CFG table=filter:43 family=2 entries=2 op=nft_register_chain pid=1870 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.343000 audit[1870]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd67887c70 a2=0 a3=7ffd67887c5c items=0 ppid=1810 pid=1870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.343000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Oct 2 19:29:16.344000 audit[1871]: NETFILTER_CFG table=filter:44 family=2 entries=1 op=nft_register_chain pid=1871 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.344000 audit[1871]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe10403490 a2=0 a3=7ffe1040347c items=0 ppid=1810 pid=1871 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.344000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:29:16.348000 audit[1873]: NETFILTER_CFG table=filter:45 family=2 entries=1 op=nft_register_rule pid=1873 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.348000 audit[1873]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffd5669ae50 a2=0 a3=7ffd5669ae3c items=0 ppid=1810 pid=1873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.348000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:29:16.349000 audit[1874]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_chain pid=1874 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.349000 audit[1874]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffee93b2bd0 a2=0 a3=7ffee93b2bbc 
items=0 ppid=1810 pid=1874 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:29:16.353000 audit[1876]: NETFILTER_CFG table=filter:47 family=2 entries=1 op=nft_register_rule pid=1876 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.353000 audit[1876]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff94504b20 a2=0 a3=7fff94504b0c items=0 ppid=1810 pid=1876 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.353000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:29:16.359000 audit[1879]: NETFILTER_CFG table=filter:48 family=2 entries=1 op=nft_register_rule pid=1879 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.359000 audit[1879]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7fff1d9547a0 a2=0 a3=7fff1d95478c items=0 ppid=1810 pid=1879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.359000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Oct 2 19:29:16.361000 audit[1880]: NETFILTER_CFG table=filter:49 family=2 entries=1 op=nft_register_chain pid=1880 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.361000 audit[1880]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffe608c1c60 a2=0 a3=7ffe608c1c4c items=0 ppid=1810 pid=1880 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.361000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:29:16.364000 audit[1882]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_rule pid=1882 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.364000 audit[1882]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff050e2520 a2=0 a3=7fff050e250c items=0 ppid=1810 pid=1882 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.364000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:29:16.366000 audit[1883]: NETFILTER_CFG table=filter:51 family=2 entries=1 
op=nft_register_chain pid=1883 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.366000 audit[1883]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffdf11229e0 a2=0 a3=7ffdf11229cc items=0 ppid=1810 pid=1883 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.366000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:29:16.370000 audit[1885]: NETFILTER_CFG table=filter:52 family=2 entries=1 op=nft_register_rule pid=1885 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.370000 audit[1885]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7fffad62fab0 a2=0 a3=7fffad62fa9c items=0 ppid=1810 pid=1885 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.370000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:29:16.375000 audit[1888]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_rule pid=1888 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.375000 audit[1888]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe9ccc8f10 a2=0 a3=7ffe9ccc8efc items=0 ppid=1810 pid=1888 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.375000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:29:16.383000 audit[1891]: NETFILTER_CFG table=filter:54 family=2 entries=1 op=nft_register_rule pid=1891 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.383000 audit[1891]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd79768a90 a2=0 a3=7ffd79768a7c items=0 ppid=1810 pid=1891 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.383000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:29:16.384000 audit[1892]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=1892 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.384000 audit[1892]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffd485249a0 a2=0 a3=7ffd4852498c items=0 ppid=1810 pid=1892 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.384000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:29:16.388000 audit[1894]: NETFILTER_CFG table=nat:56 family=2 entries=2 op=nft_register_chain pid=1894 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.388000 audit[1894]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7ffe91eefda0 a2=0 a3=7ffe91eefd8c items=0 ppid=1810 pid=1894 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.388000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:29:16.392000 audit[1897]: NETFILTER_CFG table=nat:57 family=2 entries=2 op=nft_register_chain pid=1897 subj=system_u:system_r:kernel_t:s0 comm="iptables" Oct 2 19:29:16.392000 audit[1897]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7fff47ea1340 a2=0 a3=7fff47ea132c items=0 ppid=1810 pid=1897 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.392000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:29:16.408000 audit[1901]: NETFILTER_CFG table=filter:58 family=2 entries=6 op=nft_register_rule pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:29:16.408000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=4028 a0=3 a1=7fff82bf2df0 a2=0 a3=7fff82bf2ddc items=0 ppid=1810 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.408000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:29:16.419000 audit[1901]: NETFILTER_CFG table=nat:59 family=2 entries=17 op=nft_register_chain pid=1901 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Oct 2 19:29:16.419000 audit[1901]: SYSCALL arch=c000003e syscall=46 success=yes exit=5340 a0=3 a1=7fff82bf2df0 a2=0 a3=7fff82bf2ddc items=0 ppid=1810 pid=1901 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.419000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:29:16.429000 audit[1905]: NETFILTER_CFG table=filter:60 family=10 entries=1 op=nft_register_chain pid=1905 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.429000 audit[1905]: SYSCALL arch=c000003e syscall=46 success=yes exit=108 a0=3 a1=7fffb5f651c0 a2=0 a3=7fffb5f651ac items=0 ppid=1810 pid=1905 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.429000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Oct 2 19:29:16.433000 audit[1907]: NETFILTER_CFG table=filter:61 family=10 entries=2 op=nft_register_chain pid=1907 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.433000 audit[1907]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7fff68145f00 a2=0 a3=7fff68145eec items=0 ppid=1810 pid=1907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.433000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Oct 2 19:29:16.438000 audit[1910]: NETFILTER_CFG table=filter:62 family=10 entries=2 op=nft_register_chain pid=1910 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.438000 audit[1910]: SYSCALL arch=c000003e syscall=46 success=yes exit=836 a0=3 a1=7ffd1111f880 a2=0 a3=7ffd1111f86c items=0 ppid=1810 pid=1910 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.438000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Oct 2 19:29:16.439000 audit[1911]: NETFILTER_CFG table=filter:63 family=10 entries=1 op=nft_register_chain pid=1911 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.439000 audit[1911]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7fffba479520 a2=0 a3=7fffba47950c items=0 ppid=1810 pid=1911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.439000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Oct 2 19:29:16.443000 audit[1913]: NETFILTER_CFG table=filter:64 family=10 entries=1 op=nft_register_rule pid=1913 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.443000 audit[1913]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7fff8e9d9160 a2=0 a3=7fff8e9d914c items=0 ppid=1810 pid=1913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.443000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Oct 2 19:29:16.445000 audit[1914]: NETFILTER_CFG table=filter:65 family=10 entries=1 op=nft_register_chain pid=1914 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.445000 audit[1914]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffdc7ada8c0 a2=0 a3=7ffdc7ada8ac items=0 ppid=1810 pid=1914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.445000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Oct 2 19:29:16.448000 audit[1916]: NETFILTER_CFG table=filter:66 family=10 entries=1 op=nft_register_rule pid=1916 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.448000 audit[1916]: SYSCALL arch=c000003e syscall=46 success=yes exit=744 a0=3 a1=7ffe9bd2b8b0 a2=0 a3=7ffe9bd2b89c items=0 ppid=1810 pid=1916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.448000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Oct 2 19:29:16.453000 audit[1919]: NETFILTER_CFG table=filter:67 family=10 entries=2 op=nft_register_chain pid=1919 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.453000 audit[1919]: SYSCALL arch=c000003e syscall=46 success=yes exit=828 a0=3 a1=7ffee9f4f030 a2=0 a3=7ffee9f4f01c items=0 ppid=1810 pid=1919 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.453000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Oct 2 19:29:16.455000 audit[1920]: NETFILTER_CFG table=filter:68 family=10 entries=1 op=nft_register_chain pid=1920 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.455000 audit[1920]: SYSCALL arch=c000003e syscall=46 success=yes exit=100 a0=3 a1=7ffc9083f270 a2=0 a3=7ffc9083f25c items=0 ppid=1810 pid=1920 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.455000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Oct 2 19:29:16.460000 audit[1922]: NETFILTER_CFG table=filter:69 family=10 entries=1 op=nft_register_rule pid=1922 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.460000 audit[1922]: SYSCALL arch=c000003e syscall=46 success=yes exit=528 a0=3 a1=7ffcdcdad010 a2=0 a3=7ffcdcdacffc items=0 ppid=1810 pid=1922 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.460000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Oct 2 19:29:16.462000 audit[1923]: NETFILTER_CFG table=filter:70 family=10 entries=1 op=nft_register_chain pid=1923 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.462000 audit[1923]: SYSCALL arch=c000003e syscall=46 success=yes exit=104 a0=3 a1=7ffd55307b30 a2=0 a3=7ffd55307b1c items=0 ppid=1810 pid=1923 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.462000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Oct 2 19:29:16.466000 audit[1925]: NETFILTER_CFG table=filter:71 family=10 entries=1 op=nft_register_rule pid=1925 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.466000 audit[1925]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffca796e720 a2=0 a3=7ffca796e70c items=0 ppid=1810 pid=1925 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.466000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Oct 2 19:29:16.471000 audit[1928]: NETFILTER_CFG table=filter:72 family=10 entries=1 op=nft_register_rule pid=1928 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.471000 audit[1928]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffe66bbc550 a2=0 a3=7ffe66bbc53c items=0 ppid=1810 pid=1928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.471000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Oct 2 19:29:16.476000 audit[1931]: NETFILTER_CFG table=filter:73 family=10 entries=1 op=nft_register_rule pid=1931 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.476000 audit[1931]: SYSCALL arch=c000003e syscall=46 success=yes exit=748 a0=3 a1=7ffd44d33680 a2=0 a3=7ffd44d3366c items=0 ppid=1810 pid=1931 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.476000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Oct 2 19:29:16.477000 audit[1932]: NETFILTER_CFG table=nat:74 family=10 entries=1 op=nft_register_chain pid=1932 subj=system_u:system_r:kernel_t:s0 
comm="ip6tables" Oct 2 19:29:16.477000 audit[1932]: SYSCALL arch=c000003e syscall=46 success=yes exit=96 a0=3 a1=7ffdc873b250 a2=0 a3=7ffdc873b23c items=0 ppid=1810 pid=1932 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.477000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Oct 2 19:29:16.481000 audit[1934]: NETFILTER_CFG table=nat:75 family=10 entries=2 op=nft_register_chain pid=1934 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.481000 audit[1934]: SYSCALL arch=c000003e syscall=46 success=yes exit=600 a0=3 a1=7fff25eab750 a2=0 a3=7fff25eab73c items=0 ppid=1810 pid=1934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.481000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:29:16.486000 audit[1937]: NETFILTER_CFG table=nat:76 family=10 entries=2 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Oct 2 19:29:16.486000 audit[1937]: SYSCALL arch=c000003e syscall=46 success=yes exit=608 a0=3 a1=7ffe6be77ae0 a2=0 a3=7ffe6be77acc items=0 ppid=1810 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.486000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Oct 2 19:29:16.494000 audit[1941]: NETFILTER_CFG table=filter:77 family=10 entries=3 op=nft_register_rule pid=1941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:29:16.494000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=1916 a0=3 a1=7ffd5a3cce30 a2=0 a3=7ffd5a3cce1c items=0 ppid=1810 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.494000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:29:16.494000 audit[1941]: NETFILTER_CFG table=nat:78 family=10 entries=10 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Oct 2 19:29:16.494000 audit[1941]: SYSCALL arch=c000003e syscall=46 success=yes exit=1860 a0=3 a1=7ffd5a3cce30 a2=0 a3=7ffd5a3cce1c items=0 ppid=1810 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:29:16.494000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Oct 2 19:29:16.904314 kubelet[1523]: E1002 19:29:16.904242 1523 file_linux.go:61] "Unable to 
read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:17.222936 kubelet[1523]: E1002 19:29:17.222600 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:29:17.505152 kubelet[1523]: W1002 19:29:17.504801 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da7fc87_301b_4122_9373_183d45cbc169.slice/cri-containerd-5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b.scope WatchSource:0}: container "5b237218461ad7d8ce618e9aae995fda1f2ab099d7df1d19af0640e1a428338b" in namespace "k8s.io": not found Oct 2 19:29:17.905479 kubelet[1523]: E1002 19:29:17.905399 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:18.906245 kubelet[1523]: E1002 19:29:18.906174 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:19.906451 kubelet[1523]: E1002 19:29:19.906355 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:19.982724 kubelet[1523]: E1002 19:29:19.982680 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:20.614124 kubelet[1523]: W1002 19:29:20.614069 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da7fc87_301b_4122_9373_183d45cbc169.slice/cri-containerd-18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586.scope WatchSource:0}: task 18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586 not found: not found Oct 2 19:29:20.907168 kubelet[1523]: E1002 19:29:20.907002 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:21.907667 kubelet[1523]: E1002 19:29:21.907590 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:22.908730 kubelet[1523]: E1002 19:29:22.908665 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:23.909772 kubelet[1523]: E1002 19:29:23.909689 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:24.849416 kubelet[1523]: E1002 19:29:24.849345 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:24.910705 kubelet[1523]: E1002 19:29:24.910633 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:24.983177 kubelet[1523]: E1002 19:29:24.983118 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:25.911527 kubelet[1523]: E1002 19:29:25.911458 1523 file_linux.go:61] "Unable 
to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:26.912645 kubelet[1523]: E1002 19:29:26.912574 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:27.913091 kubelet[1523]: E1002 19:29:27.913022 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:28.913257 kubelet[1523]: E1002 19:29:28.913184 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:29.913784 kubelet[1523]: E1002 19:29:29.913712 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:29.984131 kubelet[1523]: E1002 19:29:29.984096 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:30.914019 kubelet[1523]: E1002 19:29:30.913949 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:31.148966 env[1136]: time="2023-10-02T19:29:31.148900258Z" level=info msg="CreateContainer within sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:29:31.166517 env[1136]: time="2023-10-02T19:29:31.166223042Z" level=info msg="CreateContainer within sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4\"" Oct 2 19:29:31.167278 env[1136]: time="2023-10-02T19:29:31.167235667Z" level=info msg="StartContainer for \"9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4\"" Oct 2 19:29:31.205838 systemd[1]: Started cri-containerd-9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4.scope. Oct 2 19:29:31.224592 systemd[1]: cri-containerd-9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4.scope: Deactivated successfully. 
Oct 2 19:29:31.330728 env[1136]: time="2023-10-02T19:29:31.330613989Z" level=info msg="shim disconnected" id=9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4 Oct 2 19:29:31.330728 env[1136]: time="2023-10-02T19:29:31.330703774Z" level=warning msg="cleaning up after shim disconnected" id=9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4 namespace=k8s.io Oct 2 19:29:31.330728 env[1136]: time="2023-10-02T19:29:31.330720230Z" level=info msg="cleaning up dead shim" Oct 2 19:29:31.343046 env[1136]: time="2023-10-02T19:29:31.342977114Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:29:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=1966 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:29:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:29:31.343408 env[1136]: time="2023-10-02T19:29:31.343327372Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:29:31.343679 env[1136]: time="2023-10-02T19:29:31.343617738Z" level=error msg="Failed to pipe stderr of container \"9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4\"" error="reading from a closed fifo" Oct 2 19:29:31.343862 env[1136]: time="2023-10-02T19:29:31.343638232Z" level=error msg="Failed to pipe stdout of container \"9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4\"" error="reading from a closed fifo" Oct 2 19:29:31.346413 env[1136]: time="2023-10-02T19:29:31.346347759Z" level=error msg="StartContainer for \"9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:29:31.346685 kubelet[1523]: E1002 19:29:31.346629 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4" Oct 2 19:29:31.346951 kubelet[1523]: E1002 19:29:31.346788 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:29:31.346951 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:29:31.346951 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 19:29:31.346951 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lrllg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:29:31.347297 kubelet[1523]: E1002 19:29:31.346874 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:29:31.914623 kubelet[1523]: E1002 19:29:31.914551 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:32.160721 systemd[1]: run-containerd-runc-k8s.io-9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4-runc.VKvdQI.mount: Deactivated successfully. Oct 2 19:29:32.160891 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4-rootfs.mount: Deactivated successfully. 
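The kuberuntime_manager error above prints the failing init container as one raw Go struct dump. For readability, here is a minimal sketch that re-expresses the same spec using the k8s.io/api/core/v1 types it is printed from; every field value is copied from the log, fields the dump shows as empty or nil are omitted, and the final fmt call exists only to make the sketch runnable. This is an editorial reconstruction, not output from the cluster.

```go
// Minimal re-expression of the init-container spec dumped above, using the
// k8s.io/api/core/v1 types the kubelet prints it from. All field values are
// copied from the log; fields the dump shows as empty or nil are omitted.
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	mountCgroup := corev1.Container{
		Name:  "mount-cgroup",
		Image: "quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b",
		Command: []string{"sh", "-ec",
			"cp /usr/bin/cilium-mount /hostbin/cilium-mount;\n" +
				"nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt \"${BIN_PATH}/cilium-mount\" $CGROUP_ROOT;\n" +
				"rm /hostbin/cilium-mount\n"},
		Env: []corev1.EnvVar{
			{Name: "CGROUP_ROOT", Value: "/run/cilium/cgroupv2"},
			{Name: "BIN_PATH", Value: "/opt/cni/bin"},
		},
		VolumeMounts: []corev1.VolumeMount{
			{Name: "hostproc", MountPath: "/hostproc"},
			{Name: "cni-path", MountPath: "/hostbin"},
			{Name: "kube-api-access-lrllg", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
		},
		TerminationMessagePath:   "/dev/termination-log",
		TerminationMessagePolicy: corev1.TerminationMessageReadFile, // printed as "File" in the dump
		ImagePullPolicy:          corev1.PullIfNotPresent,
		SecurityContext: &corev1.SecurityContext{
			Capabilities: &corev1.Capabilities{
				Add:  []corev1.Capability{"SYS_ADMIN", "SYS_CHROOT", "SYS_PTRACE"},
				Drop: []corev1.Capability{"ALL"},
			},
			SELinuxOptions: &corev1.SELinuxOptions{Type: "spc_t", Level: "s0"},
		},
	}
	fmt.Printf("init container %q in pod cilium-5rfgw runs: %s\n", mountCgroup.Name, mountCgroup.Command[2])
}
```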
Oct 2 19:29:32.254749 kubelet[1523]: I1002 19:29:32.254208 1523 scope.go:115] "RemoveContainer" containerID="18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586" Oct 2 19:29:32.254749 kubelet[1523]: I1002 19:29:32.254597 1523 scope.go:115] "RemoveContainer" containerID="18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586" Oct 2 19:29:32.256715 env[1136]: time="2023-10-02T19:29:32.256646195Z" level=info msg="RemoveContainer for \"18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586\"" Oct 2 19:29:32.257230 env[1136]: time="2023-10-02T19:29:32.257207965Z" level=info msg="RemoveContainer for \"18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586\"" Oct 2 19:29:32.257383 env[1136]: time="2023-10-02T19:29:32.257306977Z" level=error msg="RemoveContainer for \"18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586\" failed" error="failed to set removing state for container \"18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586\": container is already in removing state" Oct 2 19:29:32.257625 kubelet[1523]: E1002 19:29:32.257600 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586\": container is already in removing state" containerID="18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586" Oct 2 19:29:32.257759 kubelet[1523]: E1002 19:29:32.257644 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586": container is already in removing state; Skipping pod "cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)" Oct 2 19:29:32.258867 kubelet[1523]: E1002 19:29:32.258197 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:29:32.262488 env[1136]: time="2023-10-02T19:29:32.262442961Z" level=info msg="RemoveContainer for \"18f9d3bebbd521f3dc49b2f27596fa84215de2d02b0328fe6e7ce596059af586\" returns successfully" Oct 2 19:29:32.915447 kubelet[1523]: E1002 19:29:32.915371 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:33.916146 kubelet[1523]: E1002 19:29:33.916073 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:34.436423 kubelet[1523]: W1002 19:29:34.436361 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da7fc87_301b_4122_9373_183d45cbc169.slice/cri-containerd-9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4.scope WatchSource:0}: task 9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4 not found: not found Oct 2 19:29:34.916960 kubelet[1523]: E1002 19:29:34.916892 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:34.984983 kubelet[1523]: E1002 19:29:34.984944 1523 kubelet.go:2373] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:35.918015 kubelet[1523]: E1002 19:29:35.917940 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:36.918730 kubelet[1523]: E1002 19:29:36.918654 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:37.919521 kubelet[1523]: E1002 19:29:37.919449 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:38.920018 kubelet[1523]: E1002 19:29:38.919946 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:39.921212 kubelet[1523]: E1002 19:29:39.921142 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:39.986717 kubelet[1523]: E1002 19:29:39.986666 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:40.921772 kubelet[1523]: E1002 19:29:40.921697 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:41.923007 kubelet[1523]: E1002 19:29:41.922918 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:42.924208 kubelet[1523]: E1002 19:29:42.924114 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:43.145770 kubelet[1523]: E1002 19:29:43.145722 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:29:43.925144 kubelet[1523]: E1002 19:29:43.925074 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:44.849697 kubelet[1523]: E1002 19:29:44.849622 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:44.925713 kubelet[1523]: E1002 19:29:44.925645 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:44.988176 kubelet[1523]: E1002 19:29:44.988142 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:45.926615 kubelet[1523]: E1002 19:29:45.926539 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:46.927684 kubelet[1523]: E1002 19:29:46.927612 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:47.928272 kubelet[1523]: E1002 19:29:47.928188 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:48.928893 
kubelet[1523]: E1002 19:29:48.928801 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:49.929559 kubelet[1523]: E1002 19:29:49.929484 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:49.989527 kubelet[1523]: E1002 19:29:49.989487 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:50.930659 kubelet[1523]: E1002 19:29:50.930594 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:51.931538 kubelet[1523]: E1002 19:29:51.931458 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:52.932377 kubelet[1523]: E1002 19:29:52.932289 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:53.933331 kubelet[1523]: E1002 19:29:53.933253 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:54.933519 kubelet[1523]: E1002 19:29:54.933441 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:54.990189 kubelet[1523]: E1002 19:29:54.990141 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:29:55.934405 kubelet[1523]: E1002 19:29:55.934269 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:56.935274 kubelet[1523]: E1002 19:29:56.935197 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:57.149651 env[1136]: time="2023-10-02T19:29:57.149586102Z" level=info msg="CreateContainer within sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:29:57.164823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2099248963.mount: Deactivated successfully. Oct 2 19:29:57.177054 env[1136]: time="2023-10-02T19:29:57.176970587Z" level=info msg="CreateContainer within sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd\"" Oct 2 19:29:57.178158 env[1136]: time="2023-10-02T19:29:57.178098063Z" level=info msg="StartContainer for \"afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd\"" Oct 2 19:29:57.206436 systemd[1]: Started cri-containerd-afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd.scope. Oct 2 19:29:57.223009 systemd[1]: cri-containerd-afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd.scope: Deactivated successfully. 
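Every attempt so far fails at the same point: runc's container init reports `write /proc/self/attr/keycreate: invalid argument`, meaning the kernel rejected the SELinux key-creation label that init tried to set, presumably because the spec above requests SELinuxOptions (Type spc_t) that this host cannot apply. The sketch below is only a hypothetical stand-alone reproduction of that single procfs write, not runc's actual code path, and the full label string in it is an assumption; the log only shows Type spc_t and Level s0.

```go
// Hypothetical stand-alone reproduction of the failing step reported above:
// writing an SELinux key-creation label into procfs. On a host that cannot
// honour the label, the kernel rejects the write with EINVAL, which runc
// surfaces as "write /proc/self/attr/keycreate: invalid argument".
package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumed full label; the container spec in the log only shows Type spc_t, Level s0.
	label := "system_u:system_r:spc_t:s0"
	err := os.WriteFile("/proc/self/attr/keycreate", []byte(label), 0o644)
	fmt.Printf("write %q to /proc/self/attr/keycreate: err=%v\n", label, err)
}
```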
Oct 2 19:29:57.242095 env[1136]: time="2023-10-02T19:29:57.241991001Z" level=info msg="shim disconnected" id=afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd Oct 2 19:29:57.242095 env[1136]: time="2023-10-02T19:29:57.242067931Z" level=warning msg="cleaning up after shim disconnected" id=afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd namespace=k8s.io Oct 2 19:29:57.242095 env[1136]: time="2023-10-02T19:29:57.242084330Z" level=info msg="cleaning up dead shim" Oct 2 19:29:57.254788 env[1136]: time="2023-10-02T19:29:57.254697128Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:29:57Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2009 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:29:57Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:29:57.255175 env[1136]: time="2023-10-02T19:29:57.255091280Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:29:57.258947 env[1136]: time="2023-10-02T19:29:57.258868416Z" level=error msg="Failed to pipe stdout of container \"afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd\"" error="reading from a closed fifo" Oct 2 19:29:57.261014 env[1136]: time="2023-10-02T19:29:57.260949534Z" level=error msg="Failed to pipe stderr of container \"afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd\"" error="reading from a closed fifo" Oct 2 19:29:57.264046 env[1136]: time="2023-10-02T19:29:57.263927724Z" level=error msg="StartContainer for \"afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:29:57.264283 kubelet[1523]: E1002 19:29:57.264252 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd" Oct 2 19:29:57.264447 kubelet[1523]: E1002 19:29:57.264407 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:29:57.264447 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:29:57.264447 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 19:29:57.264447 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lrllg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:29:57.264751 kubelet[1523]: E1002 19:29:57.264467 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:29:57.305376 kubelet[1523]: I1002 19:29:57.305344 1523 scope.go:115] "RemoveContainer" containerID="9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4" Oct 2 19:29:57.305907 kubelet[1523]: I1002 19:29:57.305852 1523 scope.go:115] "RemoveContainer" containerID="9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4" Oct 2 19:29:57.307952 env[1136]: time="2023-10-02T19:29:57.307905275Z" level=info msg="RemoveContainer for \"9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4\"" Oct 2 19:29:57.308368 env[1136]: time="2023-10-02T19:29:57.308328850Z" level=info msg="RemoveContainer for \"9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4\"" Oct 2 19:29:57.308492 env[1136]: time="2023-10-02T19:29:57.308440015Z" level=error msg="RemoveContainer for \"9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4\" failed" error="failed to set removing state for container \"9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4\": container is already in removing state" Oct 2 19:29:57.308658 kubelet[1523]: E1002 19:29:57.308632 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4\": container is already in removing state" 
containerID="9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4" Oct 2 19:29:57.308782 kubelet[1523]: I1002 19:29:57.308687 1523 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4} err="rpc error: code = Unknown desc = failed to set removing state for container \"9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4\": container is already in removing state" Oct 2 19:29:57.312377 env[1136]: time="2023-10-02T19:29:57.312320724Z" level=info msg="RemoveContainer for \"9f10c2e030c7ab0ed695b3b0963b092364a513797095a6f29bb21e0cf6b5a7b4\" returns successfully" Oct 2 19:29:57.312989 kubelet[1523]: E1002 19:29:57.312954 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:29:57.935794 kubelet[1523]: E1002 19:29:57.935723 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:58.161244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd-rootfs.mount: Deactivated successfully. Oct 2 19:29:58.936234 kubelet[1523]: E1002 19:29:58.936162 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:59.937412 kubelet[1523]: E1002 19:29:59.937333 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:29:59.991466 kubelet[1523]: E1002 19:29:59.991426 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:00.348772 kubelet[1523]: W1002 19:30:00.348691 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da7fc87_301b_4122_9373_183d45cbc169.slice/cri-containerd-afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd.scope WatchSource:0}: task afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd not found: not found Oct 2 19:30:00.937539 kubelet[1523]: E1002 19:30:00.937470 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:01.938301 kubelet[1523]: E1002 19:30:01.938229 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:02.939050 kubelet[1523]: E1002 19:30:02.938972 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:03.939516 kubelet[1523]: E1002 19:30:03.939448 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:04.850148 kubelet[1523]: E1002 19:30:04.850096 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:04.939727 kubelet[1523]: E1002 19:30:04.939662 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:30:04.993186 kubelet[1523]: E1002 19:30:04.993141 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:05.940860 kubelet[1523]: E1002 19:30:05.940776 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:06.941776 kubelet[1523]: E1002 19:30:06.941699 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:07.942504 kubelet[1523]: E1002 19:30:07.942435 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:08.942979 kubelet[1523]: E1002 19:30:08.942909 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:09.943975 kubelet[1523]: E1002 19:30:09.943895 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:09.994394 kubelet[1523]: E1002 19:30:09.994324 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:10.944724 kubelet[1523]: E1002 19:30:10.944649 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:11.945642 kubelet[1523]: E1002 19:30:11.945579 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:12.145122 kubelet[1523]: E1002 19:30:12.145072 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:30:12.946241 kubelet[1523]: E1002 19:30:12.946166 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:13.946662 kubelet[1523]: E1002 19:30:13.946586 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:14.947169 kubelet[1523]: E1002 19:30:14.947101 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:14.995638 kubelet[1523]: E1002 19:30:14.995607 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:15.948236 kubelet[1523]: E1002 19:30:15.948166 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:16.949246 kubelet[1523]: E1002 19:30:16.949164 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:17.950275 kubelet[1523]: E1002 19:30:17.950206 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:18.950826 kubelet[1523]: E1002 
19:30:18.950747 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:19.951284 kubelet[1523]: E1002 19:30:19.951181 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:19.997248 kubelet[1523]: E1002 19:30:19.997202 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:20.952072 kubelet[1523]: E1002 19:30:20.952006 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:21.953219 kubelet[1523]: E1002 19:30:21.953144 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:22.953823 kubelet[1523]: E1002 19:30:22.953742 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:23.954823 kubelet[1523]: E1002 19:30:23.954739 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:24.850365 kubelet[1523]: E1002 19:30:24.850292 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:24.955063 kubelet[1523]: E1002 19:30:24.954995 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:24.998201 kubelet[1523]: E1002 19:30:24.998163 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:25.955627 kubelet[1523]: E1002 19:30:25.955545 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:26.145332 kubelet[1523]: E1002 19:30:26.145270 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:30:26.956140 kubelet[1523]: E1002 19:30:26.956065 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:27.957245 kubelet[1523]: E1002 19:30:27.957170 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:28.958100 kubelet[1523]: E1002 19:30:28.958029 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:29.958983 kubelet[1523]: E1002 19:30:29.958906 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:29.999295 kubelet[1523]: E1002 19:30:29.999240 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:30.959755 kubelet[1523]: E1002 19:30:30.959686 1523 file_linux.go:61] "Unable to read config path" err="path does 
not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:31.960918 kubelet[1523]: E1002 19:30:31.960846 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:32.961722 kubelet[1523]: E1002 19:30:32.961648 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:33.962719 kubelet[1523]: E1002 19:30:33.962645 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:34.963441 kubelet[1523]: E1002 19:30:34.963374 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:35.000484 kubelet[1523]: E1002 19:30:35.000443 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:35.964602 kubelet[1523]: E1002 19:30:35.964521 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:36.965480 kubelet[1523]: E1002 19:30:36.965410 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:37.965990 kubelet[1523]: E1002 19:30:37.965917 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:38.967188 kubelet[1523]: E1002 19:30:38.967109 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:39.968210 kubelet[1523]: E1002 19:30:39.968143 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:40.002275 kubelet[1523]: E1002 19:30:40.002222 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:40.148244 env[1136]: time="2023-10-02T19:30:40.148180515Z" level=info msg="CreateContainer within sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:4,}" Oct 2 19:30:40.164110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3135031943.mount: Deactivated successfully. Oct 2 19:30:40.174413 env[1136]: time="2023-10-02T19:30:40.174339008Z" level=info msg="CreateContainer within sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" for &ContainerMetadata{Name:mount-cgroup,Attempt:4,} returns container id \"49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183\"" Oct 2 19:30:40.175528 env[1136]: time="2023-10-02T19:30:40.175487700Z" level=info msg="StartContainer for \"49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183\"" Oct 2 19:30:40.204851 systemd[1]: Started cri-containerd-49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183.scope. Oct 2 19:30:40.219439 systemd[1]: cri-containerd-49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183.scope: Deactivated successfully. 
Oct 2 19:30:40.237157 env[1136]: time="2023-10-02T19:30:40.237067596Z" level=info msg="shim disconnected" id=49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183 Oct 2 19:30:40.237157 env[1136]: time="2023-10-02T19:30:40.237137432Z" level=warning msg="cleaning up after shim disconnected" id=49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183 namespace=k8s.io Oct 2 19:30:40.237157 env[1136]: time="2023-10-02T19:30:40.237154591Z" level=info msg="cleaning up dead shim" Oct 2 19:30:40.250207 env[1136]: time="2023-10-02T19:30:40.250115746Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:30:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2050 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:30:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:30:40.250573 env[1136]: time="2023-10-02T19:30:40.250490512Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:30:40.253961 env[1136]: time="2023-10-02T19:30:40.253886130Z" level=error msg="Failed to pipe stdout of container \"49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183\"" error="reading from a closed fifo" Oct 2 19:30:40.257138 env[1136]: time="2023-10-02T19:30:40.257054894Z" level=error msg="Failed to pipe stderr of container \"49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183\"" error="reading from a closed fifo" Oct 2 19:30:40.259979 env[1136]: time="2023-10-02T19:30:40.259892562Z" level=error msg="StartContainer for \"49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:30:40.260246 kubelet[1523]: E1002 19:30:40.260195 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183" Oct 2 19:30:40.260411 kubelet[1523]: E1002 19:30:40.260339 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:30:40.260411 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:30:40.260411 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 19:30:40.260411 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lrllg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:30:40.260704 kubelet[1523]: E1002 19:30:40.260403 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:30:40.389931 kubelet[1523]: I1002 19:30:40.389899 1523 scope.go:115] "RemoveContainer" containerID="afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd" Oct 2 19:30:40.390534 kubelet[1523]: I1002 19:30:40.390313 1523 scope.go:115] "RemoveContainer" containerID="afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd" Oct 2 19:30:40.391680 env[1136]: time="2023-10-02T19:30:40.391589772Z" level=info msg="RemoveContainer for \"afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd\"" Oct 2 19:30:40.392468 env[1136]: time="2023-10-02T19:30:40.392425543Z" level=info msg="RemoveContainer for \"afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd\"" Oct 2 19:30:40.392607 env[1136]: time="2023-10-02T19:30:40.392558367Z" level=error msg="RemoveContainer for \"afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd\" failed" error="failed to set removing state for container \"afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd\": container is already in removing state" Oct 2 19:30:40.392863 kubelet[1523]: E1002 19:30:40.392801 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd\": container is already in removing state" 
containerID="afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd" Oct 2 19:30:40.392863 kubelet[1523]: E1002 19:30:40.392860 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd": container is already in removing state; Skipping pod "cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)" Oct 2 19:30:40.393289 kubelet[1523]: E1002 19:30:40.393242 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:30:40.396482 env[1136]: time="2023-10-02T19:30:40.396441492Z" level=info msg="RemoveContainer for \"afa0cc417644d08d84e9c59751e2b90d245cbe8b9dbefc4580681dbf734d34dd\" returns successfully" Oct 2 19:30:40.969230 kubelet[1523]: E1002 19:30:40.969162 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:41.159821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183-rootfs.mount: Deactivated successfully. Oct 2 19:30:41.969440 kubelet[1523]: E1002 19:30:41.969369 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:42.970543 kubelet[1523]: E1002 19:30:42.970470 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:43.342361 kubelet[1523]: W1002 19:30:43.342284 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da7fc87_301b_4122_9373_183d45cbc169.slice/cri-containerd-49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183.scope WatchSource:0}: task 49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183 not found: not found Oct 2 19:30:43.971083 kubelet[1523]: E1002 19:30:43.971012 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:44.849992 kubelet[1523]: E1002 19:30:44.849922 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:44.971611 kubelet[1523]: E1002 19:30:44.971542 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:45.003915 kubelet[1523]: E1002 19:30:45.003873 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:45.971975 kubelet[1523]: E1002 19:30:45.971903 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:46.972172 kubelet[1523]: E1002 19:30:46.972096 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:47.972483 kubelet[1523]: E1002 19:30:47.972331 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:30:48.972632 kubelet[1523]: E1002 19:30:48.972541 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:49.973108 kubelet[1523]: E1002 19:30:49.973034 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:50.005535 kubelet[1523]: E1002 19:30:50.005487 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:50.973864 kubelet[1523]: E1002 19:30:50.973771 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:51.974623 kubelet[1523]: E1002 19:30:51.974547 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:52.975303 kubelet[1523]: E1002 19:30:52.975226 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:53.146095 kubelet[1523]: E1002 19:30:53.145539 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:30:53.805039 update_engine[1124]: I1002 19:30:53.804932 1124 prefs.cc:51] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Oct 2 19:30:53.805039 update_engine[1124]: I1002 19:30:53.804996 1124 prefs.cc:51] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Oct 2 19:30:53.806073 update_engine[1124]: I1002 19:30:53.806027 1124 prefs.cc:51] aleph-version not present in /var/lib/update_engine/prefs Oct 2 19:30:53.806726 update_engine[1124]: I1002 19:30:53.806651 1124 omaha_request_params.cc:62] Current group set to lts Oct 2 19:30:53.807112 update_engine[1124]: I1002 19:30:53.806914 1124 update_attempter.cc:495] Already updated boot flags. Skipping. Oct 2 19:30:53.807112 update_engine[1124]: I1002 19:30:53.806934 1124 update_attempter.cc:638] Scheduling an action processor start. 
Oct 2 19:30:53.807112 update_engine[1124]: I1002 19:30:53.806959 1124 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Oct 2 19:30:53.807112 update_engine[1124]: I1002 19:30:53.807003 1124 prefs.cc:51] previous-version not present in /var/lib/update_engine/prefs Oct 2 19:30:53.807112 update_engine[1124]: I1002 19:30:53.807085 1124 omaha_request_action.cc:268] Posting an Omaha request to https://public.update.flatcar-linux.net/v1/update/ Oct 2 19:30:53.807112 update_engine[1124]: I1002 19:30:53.807093 1124 omaha_request_action.cc:269] Request: Oct 2 19:30:53.807112 update_engine[1124]: Oct 2 19:30:53.807112 update_engine[1124]: Oct 2 19:30:53.807112 update_engine[1124]: Oct 2 19:30:53.807112 update_engine[1124]: Oct 2 19:30:53.807112 update_engine[1124]: Oct 2 19:30:53.807112 update_engine[1124]: Oct 2 19:30:53.807112 update_engine[1124]: Oct 2 19:30:53.807112 update_engine[1124]: Oct 2 19:30:53.807112 update_engine[1124]: I1002 19:30:53.807101 1124 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Oct 2 19:30:53.808408 locksmithd[1170]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Oct 2 19:30:53.809351 update_engine[1124]: I1002 19:30:53.809300 1124 libcurl_http_fetcher.cc:174] Setting up curl options for HTTPS Oct 2 19:30:53.809584 update_engine[1124]: I1002 19:30:53.809545 1124 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Oct 2 19:30:53.976360 kubelet[1523]: E1002 19:30:53.976276 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:54.949840 update_engine[1124]: I1002 19:30:54.949702 1124 prefs.cc:51] update-server-cert-0-2 not present in /var/lib/update_engine/prefs Oct 2 19:30:54.950374 update_engine[1124]: I1002 19:30:54.950095 1124 prefs.cc:51] update-server-cert-0-1 not present in /var/lib/update_engine/prefs Oct 2 19:30:54.950374 update_engine[1124]: I1002 19:30:54.950315 1124 prefs.cc:51] update-server-cert-0-0 not present in /var/lib/update_engine/prefs Oct 2 19:30:54.977025 kubelet[1523]: E1002 19:30:54.976946 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:55.006571 kubelet[1523]: E1002 19:30:55.006521 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:30:55.202552 update_engine[1124]: I1002 19:30:55.201972 1124 libcurl_http_fetcher.cc:263] HTTP response code: 200 Oct 2 19:30:55.203806 update_engine[1124]: I1002 19:30:55.203738 1124 libcurl_http_fetcher.cc:320] Transfer completed (200), 314 bytes downloaded Oct 2 19:30:55.203806 update_engine[1124]: I1002 19:30:55.203770 1124 omaha_request_action.cc:619] Omaha request response: Oct 2 19:30:55.203806 update_engine[1124]: Oct 2 19:30:55.212866 update_engine[1124]: I1002 19:30:55.212790 1124 omaha_request_action.cc:409] No update. Oct 2 19:30:55.212866 update_engine[1124]: I1002 19:30:55.212873 1124 action_processor.cc:82] ActionProcessor::ActionComplete: finished OmahaRequestAction, starting OmahaResponseHandlerAction Oct 2 19:30:55.213117 update_engine[1124]: I1002 19:30:55.212884 1124 omaha_response_handler_action.cc:36] There are no updates. Aborting. 
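locksmithd reports update_engine's state as flat key=value lines like the LastCheckedTime entry above. The sketch below splits such a status line into fields (the input string is copied from the log; the parsing itself is generic key=value handling written for illustration, not locksmithd's own code, and treating a non-zero LastCheckedTime as a Unix timestamp is an assumption).

```go
// Splits a locksmithd-style status line (copied from the log above) into
// key=value fields. Generic parsing for illustration, not locksmithd's code.
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	line := `LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0`

	fields := map[string]string{}
	for _, tok := range strings.Fields(line) {
		if k, v, ok := strings.Cut(tok, "="); ok {
			fields[k] = strings.Trim(v, `"`)
		}
	}

	// A non-zero LastCheckedTime is assumed to be a Unix timestamp in seconds.
	if ts, err := strconv.ParseInt(fields["LastCheckedTime"], 10, 64); err == nil && ts > 0 {
		fmt.Println("last checked:", time.Unix(ts, 0).UTC())
	}
	fmt.Println("operation:", fields["CurrentOperation"], "new version:", fields["NewVersion"])
}
```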
Oct 2 19:30:55.213117 update_engine[1124]: I1002 19:30:55.212891 1124 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaResponseHandlerAction action failed. Aborting processing. Oct 2 19:30:55.213117 update_engine[1124]: I1002 19:30:55.212898 1124 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaResponseHandlerAction Oct 2 19:30:55.213117 update_engine[1124]: I1002 19:30:55.212904 1124 update_attempter.cc:302] Processing Done. Oct 2 19:30:55.213117 update_engine[1124]: I1002 19:30:55.212927 1124 update_attempter.cc:338] No update. Oct 2 19:30:55.213117 update_engine[1124]: I1002 19:30:55.212944 1124 update_check_scheduler.cc:74] Next update check in 45m49s Oct 2 19:30:55.213451 locksmithd[1170]: LastCheckedTime=1696275055 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Oct 2 19:30:55.977433 kubelet[1523]: E1002 19:30:55.977355 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:56.978194 kubelet[1523]: E1002 19:30:56.978105 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:57.978638 kubelet[1523]: E1002 19:30:57.978537 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:58.979035 kubelet[1523]: E1002 19:30:58.978960 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:30:59.979272 kubelet[1523]: E1002 19:30:59.979195 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:00.008236 kubelet[1523]: E1002 19:31:00.008173 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:00.979591 kubelet[1523]: E1002 19:31:00.979506 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:01.980303 kubelet[1523]: E1002 19:31:01.980230 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:02.981037 kubelet[1523]: E1002 19:31:02.980941 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:03.981976 kubelet[1523]: E1002 19:31:03.981896 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:04.849335 kubelet[1523]: E1002 19:31:04.849249 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:04.982358 kubelet[1523]: E1002 19:31:04.982288 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:05.009890 kubelet[1523]: E1002 19:31:05.009797 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:05.983345 kubelet[1523]: E1002 19:31:05.983270 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:06.984031 kubelet[1523]: E1002 
19:31:06.983950 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:07.145479 kubelet[1523]: E1002 19:31:07.145198 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:31:07.985154 kubelet[1523]: E1002 19:31:07.985076 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:08.986050 kubelet[1523]: E1002 19:31:08.985976 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:09.986307 kubelet[1523]: E1002 19:31:09.986235 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:10.010627 kubelet[1523]: E1002 19:31:10.010586 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:10.987335 kubelet[1523]: E1002 19:31:10.987263 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:11.987745 kubelet[1523]: E1002 19:31:11.987575 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:12.988403 kubelet[1523]: E1002 19:31:12.988337 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:13.988798 kubelet[1523]: E1002 19:31:13.988725 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:14.989952 kubelet[1523]: E1002 19:31:14.989877 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:15.011586 kubelet[1523]: E1002 19:31:15.011531 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:15.990587 kubelet[1523]: E1002 19:31:15.990515 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:16.991694 kubelet[1523]: E1002 19:31:16.991616 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:17.992412 kubelet[1523]: E1002 19:31:17.992333 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:18.992599 kubelet[1523]: E1002 19:31:18.992522 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:19.147162 kubelet[1523]: E1002 19:31:19.147118 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" 
podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:31:19.993577 kubelet[1523]: E1002 19:31:19.993496 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:20.012488 kubelet[1523]: E1002 19:31:20.012455 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:20.994406 kubelet[1523]: E1002 19:31:20.994337 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:21.995020 kubelet[1523]: E1002 19:31:21.994947 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:22.995436 kubelet[1523]: E1002 19:31:22.995363 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:23.996184 kubelet[1523]: E1002 19:31:23.996109 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:24.849647 kubelet[1523]: E1002 19:31:24.849561 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:24.996802 kubelet[1523]: E1002 19:31:24.996714 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:25.014266 kubelet[1523]: E1002 19:31:25.014217 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:25.998161 kubelet[1523]: E1002 19:31:25.998019 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:26.998552 kubelet[1523]: E1002 19:31:26.998471 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:27.999179 kubelet[1523]: E1002 19:31:27.999104 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:28.999474 kubelet[1523]: E1002 19:31:28.999392 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:30.000008 kubelet[1523]: E1002 19:31:29.999937 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:30.015922 kubelet[1523]: E1002 19:31:30.015866 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:31.000657 kubelet[1523]: E1002 19:31:31.000583 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:31.145946 kubelet[1523]: E1002 19:31:31.145901 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:31:32.001091 kubelet[1523]: 
E1002 19:31:32.001030 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:33.001337 kubelet[1523]: E1002 19:31:33.001258 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:34.001536 kubelet[1523]: E1002 19:31:34.001461 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:35.001702 kubelet[1523]: E1002 19:31:35.001636 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:35.017459 kubelet[1523]: E1002 19:31:35.017421 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:36.002375 kubelet[1523]: E1002 19:31:36.002300 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:37.003195 kubelet[1523]: E1002 19:31:37.003122 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:38.004292 kubelet[1523]: E1002 19:31:38.004218 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:39.004583 kubelet[1523]: E1002 19:31:39.004496 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:40.005590 kubelet[1523]: E1002 19:31:40.005512 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:40.018511 kubelet[1523]: E1002 19:31:40.018480 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:41.006027 kubelet[1523]: E1002 19:31:41.005946 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:42.006418 kubelet[1523]: E1002 19:31:42.006344 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:42.145225 kubelet[1523]: E1002 19:31:42.145167 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:31:43.007074 kubelet[1523]: E1002 19:31:43.006984 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:44.008204 kubelet[1523]: E1002 19:31:44.008137 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:44.850203 kubelet[1523]: E1002 19:31:44.850123 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:45.008354 kubelet[1523]: E1002 19:31:45.008298 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:31:45.019991 kubelet[1523]: E1002 19:31:45.019927 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:46.008583 kubelet[1523]: E1002 19:31:46.008510 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:47.009452 kubelet[1523]: E1002 19:31:47.009373 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:48.010511 kubelet[1523]: E1002 19:31:48.010434 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:49.011057 kubelet[1523]: E1002 19:31:49.010989 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:50.011305 kubelet[1523]: E1002 19:31:50.011232 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:50.021100 kubelet[1523]: E1002 19:31:50.021063 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:51.011993 kubelet[1523]: E1002 19:31:51.011915 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:52.012803 kubelet[1523]: E1002 19:31:52.012725 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:53.013837 kubelet[1523]: E1002 19:31:53.013736 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:54.014450 kubelet[1523]: E1002 19:31:54.014375 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:55.015604 kubelet[1523]: E1002 19:31:55.015525 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:55.022571 kubelet[1523]: E1002 19:31:55.022507 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:31:56.016458 kubelet[1523]: E1002 19:31:56.016384 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:57.017038 kubelet[1523]: E1002 19:31:57.016929 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:57.145128 kubelet[1523]: E1002 19:31:57.145083 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:31:58.017743 kubelet[1523]: E1002 19:31:58.017662 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:31:59.018707 kubelet[1523]: E1002 19:31:59.018635 1523 file_linux.go:61] 
"Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:00.019868 kubelet[1523]: E1002 19:32:00.019765 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:00.023860 kubelet[1523]: E1002 19:32:00.023796 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:01.020117 kubelet[1523]: E1002 19:32:01.020043 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:02.021249 kubelet[1523]: E1002 19:32:02.021197 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:03.021986 kubelet[1523]: E1002 19:32:03.021908 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:04.022365 kubelet[1523]: E1002 19:32:04.022294 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:04.849556 kubelet[1523]: E1002 19:32:04.849491 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:05.023566 kubelet[1523]: E1002 19:32:05.023487 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:05.025495 kubelet[1523]: E1002 19:32:05.025460 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:06.024273 kubelet[1523]: E1002 19:32:06.024193 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:07.025321 kubelet[1523]: E1002 19:32:07.025268 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:08.025544 kubelet[1523]: E1002 19:32:08.025471 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:08.147789 env[1136]: time="2023-10-02T19:32:08.147701074Z" level=info msg="CreateContainer within sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:5,}" Oct 2 19:32:08.165124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount263267066.mount: Deactivated successfully. Oct 2 19:32:08.169941 env[1136]: time="2023-10-02T19:32:08.169879620Z" level=info msg="CreateContainer within sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" for &ContainerMetadata{Name:mount-cgroup,Attempt:5,} returns container id \"931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2\"" Oct 2 19:32:08.171036 env[1136]: time="2023-10-02T19:32:08.170982424Z" level=info msg="StartContainer for \"931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2\"" Oct 2 19:32:08.205028 systemd[1]: Started cri-containerd-931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2.scope. Oct 2 19:32:08.219134 systemd[1]: cri-containerd-931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2.scope: Deactivated successfully. 
Oct 2 19:32:08.228022 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2-rootfs.mount: Deactivated successfully. Oct 2 19:32:08.240748 env[1136]: time="2023-10-02T19:32:08.240664751Z" level=info msg="shim disconnected" id=931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2 Oct 2 19:32:08.240748 env[1136]: time="2023-10-02T19:32:08.240746484Z" level=warning msg="cleaning up after shim disconnected" id=931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2 namespace=k8s.io Oct 2 19:32:08.240748 env[1136]: time="2023-10-02T19:32:08.240760939Z" level=info msg="cleaning up dead shim" Oct 2 19:32:08.254543 env[1136]: time="2023-10-02T19:32:08.254434164Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2103 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:32:08Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:32:08.254959 env[1136]: time="2023-10-02T19:32:08.254872903Z" level=error msg="copy shim log" error="read /proc/self/fd/23: file already closed" Oct 2 19:32:08.256657 env[1136]: time="2023-10-02T19:32:08.256586261Z" level=error msg="Failed to pipe stdout of container \"931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2\"" error="reading from a closed fifo" Oct 2 19:32:08.258959 env[1136]: time="2023-10-02T19:32:08.258896628Z" level=error msg="Failed to pipe stderr of container \"931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2\"" error="reading from a closed fifo" Oct 2 19:32:08.262034 env[1136]: time="2023-10-02T19:32:08.261788080Z" level=error msg="StartContainer for \"931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:32:08.262350 kubelet[1523]: E1002 19:32:08.262317 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2" Oct 2 19:32:08.262591 kubelet[1523]: E1002 19:32:08.262479 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:32:08.262591 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:32:08.262591 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 19:32:08.262591 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-lrllg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:32:08.262989 kubelet[1523]: E1002 19:32:08.262616 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:32:08.553555 kubelet[1523]: I1002 19:32:08.553506 1523 scope.go:115] "RemoveContainer" containerID="49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183" Oct 2 19:32:08.554073 kubelet[1523]: I1002 19:32:08.554046 1523 scope.go:115] "RemoveContainer" containerID="49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183" Oct 2 19:32:08.555689 env[1136]: time="2023-10-02T19:32:08.555640694Z" level=info msg="RemoveContainer for \"49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183\"" Oct 2 19:32:08.556383 env[1136]: time="2023-10-02T19:32:08.556342724Z" level=info msg="RemoveContainer for \"49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183\"" Oct 2 19:32:08.556508 env[1136]: time="2023-10-02T19:32:08.556457542Z" level=error msg="RemoveContainer for \"49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183\" failed" error="failed to set removing state for container \"49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183\": container is already in removing state" Oct 2 19:32:08.556703 kubelet[1523]: E1002 19:32:08.556681 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183\": container is already in removing state" 
containerID="49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183" Oct 2 19:32:08.556803 kubelet[1523]: E1002 19:32:08.556723 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183": container is already in removing state; Skipping pod "cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)" Oct 2 19:32:08.557162 kubelet[1523]: E1002 19:32:08.557137 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=mount-cgroup pod=cilium-5rfgw_kube-system(3da7fc87-301b-4122-9373-183d45cbc169)\"" pod="kube-system/cilium-5rfgw" podUID=3da7fc87-301b-4122-9373-183d45cbc169 Oct 2 19:32:08.560578 env[1136]: time="2023-10-02T19:32:08.560513203Z" level=info msg="RemoveContainer for \"49b007f3304671d388d4a29f4534ab1dc69c2a3eab05fd10dc7333d87053c183\" returns successfully" Oct 2 19:32:09.026244 kubelet[1523]: E1002 19:32:09.026170 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:10.026376 kubelet[1523]: E1002 19:32:10.026310 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:10.027059 kubelet[1523]: E1002 19:32:10.026686 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:11.026547 kubelet[1523]: E1002 19:32:11.026477 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:11.347486 kubelet[1523]: W1002 19:32:11.347429 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3da7fc87_301b_4122_9373_183d45cbc169.slice/cri-containerd-931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2.scope WatchSource:0}: task 931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2 not found: not found Oct 2 19:32:12.027136 kubelet[1523]: E1002 19:32:12.027058 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:13.028195 kubelet[1523]: E1002 19:32:13.028111 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:14.029000 kubelet[1523]: E1002 19:32:14.028929 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:14.212712 env[1136]: time="2023-10-02T19:32:14.212643768Z" level=info msg="StopPodSandbox for \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\"" Oct 2 19:32:14.215878 env[1136]: time="2023-10-02T19:32:14.212747907Z" level=info msg="Container to stop \"931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:32:14.215206 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097-shm.mount: Deactivated successfully. 
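For readability only: the kubelet error a few entries above prints the failing mount-cgroup init container as one flattened Go struct. The sketch below re-expresses that same spec with k8s.io/api/core/v1 types so the fields are easier to scan. Every field value is copied from the dump; the package layout, the variable name, and the trivial main() are mine and carry no information beyond the log itself.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// mountCgroupInitContainer mirrors the init container spec dumped by the
// kubelet for pod cilium-5rfgw. All values are copied from the log entry.
var mountCgroupInitContainer = corev1.Container{
	Name:  "mount-cgroup",
	Image: "quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b",
	Command: []string{
		"sh", "-ec",
		`cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount`,
	},
	Env: []corev1.EnvVar{
		{Name: "CGROUP_ROOT", Value: "/run/cilium/cgroupv2"},
		{Name: "BIN_PATH", Value: "/opt/cni/bin"},
	},
	VolumeMounts: []corev1.VolumeMount{
		{Name: "hostproc", MountPath: "/hostproc"},
		{Name: "cni-path", MountPath: "/hostbin"},
		{Name: "kube-api-access-lrllg", ReadOnly: true, MountPath: "/var/run/secrets/kubernetes.io/serviceaccount"},
	},
	TerminationMessagePath:   "/dev/termination-log",
	TerminationMessagePolicy: corev1.TerminationMessageReadFile,
	ImagePullPolicy:          corev1.PullIfNotPresent,
	SecurityContext: &corev1.SecurityContext{
		Capabilities: &corev1.Capabilities{
			Add:  []corev1.Capability{"SYS_ADMIN", "SYS_CHROOT", "SYS_PTRACE"},
			Drop: []corev1.Capability{"ALL"},
		},
		// The dump specifies SELinuxOptions{Type: spc_t, Level: s0}; the runc
		// error logged above ("write /proc/self/attr/keycreate: invalid
		// argument") was reported while runc applied an SELinux label for
		// this container's process.
		SELinuxOptions: &corev1.SELinuxOptions{Type: "spc_t", Level: "s0"},
	},
}

func main() {
	fmt.Println(mountCgroupInitContainer.Name, mountCgroupInitContainer.Image)
}
```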
Oct 2 19:32:14.225533 systemd[1]: cri-containerd-4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097.scope: Deactivated successfully. Oct 2 19:32:14.225000 audit: BPF prog-id=73 op=UNLOAD Oct 2 19:32:14.231466 kernel: kauditd_printk_skb: 165 callbacks suppressed Oct 2 19:32:14.231576 kernel: audit: type=1334 audit(1696275134.225:729): prog-id=73 op=UNLOAD Oct 2 19:32:14.241000 audit: BPF prog-id=76 op=UNLOAD Oct 2 19:32:14.253009 kernel: audit: type=1334 audit(1696275134.241:730): prog-id=76 op=UNLOAD Oct 2 19:32:14.268615 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097-rootfs.mount: Deactivated successfully. Oct 2 19:32:14.286212 env[1136]: time="2023-10-02T19:32:14.285408156Z" level=info msg="shim disconnected" id=4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097 Oct 2 19:32:14.286212 env[1136]: time="2023-10-02T19:32:14.285474222Z" level=warning msg="cleaning up after shim disconnected" id=4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097 namespace=k8s.io Oct 2 19:32:14.286212 env[1136]: time="2023-10-02T19:32:14.285489091Z" level=info msg="cleaning up dead shim" Oct 2 19:32:14.298703 env[1136]: time="2023-10-02T19:32:14.298641087Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2135 runtime=io.containerd.runc.v2\n" Oct 2 19:32:14.299192 env[1136]: time="2023-10-02T19:32:14.299129180Z" level=info msg="TearDown network for sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" successfully" Oct 2 19:32:14.299192 env[1136]: time="2023-10-02T19:32:14.299177513Z" level=info msg="StopPodSandbox for \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" returns successfully" Oct 2 19:32:14.477562 kubelet[1523]: I1002 19:32:14.477488 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-cilium-cgroup\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.477562 kubelet[1523]: I1002 19:32:14.477561 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrllg\" (UniqueName: \"kubernetes.io/projected/3da7fc87-301b-4122-9373-183d45cbc169-kube-api-access-lrllg\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.477949 kubelet[1523]: I1002 19:32:14.477593 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-bpf-maps\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.477949 kubelet[1523]: I1002 19:32:14.477621 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-lib-modules\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.477949 kubelet[1523]: I1002 19:32:14.477653 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3da7fc87-301b-4122-9373-183d45cbc169-clustermesh-secrets\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " 
Oct 2 19:32:14.477949 kubelet[1523]: I1002 19:32:14.477681 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-host-proc-sys-kernel\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.477949 kubelet[1523]: I1002 19:32:14.477709 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-xtables-lock\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.477949 kubelet[1523]: I1002 19:32:14.477739 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3da7fc87-301b-4122-9373-183d45cbc169-hubble-tls\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.478297 kubelet[1523]: I1002 19:32:14.477767 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-cni-path\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.478297 kubelet[1523]: I1002 19:32:14.477793 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-hostproc\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.478297 kubelet[1523]: I1002 19:32:14.477846 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-cilium-run\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.478297 kubelet[1523]: I1002 19:32:14.477881 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-etc-cni-netd\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.478297 kubelet[1523]: I1002 19:32:14.477917 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-host-proc-sys-net\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.478297 kubelet[1523]: I1002 19:32:14.477952 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3da7fc87-301b-4122-9373-183d45cbc169-cilium-config-path\") pod \"3da7fc87-301b-4122-9373-183d45cbc169\" (UID: \"3da7fc87-301b-4122-9373-183d45cbc169\") " Oct 2 19:32:14.478651 kubelet[1523]: W1002 19:32:14.478246 1523 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/3da7fc87-301b-4122-9373-183d45cbc169/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:32:14.481537 kubelet[1523]: I1002 19:32:14.478855 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-xtables-lock" 
(OuterVolumeSpecName: "xtables-lock") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:14.481537 kubelet[1523]: I1002 19:32:14.478951 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:14.481537 kubelet[1523]: I1002 19:32:14.479719 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:14.481537 kubelet[1523]: I1002 19:32:14.479775 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:14.481537 kubelet[1523]: I1002 19:32:14.481031 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3da7fc87-301b-4122-9373-183d45cbc169-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:32:14.481966 kubelet[1523]: I1002 19:32:14.481346 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-cni-path" (OuterVolumeSpecName: "cni-path") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:14.481966 kubelet[1523]: I1002 19:32:14.481383 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-hostproc" (OuterVolumeSpecName: "hostproc") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:14.481966 kubelet[1523]: I1002 19:32:14.481415 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:14.481966 kubelet[1523]: I1002 19:32:14.481444 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:14.481966 kubelet[1523]: I1002 19:32:14.481474 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:14.482272 kubelet[1523]: I1002 19:32:14.481709 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:14.495101 kubelet[1523]: I1002 19:32:14.486229 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da7fc87-301b-4122-9373-183d45cbc169-kube-api-access-lrllg" (OuterVolumeSpecName: "kube-api-access-lrllg") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "kube-api-access-lrllg". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:32:14.495101 kubelet[1523]: I1002 19:32:14.488669 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3da7fc87-301b-4122-9373-183d45cbc169-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:32:14.495101 kubelet[1523]: I1002 19:32:14.492329 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3da7fc87-301b-4122-9373-183d45cbc169-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "3da7fc87-301b-4122-9373-183d45cbc169" (UID: "3da7fc87-301b-4122-9373-183d45cbc169"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:32:14.487514 systemd[1]: var-lib-kubelet-pods-3da7fc87\x2d301b\x2d4122\x2d9373\x2d183d45cbc169-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlrllg.mount: Deactivated successfully. Oct 2 19:32:14.494138 systemd[1]: var-lib-kubelet-pods-3da7fc87\x2d301b\x2d4122\x2d9373\x2d183d45cbc169-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:32:14.494291 systemd[1]: var-lib-kubelet-pods-3da7fc87\x2d301b\x2d4122\x2d9373\x2d183d45cbc169-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. 
Oct 2 19:32:14.567673 kubelet[1523]: I1002 19:32:14.567541 1523 scope.go:115] "RemoveContainer" containerID="931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2" Oct 2 19:32:14.571301 env[1136]: time="2023-10-02T19:32:14.571238279Z" level=info msg="RemoveContainer for \"931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2\"" Oct 2 19:32:14.574173 systemd[1]: Removed slice kubepods-burstable-pod3da7fc87_301b_4122_9373_183d45cbc169.slice. Oct 2 19:32:14.575846 env[1136]: time="2023-10-02T19:32:14.575756216Z" level=info msg="RemoveContainer for \"931466c84904a4e87cc1ede096b5afa407edc0eb830fc0d09ed12ea1df3b9ef2\" returns successfully" Oct 2 19:32:14.578290 kubelet[1523]: I1002 19:32:14.578236 1523 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-cilium-cgroup\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.578498 kubelet[1523]: I1002 19:32:14.578475 1523 reconciler.go:399] "Volume detached for volume \"kube-api-access-lrllg\" (UniqueName: \"kubernetes.io/projected/3da7fc87-301b-4122-9373-183d45cbc169-kube-api-access-lrllg\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.578600 kubelet[1523]: I1002 19:32:14.578505 1523 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-bpf-maps\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.578600 kubelet[1523]: I1002 19:32:14.578523 1523 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-lib-modules\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.578600 kubelet[1523]: I1002 19:32:14.578541 1523 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3da7fc87-301b-4122-9373-183d45cbc169-clustermesh-secrets\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.578600 kubelet[1523]: I1002 19:32:14.578558 1523 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-host-proc-sys-kernel\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.578600 kubelet[1523]: I1002 19:32:14.578576 1523 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-xtables-lock\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.578600 kubelet[1523]: I1002 19:32:14.578592 1523 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3da7fc87-301b-4122-9373-183d45cbc169-hubble-tls\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.578942 kubelet[1523]: I1002 19:32:14.578611 1523 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-cni-path\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.578942 kubelet[1523]: I1002 19:32:14.578627 1523 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-hostproc\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.578942 kubelet[1523]: I1002 19:32:14.578644 1523 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-cilium-run\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 
19:32:14.578942 kubelet[1523]: I1002 19:32:14.578661 1523 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-etc-cni-netd\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.578942 kubelet[1523]: I1002 19:32:14.578679 1523 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3da7fc87-301b-4122-9373-183d45cbc169-host-proc-sys-net\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.578942 kubelet[1523]: I1002 19:32:14.578697 1523 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3da7fc87-301b-4122-9373-183d45cbc169-cilium-config-path\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:14.605893 kubelet[1523]: I1002 19:32:14.605842 1523 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:32:14.605893 kubelet[1523]: E1002 19:32:14.605916 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="3da7fc87-301b-4122-9373-183d45cbc169" containerName="mount-cgroup" Oct 2 19:32:14.606207 kubelet[1523]: E1002 19:32:14.605931 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="3da7fc87-301b-4122-9373-183d45cbc169" containerName="mount-cgroup" Oct 2 19:32:14.606207 kubelet[1523]: E1002 19:32:14.605942 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="3da7fc87-301b-4122-9373-183d45cbc169" containerName="mount-cgroup" Oct 2 19:32:14.606207 kubelet[1523]: E1002 19:32:14.605953 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="3da7fc87-301b-4122-9373-183d45cbc169" containerName="mount-cgroup" Oct 2 19:32:14.606207 kubelet[1523]: I1002 19:32:14.605979 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="3da7fc87-301b-4122-9373-183d45cbc169" containerName="mount-cgroup" Oct 2 19:32:14.606207 kubelet[1523]: I1002 19:32:14.605990 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="3da7fc87-301b-4122-9373-183d45cbc169" containerName="mount-cgroup" Oct 2 19:32:14.606207 kubelet[1523]: I1002 19:32:14.605999 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="3da7fc87-301b-4122-9373-183d45cbc169" containerName="mount-cgroup" Oct 2 19:32:14.606207 kubelet[1523]: E1002 19:32:14.606021 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="3da7fc87-301b-4122-9373-183d45cbc169" containerName="mount-cgroup" Oct 2 19:32:14.606207 kubelet[1523]: E1002 19:32:14.606033 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="3da7fc87-301b-4122-9373-183d45cbc169" containerName="mount-cgroup" Oct 2 19:32:14.606207 kubelet[1523]: I1002 19:32:14.606052 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="3da7fc87-301b-4122-9373-183d45cbc169" containerName="mount-cgroup" Oct 2 19:32:14.606207 kubelet[1523]: I1002 19:32:14.606062 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="3da7fc87-301b-4122-9373-183d45cbc169" containerName="mount-cgroup" Oct 2 19:32:14.606207 kubelet[1523]: I1002 19:32:14.606091 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="3da7fc87-301b-4122-9373-183d45cbc169" containerName="mount-cgroup" Oct 2 19:32:14.614043 systemd[1]: Created slice kubepods-burstable-pod83da58b2_6df8_4cff_967c_ea739ac243e0.slice. 
Oct 2 19:32:14.779762 kubelet[1523]: I1002 19:32:14.779703 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-xtables-lock\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.779762 kubelet[1523]: I1002 19:32:14.779770 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83da58b2-6df8-4cff-967c-ea739ac243e0-hubble-tls\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.780070 kubelet[1523]: I1002 19:32:14.779803 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-cilium-cgroup\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.780070 kubelet[1523]: I1002 19:32:14.779858 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-lib-modules\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.780070 kubelet[1523]: I1002 19:32:14.779890 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83da58b2-6df8-4cff-967c-ea739ac243e0-clustermesh-secrets\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.780070 kubelet[1523]: I1002 19:32:14.779924 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-bpf-maps\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.780070 kubelet[1523]: I1002 19:32:14.779953 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-hostproc\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.780070 kubelet[1523]: I1002 19:32:14.779984 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-cni-path\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.780426 kubelet[1523]: I1002 19:32:14.780018 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-etc-cni-netd\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.780426 kubelet[1523]: I1002 19:32:14.780058 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83da58b2-6df8-4cff-967c-ea739ac243e0-cilium-config-path\") pod \"cilium-wc29j\" (UID: 
\"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.780426 kubelet[1523]: I1002 19:32:14.780099 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-host-proc-sys-kernel\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.780426 kubelet[1523]: I1002 19:32:14.780138 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-cilium-run\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.780426 kubelet[1523]: I1002 19:32:14.780197 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-host-proc-sys-net\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.780426 kubelet[1523]: I1002 19:32:14.780237 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt8qc\" (UniqueName: \"kubernetes.io/projected/83da58b2-6df8-4cff-967c-ea739ac243e0-kube-api-access-qt8qc\") pod \"cilium-wc29j\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " pod="kube-system/cilium-wc29j" Oct 2 19:32:14.923862 env[1136]: time="2023-10-02T19:32:14.923774143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wc29j,Uid:83da58b2-6df8-4cff-967c-ea739ac243e0,Namespace:kube-system,Attempt:0,}" Oct 2 19:32:14.945198 env[1136]: time="2023-10-02T19:32:14.945113112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:32:14.945464 env[1136]: time="2023-10-02T19:32:14.945162745Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:32:14.945464 env[1136]: time="2023-10-02T19:32:14.945179294Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:32:14.945464 env[1136]: time="2023-10-02T19:32:14.945408535Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae pid=2161 runtime=io.containerd.runc.v2 Oct 2 19:32:14.963845 systemd[1]: Started cri-containerd-135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae.scope. 
Oct 2 19:32:14.982000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.025151 kernel: audit: type=1400 audit(1696275134.982:731): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.025344 kernel: audit: type=1400 audit(1696275134.982:732): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:14.982000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.027730 kubelet[1523]: E1002 19:32:15.027650 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:14.982000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.038859 kubelet[1523]: E1002 19:32:15.037329 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:14.982000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.054066 env[1136]: time="2023-10-02T19:32:15.054014675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wc29j,Uid:83da58b2-6df8-4cff-967c-ea739ac243e0,Namespace:kube-system,Attempt:0,} returns sandbox id \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\"" Oct 2 19:32:15.057741 env[1136]: time="2023-10-02T19:32:15.057694220Z" level=info msg="CreateContainer within sandbox \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:32:15.069555 kernel: audit: type=1400 audit(1696275134.982:733): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.069711 kernel: audit: type=1400 audit(1696275134.982:734): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.069757 kernel: audit: type=1400 audit(1696275134.982:735): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:14.982000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:14.982000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:14.983000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.133901 kernel: audit: type=1400 audit(1696275134.982:736): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.134058 kernel: audit: type=1400 audit(1696275134.983:737): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.134101 kernel: audit: type=1400 audit(1696275134.983:738): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:14.983000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.149457 kubelet[1523]: I1002 19:32:15.149428 1523 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=3da7fc87-301b-4122-9373-183d45cbc169 path="/var/lib/kubelet/pods/3da7fc87-301b-4122-9373-183d45cbc169/volumes" Oct 2 19:32:14.983000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit: BPF prog-id=84 op=LOAD Oct 2 19:32:15.003000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit[2173]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2161 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:15.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133353333356330316464386239343630353166626534373639326466 Oct 2 19:32:15.003000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit[2173]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2161 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:15.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133353333356330316464386239343630353166626534373639326466 Oct 2 19:32:15.003000 audit[2173]: AVC avc: denied { bpf } for pid=2173 
comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.003000 audit: BPF prog-id=85 op=LOAD Oct 2 19:32:15.003000 audit[2173]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c0001979d8 a2=78 a3=c0003ae6f0 items=0 ppid=2161 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:15.003000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133353333356330316464386239343630353166626534373639326466 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit: BPF prog-id=86 op=LOAD Oct 2 19:32:15.024000 audit[2173]: SYSCALL arch=c000003e syscall=321 success=yes exit=17 a0=5 a1=c000197770 a2=78 a3=c0003ae738 items=0 ppid=2161 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:15.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133353333356330316464386239343630353166626534373639326466 Oct 2 19:32:15.024000 audit: BPF prog-id=86 op=UNLOAD Oct 2 19:32:15.024000 audit: BPF prog-id=85 op=UNLOAD Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { perfmon } for pid=2173 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 
tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit[2173]: AVC avc: denied { bpf } for pid=2173 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:15.024000 audit: BPF prog-id=87 op=LOAD Oct 2 19:32:15.024000 audit[2173]: SYSCALL arch=c000003e syscall=321 success=yes exit=15 a0=5 a1=c000197c30 a2=78 a3=c0003aeb48 items=0 ppid=2161 pid=2173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:15.024000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3133353333356330316464386239343630353166626534373639326466 Oct 2 19:32:15.160768 env[1136]: time="2023-10-02T19:32:15.160700511Z" level=info msg="CreateContainer within sandbox \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3\"" Oct 2 19:32:15.161797 env[1136]: time="2023-10-02T19:32:15.161719068Z" level=info msg="StartContainer for \"ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3\"" Oct 2 19:32:15.184525 systemd[1]: Started cri-containerd-ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3.scope. Oct 2 19:32:15.202042 systemd[1]: cri-containerd-ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3.scope: Deactivated successfully. 
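The audit PROCTITLE fields above carry the audited process's command line, hex-encoded with NUL bytes separating the arguments; here they decode to the runc invocation for the containerd task. A minimal decoding sketch in Python; the hex string below is a shortened, hypothetical prefix of the values logged above, not the full field:

# PROCTITLE is hex-encoded argv with NUL separators: decode, then split on NUL.
hex_title = "72756E63002D2D726F6F74"   # shortened prefix; decodes to "runc\0--root"
argv = [a.decode("utf-8", "replace") for a in bytes.fromhex(hex_title).split(b"\x00")]
print(argv)   # ['runc', '--root']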
Oct 2 19:32:15.225775 env[1136]: time="2023-10-02T19:32:15.225498006Z" level=info msg="shim disconnected" id=ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3 Oct 2 19:32:15.226602 env[1136]: time="2023-10-02T19:32:15.226553668Z" level=warning msg="cleaning up after shim disconnected" id=ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3 namespace=k8s.io Oct 2 19:32:15.226743 env[1136]: time="2023-10-02T19:32:15.226721085Z" level=info msg="cleaning up dead shim" Oct 2 19:32:15.239962 env[1136]: time="2023-10-02T19:32:15.239896145Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2222 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:32:15Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:32:15.240381 env[1136]: time="2023-10-02T19:32:15.240300902Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Oct 2 19:32:15.242130 env[1136]: time="2023-10-02T19:32:15.242013696Z" level=error msg="Failed to pipe stderr of container \"ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3\"" error="reading from a closed fifo" Oct 2 19:32:15.242507 env[1136]: time="2023-10-02T19:32:15.241908614Z" level=error msg="Failed to pipe stdout of container \"ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3\"" error="reading from a closed fifo" Oct 2 19:32:15.244890 env[1136]: time="2023-10-02T19:32:15.244780148Z" level=error msg="StartContainer for \"ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:32:15.245216 kubelet[1523]: E1002 19:32:15.245176 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3" Oct 2 19:32:15.245396 kubelet[1523]: E1002 19:32:15.245327 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:32:15.245396 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:32:15.245396 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 19:32:15.245396 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-qt8qc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-wc29j_kube-system(83da58b2-6df8-4cff-967c-ea739ac243e0): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:32:15.245706 kubelet[1523]: E1002 19:32:15.245386 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-wc29j" podUID=83da58b2-6df8-4cff-967c-ea739ac243e0 Oct 2 19:32:15.573428 env[1136]: time="2023-10-02T19:32:15.573377216Z" level=info msg="StopPodSandbox for \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\"" Oct 2 19:32:15.576910 env[1136]: time="2023-10-02T19:32:15.573455567Z" level=info msg="Container to stop \"ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:32:15.575839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae-shm.mount: Deactivated successfully. Oct 2 19:32:15.586026 systemd[1]: cri-containerd-135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae.scope: Deactivated successfully. Oct 2 19:32:15.585000 audit: BPF prog-id=84 op=UNLOAD Oct 2 19:32:15.589000 audit: BPF prog-id=87 op=UNLOAD Oct 2 19:32:15.618571 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae-rootfs.mount: Deactivated successfully. 
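The StartContainer failure above is a chain of wrapped errors, and the innermost entry, "write /proc/self/attr/keycreate: invalid argument", is the actual cause: runc tried to set the SELinux key-creation context requested by the container's SecurityContext (Type: spc_t) and the kernel rejected the value, which usually means the loaded SELinux policy does not accept that context. A small sketch, plain string handling only, that peels the innermost cause out of such a message; the text is copied from the kubelet line above:

# Split a wrapped containerd/kubelet error ("outer: ...: inner") into layers.
msg = ("failed to create containerd task: failed to create shim task: "
       "OCI runtime create failed: runc create failed: unable to start container "
       "process: error during container init: write /proc/self/attr/keycreate: "
       "invalid argument: unknown")
layers = msg.split(": ")
# The trailing "unknown" is containerd's catch-all error class; the real cause
# sits just before it.
print(" -> ".join(layers[-3:]))   # write /proc/self/attr/keycreate -> invalid argument -> unknown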
Oct 2 19:32:15.622176 env[1136]: time="2023-10-02T19:32:15.622110294Z" level=info msg="shim disconnected" id=135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae Oct 2 19:32:15.622418 env[1136]: time="2023-10-02T19:32:15.622181390Z" level=warning msg="cleaning up after shim disconnected" id=135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae namespace=k8s.io Oct 2 19:32:15.622418 env[1136]: time="2023-10-02T19:32:15.622197628Z" level=info msg="cleaning up dead shim" Oct 2 19:32:15.634288 env[1136]: time="2023-10-02T19:32:15.634211988Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2252 runtime=io.containerd.runc.v2\n" Oct 2 19:32:15.634719 env[1136]: time="2023-10-02T19:32:15.634665819Z" level=info msg="TearDown network for sandbox \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\" successfully" Oct 2 19:32:15.634719 env[1136]: time="2023-10-02T19:32:15.634705525Z" level=info msg="StopPodSandbox for \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\" returns successfully" Oct 2 19:32:15.793662 kubelet[1523]: I1002 19:32:15.793595 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83da58b2-6df8-4cff-967c-ea739ac243e0-hubble-tls\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.793662 kubelet[1523]: I1002 19:32:15.793658 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-lib-modules\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.793662 kubelet[1523]: I1002 19:32:15.793692 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-xtables-lock\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.794130 kubelet[1523]: I1002 19:32:15.793724 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-cilium-cgroup\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.794130 kubelet[1523]: I1002 19:32:15.793754 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-cni-path\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.794130 kubelet[1523]: I1002 19:32:15.793787 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qt8qc\" (UniqueName: \"kubernetes.io/projected/83da58b2-6df8-4cff-967c-ea739ac243e0-kube-api-access-qt8qc\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.794130 kubelet[1523]: I1002 19:32:15.793842 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83da58b2-6df8-4cff-967c-ea739ac243e0-clustermesh-secrets\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.794130 kubelet[1523]: 
I1002 19:32:15.793868 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-bpf-maps\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.794130 kubelet[1523]: I1002 19:32:15.793896 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-hostproc\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.794466 kubelet[1523]: I1002 19:32:15.793926 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-host-proc-sys-kernel\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.794466 kubelet[1523]: I1002 19:32:15.793952 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-cilium-run\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.794466 kubelet[1523]: I1002 19:32:15.793983 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-etc-cni-netd\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.794466 kubelet[1523]: I1002 19:32:15.794020 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83da58b2-6df8-4cff-967c-ea739ac243e0-cilium-config-path\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.794466 kubelet[1523]: I1002 19:32:15.794052 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-host-proc-sys-net\") pod \"83da58b2-6df8-4cff-967c-ea739ac243e0\" (UID: \"83da58b2-6df8-4cff-967c-ea739ac243e0\") " Oct 2 19:32:15.794466 kubelet[1523]: I1002 19:32:15.794123 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:15.797839 kubelet[1523]: I1002 19:32:15.795122 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:15.797839 kubelet[1523]: I1002 19:32:15.795193 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-hostproc" (OuterVolumeSpecName: "hostproc") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). 
InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:15.797839 kubelet[1523]: I1002 19:32:15.795228 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:15.797839 kubelet[1523]: I1002 19:32:15.795258 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:15.797839 kubelet[1523]: I1002 19:32:15.795290 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:15.798251 kubelet[1523]: W1002 19:32:15.795481 1523 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/83da58b2-6df8-4cff-967c-ea739ac243e0/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:32:15.800404 systemd[1]: var-lib-kubelet-pods-83da58b2\x2d6df8\x2d4cff\x2d967c\x2dea739ac243e0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:32:15.801400 kubelet[1523]: I1002 19:32:15.801353 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:15.801521 kubelet[1523]: I1002 19:32:15.801422 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:15.801521 kubelet[1523]: I1002 19:32:15.801449 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:15.801521 kubelet[1523]: I1002 19:32:15.801476 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-cni-path" (OuterVolumeSpecName: "cni-path") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:32:15.802005 kubelet[1523]: I1002 19:32:15.801974 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/83da58b2-6df8-4cff-967c-ea739ac243e0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:32:15.802520 kubelet[1523]: I1002 19:32:15.802491 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83da58b2-6df8-4cff-967c-ea739ac243e0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:32:15.808305 systemd[1]: var-lib-kubelet-pods-83da58b2\x2d6df8\x2d4cff\x2d967c\x2dea739ac243e0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dqt8qc.mount: Deactivated successfully. Oct 2 19:32:15.809883 kubelet[1523]: I1002 19:32:15.809840 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/83da58b2-6df8-4cff-967c-ea739ac243e0-kube-api-access-qt8qc" (OuterVolumeSpecName: "kube-api-access-qt8qc") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "kube-api-access-qt8qc". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:32:15.810881 kubelet[1523]: I1002 19:32:15.810831 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/83da58b2-6df8-4cff-967c-ea739ac243e0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "83da58b2-6df8-4cff-967c-ea739ac243e0" (UID: "83da58b2-6df8-4cff-967c-ea739ac243e0"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:32:15.894543 kubelet[1523]: I1002 19:32:15.894372 1523 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-etc-cni-netd\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.894543 kubelet[1523]: I1002 19:32:15.894420 1523 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/83da58b2-6df8-4cff-967c-ea739ac243e0-cilium-config-path\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.894543 kubelet[1523]: I1002 19:32:15.894438 1523 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-host-proc-sys-net\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.894543 kubelet[1523]: I1002 19:32:15.894452 1523 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/83da58b2-6df8-4cff-967c-ea739ac243e0-hubble-tls\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.894543 kubelet[1523]: I1002 19:32:15.894473 1523 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-lib-modules\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.894543 kubelet[1523]: I1002 19:32:15.894488 1523 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-cni-path\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.894543 kubelet[1523]: I1002 19:32:15.894508 1523 reconciler.go:399] "Volume detached for volume \"kube-api-access-qt8qc\" (UniqueName: \"kubernetes.io/projected/83da58b2-6df8-4cff-967c-ea739ac243e0-kube-api-access-qt8qc\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.895209 kubelet[1523]: I1002 19:32:15.895179 1523 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-xtables-lock\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.895347 kubelet[1523]: I1002 19:32:15.895333 1523 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-cilium-cgroup\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.895483 kubelet[1523]: I1002 19:32:15.895470 1523 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-cilium-run\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.895610 kubelet[1523]: I1002 19:32:15.895598 1523 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/83da58b2-6df8-4cff-967c-ea739ac243e0-clustermesh-secrets\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.895736 kubelet[1523]: I1002 19:32:15.895724 1523 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-bpf-maps\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.895877 kubelet[1523]: I1002 19:32:15.895864 1523 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-hostproc\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:15.896012 kubelet[1523]: I1002 19:32:15.895999 1523 reconciler.go:399] 
"Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/83da58b2-6df8-4cff-967c-ea739ac243e0-host-proc-sys-kernel\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:32:16.037990 kubelet[1523]: E1002 19:32:16.037925 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:16.215310 systemd[1]: var-lib-kubelet-pods-83da58b2\x2d6df8\x2d4cff\x2d967c\x2dea739ac243e0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:32:16.576499 kubelet[1523]: I1002 19:32:16.576447 1523 scope.go:115] "RemoveContainer" containerID="ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3" Oct 2 19:32:16.579430 env[1136]: time="2023-10-02T19:32:16.579383726Z" level=info msg="RemoveContainer for \"ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3\"" Oct 2 19:32:16.581881 systemd[1]: Removed slice kubepods-burstable-pod83da58b2_6df8_4cff_967c_ea739ac243e0.slice. Oct 2 19:32:16.585304 env[1136]: time="2023-10-02T19:32:16.585182671Z" level=info msg="RemoveContainer for \"ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3\" returns successfully" Oct 2 19:32:17.038922 kubelet[1523]: E1002 19:32:17.038756 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:17.146513 env[1136]: time="2023-10-02T19:32:17.146459127Z" level=info msg="StopPodSandbox for \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\"" Oct 2 19:32:17.147005 env[1136]: time="2023-10-02T19:32:17.146924156Z" level=info msg="TearDown network for sandbox \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\" successfully" Oct 2 19:32:17.147005 env[1136]: time="2023-10-02T19:32:17.146993084Z" level=info msg="StopPodSandbox for \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\" returns successfully" Oct 2 19:32:17.147699 kubelet[1523]: I1002 19:32:17.147673 1523 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=83da58b2-6df8-4cff-967c-ea739ac243e0 path="/var/lib/kubelet/pods/83da58b2-6df8-4cff-967c-ea739ac243e0/volumes" Oct 2 19:32:18.039561 kubelet[1523]: E1002 19:32:18.039500 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:18.333737 kubelet[1523]: W1002 19:32:18.333661 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod83da58b2_6df8_4cff_967c_ea739ac243e0.slice/cri-containerd-ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3.scope WatchSource:0}: container "ffa84275a70bca2b38fc0fe7ef81f13576c38394a2137b7e63c9d6dfd70df4c3" in namespace "k8s.io": not found Oct 2 19:32:19.039749 kubelet[1523]: E1002 19:32:19.039696 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:19.805037 kubelet[1523]: I1002 19:32:19.804988 1523 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:32:19.805321 kubelet[1523]: E1002 19:32:19.805052 1523 cpu_manager.go:394] "RemoveStaleState: removing container" podUID="83da58b2-6df8-4cff-967c-ea739ac243e0" containerName="mount-cgroup" Oct 2 19:32:19.805321 kubelet[1523]: I1002 19:32:19.805080 1523 memory_manager.go:345] "RemoveStaleState removing state" podUID="83da58b2-6df8-4cff-967c-ea739ac243e0" containerName="mount-cgroup" Oct 2 
19:32:19.809453 kubelet[1523]: I1002 19:32:19.809418 1523 topology_manager.go:205] "Topology Admit Handler" Oct 2 19:32:19.813641 systemd[1]: Created slice kubepods-besteffort-pod5025e48c_ccfc_46e1_aefd_6f641169e4e7.slice. Oct 2 19:32:19.816291 kubelet[1523]: W1002 19:32:19.815905 1523 reflector.go:424] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.128.0.55" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.55' and this object Oct 2 19:32:19.816291 kubelet[1523]: E1002 19:32:19.815964 1523 reflector.go:140] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.128.0.55" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.55' and this object Oct 2 19:32:19.816291 kubelet[1523]: W1002 19:32:19.816043 1523 reflector.go:424] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.128.0.55" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.55' and this object Oct 2 19:32:19.816291 kubelet[1523]: E1002 19:32:19.816063 1523 reflector.go:140] object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.128.0.55" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.55' and this object Oct 2 19:32:19.816291 kubelet[1523]: W1002 19:32:19.816124 1523 reflector.go:424] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.128.0.55" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.55' and this object Oct 2 19:32:19.817388 kubelet[1523]: E1002 19:32:19.816139 1523 reflector.go:140] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.128.0.55" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.128.0.55' and this object Oct 2 19:32:19.822863 systemd[1]: Created slice kubepods-burstable-pod8a6f1f0b_4874_4671_832f_7b2fb2379d26.slice. 
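The systemd mount units named above (for example var-lib-kubelet-pods-83da58b2\x2d6df8…-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount) use systemd's unit-name escaping: '/' in the path becomes '-', the leading '/' is dropped, and other special bytes, including literal '-' and '~', become \xNN. systemd-escape --unescape is the canonical decoder; a rough Python sketch of the same reverse mapping, assuming only those rules:

import re

def unescape_mount_unit(name: str) -> str:
    # Reverse systemd's escaping for a *.mount unit: strip the suffix,
    # map '-' back to '/', decode \xNN bytes, and restore the leading '/'.
    body = name.rsplit(".", 1)[0]
    body = body.replace("-", "/")
    body = re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), body)
    return "/" + body

# Shortened, hypothetical unit name in the same shape as the ones logged above:
print(unescape_mount_unit(r"var-lib-kubelet-pods-83da58b2\x2d6df8.mount"))
# -> /var/lib/kubelet/pods/83da58b2-6df8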
Oct 2 19:32:19.823762 kubelet[1523]: I1002 19:32:19.823731 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bd628\" (UniqueName: \"kubernetes.io/projected/5025e48c-ccfc-46e1-aefd-6f641169e4e7-kube-api-access-bd628\") pod \"cilium-operator-69b677f97c-k652f\" (UID: \"5025e48c-ccfc-46e1-aefd-6f641169e4e7\") " pod="kube-system/cilium-operator-69b677f97c-k652f" Oct 2 19:32:19.823923 kubelet[1523]: I1002 19:32:19.823795 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cni-path\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.823923 kubelet[1523]: I1002 19:32:19.823864 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-etc-cni-netd\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.823923 kubelet[1523]: I1002 19:32:19.823911 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-ipsec-secrets\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.824104 kubelet[1523]: I1002 19:32:19.823989 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-host-proc-sys-kernel\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.824217 kubelet[1523]: I1002 19:32:19.824195 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5025e48c-ccfc-46e1-aefd-6f641169e4e7-cilium-config-path\") pod \"cilium-operator-69b677f97c-k652f\" (UID: \"5025e48c-ccfc-46e1-aefd-6f641169e4e7\") " pod="kube-system/cilium-operator-69b677f97c-k652f" Oct 2 19:32:19.824410 kubelet[1523]: I1002 19:32:19.824251 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-lib-modules\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.824506 kubelet[1523]: I1002 19:32:19.824425 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a6f1f0b-4874-4671-832f-7b2fb2379d26-hubble-tls\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.824575 kubelet[1523]: I1002 19:32:19.824470 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttd5d\" (UniqueName: \"kubernetes.io/projected/8a6f1f0b-4874-4671-832f-7b2fb2379d26-kube-api-access-ttd5d\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.824692 kubelet[1523]: I1002 19:32:19.824670 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-config-path\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.824787 kubelet[1523]: I1002 19:32:19.824723 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-run\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.825034 kubelet[1523]: I1002 19:32:19.824986 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-hostproc\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.825262 kubelet[1523]: I1002 19:32:19.825242 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-cgroup\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.825467 kubelet[1523]: I1002 19:32:19.825322 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-xtables-lock\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.825656 kubelet[1523]: I1002 19:32:19.825495 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-bpf-maps\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.825945 kubelet[1523]: I1002 19:32:19.825696 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a6f1f0b-4874-4671-832f-7b2fb2379d26-clustermesh-secrets\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:19.826140 kubelet[1523]: I1002 19:32:19.825971 1523 reconciler.go:357] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-host-proc-sys-net\") pod \"cilium-pq8vz\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " pod="kube-system/cilium-pq8vz" Oct 2 19:32:20.029446 kubelet[1523]: E1002 19:32:20.029377 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:20.040938 kubelet[1523]: E1002 19:32:20.040873 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:20.121961 env[1136]: time="2023-10-02T19:32:20.121900635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-k652f,Uid:5025e48c-ccfc-46e1-aefd-6f641169e4e7,Namespace:kube-system,Attempt:0,}" Oct 2 19:32:20.148591 env[1136]: time="2023-10-02T19:32:20.148376500Z" level=info msg="loading plugin 
\"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:32:20.148591 env[1136]: time="2023-10-02T19:32:20.148426769Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:32:20.148591 env[1136]: time="2023-10-02T19:32:20.148439085Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:32:20.149027 env[1136]: time="2023-10-02T19:32:20.148960476Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c pid=2280 runtime=io.containerd.runc.v2 Oct 2 19:32:20.175060 systemd[1]: Started cri-containerd-aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c.scope. Oct 2 19:32:20.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.208828 kernel: kauditd_printk_skb: 51 callbacks suppressed Oct 2 19:32:20.209017 kernel: audit: type=1400 audit(1696275140.202:751): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.270899 kernel: audit: type=1400 audit(1696275140.202:752): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.271054 kernel: audit: type=1400 audit(1696275140.202:753): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.271095 kernel: audit: type=1400 audit(1696275140.202:754): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.291706 kernel: audit: type=1400 audit(1696275140.202:755): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.312491 kernel: audit: type=1400 audit(1696275140.202:756): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[1]: AVC avc: denied { perfmon } 
for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.333510 kernel: audit: type=1400 audit(1696275140.202:757): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.358307 kernel: audit: type=1400 audit(1696275140.202:758): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.379335 kernel: audit: type=1400 audit(1696275140.202:759): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.421133 kernel: audit: type=1400 audit(1696275140.202:760): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit: BPF prog-id=88 op=LOAD Oct 2 19:32:20.202000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[2291]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c000197c48 a2=10 a3=1c items=0 ppid=2280 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:20.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161306232313536376437343933373266343232376636666639663265 Oct 2 19:32:20.202000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[2291]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001976b0 a2=3c a3=c items=0 ppid=2280 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:20.202000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161306232313536376437343933373266343232376636666639663265 Oct 2 19:32:20.202000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.202000 audit: BPF prog-id=89 op=LOAD Oct 2 19:32:20.202000 audit[2291]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001979d8 a2=78 a3=c000298610 items=0 ppid=2280 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:20.202000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161306232313536376437343933373266343232376636666639663265 Oct 2 19:32:20.229000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.229000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.229000 audit[2291]: AVC avc: denied { perfmon } 
for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.229000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.229000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.229000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.229000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.229000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.229000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.229000 audit: BPF prog-id=90 op=LOAD Oct 2 19:32:20.229000 audit[2291]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c000197770 a2=78 a3=c000327248 items=0 ppid=2280 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:20.229000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161306232313536376437343933373266343232376636666639663265 Oct 2 19:32:20.290000 audit: BPF prog-id=90 op=UNLOAD Oct 2 19:32:20.290000 audit: BPF prog-id=89 op=UNLOAD Oct 2 19:32:20.290000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.290000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.290000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.290000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.290000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.290000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.290000 audit[2291]: AVC avc: denied { perfmon } 
for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.290000 audit[2291]: AVC avc: denied { perfmon } for pid=2291 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.290000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.290000 audit[2291]: AVC avc: denied { bpf } for pid=2291 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:20.290000 audit: BPF prog-id=91 op=LOAD Oct 2 19:32:20.290000 audit[2291]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c000197c30 a2=78 a3=c000327658 items=0 ppid=2280 pid=2291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:20.290000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161306232313536376437343933373266343232376636666639663265 Oct 2 19:32:20.464598 env[1136]: time="2023-10-02T19:32:20.464309260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-69b677f97c-k652f,Uid:5025e48c-ccfc-46e1-aefd-6f641169e4e7,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c\"" Oct 2 19:32:20.473228 kubelet[1523]: E1002 19:32:20.473045 1523 gcpcredential.go:74] while reading 'google-dockercfg-url' metadata: http status code: 404 while fetching url http://metadata.google.internal./computeMetadata/v1/instance/attributes/google-dockercfg-url Oct 2 19:32:20.473607 env[1136]: time="2023-10-02T19:32:20.473562419Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\"" Oct 2 19:32:20.928553 kubelet[1523]: E1002 19:32:20.928492 1523 secret.go:192] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition Oct 2 19:32:20.928849 kubelet[1523]: E1002 19:32:20.928628 1523 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-ipsec-secrets podName:8a6f1f0b-4874-4671-832f-7b2fb2379d26 nodeName:}" failed. No retries permitted until 2023-10-02 19:32:21.428595852 +0000 UTC m=+217.650230672 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-ipsec-secrets") pod "cilium-pq8vz" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26") : failed to sync secret cache: timed out waiting for the condition Oct 2 19:32:20.946162 systemd[1]: run-containerd-runc-k8s.io-aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c-runc.Jv4Gnu.mount: Deactivated successfully. 
Oct 2 19:32:21.042099 kubelet[1523]: E1002 19:32:21.042026 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:21.374530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2209041777.mount: Deactivated successfully. Oct 2 19:32:21.639769 env[1136]: time="2023-10-02T19:32:21.639251864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pq8vz,Uid:8a6f1f0b-4874-4671-832f-7b2fb2379d26,Namespace:kube-system,Attempt:0,}" Oct 2 19:32:21.685262 env[1136]: time="2023-10-02T19:32:21.685159803Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Oct 2 19:32:21.685262 env[1136]: time="2023-10-02T19:32:21.685230568Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Oct 2 19:32:21.685838 env[1136]: time="2023-10-02T19:32:21.685731476Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Oct 2 19:32:21.686365 env[1136]: time="2023-10-02T19:32:21.686287568Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5 pid=2324 runtime=io.containerd.runc.v2 Oct 2 19:32:21.718148 systemd[1]: Started cri-containerd-2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5.scope. Oct 2 19:32:21.744000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.745000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.745000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 
tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.745000 audit: BPF prog-id=92 op=LOAD Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c0001bdc48 a2=10 a3=1c items=0 ppid=2324 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:21.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373431353561663038626539363731626161373938363964326232 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c0001bd6b0 a2=3c a3=c items=0 ppid=2324 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:21.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373431353561663038626539363731626161373938363964326232 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 
audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit: BPF prog-id=93 op=LOAD Oct 2 19:32:21.747000 audit[2334]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bd9d8 a2=78 a3=c0000a5020 items=0 ppid=2324 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:21.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373431353561663038626539363731626161373938363964326232 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit: BPF prog-id=94 op=LOAD Oct 2 19:32:21.747000 audit[2334]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c0001bd770 a2=78 a3=c0000a5068 items=0 ppid=2324 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:21.747000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373431353561663038626539363731626161373938363964326232 Oct 2 19:32:21.747000 audit: BPF prog-id=94 op=UNLOAD Oct 2 19:32:21.747000 audit: BPF prog-id=93 op=UNLOAD Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { perfmon } for pid=2334 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit[2334]: AVC avc: denied { bpf } for pid=2334 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:21.747000 audit: BPF prog-id=95 op=LOAD Oct 2 19:32:21.747000 audit[2334]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c0001bdc30 a2=78 a3=c0000a5478 items=0 ppid=2324 pid=2334 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:21.747000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3239373431353561663038626539363731626161373938363964326232 Oct 2 19:32:21.778406 env[1136]: time="2023-10-02T19:32:21.778343473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-pq8vz,Uid:8a6f1f0b-4874-4671-832f-7b2fb2379d26,Namespace:kube-system,Attempt:0,} returns sandbox id \"2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5\"" Oct 2 19:32:21.782987 env[1136]: 
time="2023-10-02T19:32:21.782930250Z" level=info msg="CreateContainer within sandbox \"2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Oct 2 19:32:21.821248 env[1136]: time="2023-10-02T19:32:21.821190297Z" level=info msg="CreateContainer within sandbox \"2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e\"" Oct 2 19:32:21.838799 env[1136]: time="2023-10-02T19:32:21.838734664Z" level=info msg="StartContainer for \"3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e\"" Oct 2 19:32:21.866551 systemd[1]: Started cri-containerd-3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e.scope. Oct 2 19:32:21.889301 systemd[1]: cri-containerd-3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e.scope: Deactivated successfully. Oct 2 19:32:21.953006 env[1136]: time="2023-10-02T19:32:21.952942074Z" level=info msg="shim disconnected" id=3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e Oct 2 19:32:21.953463 env[1136]: time="2023-10-02T19:32:21.953423638Z" level=warning msg="cleaning up after shim disconnected" id=3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e namespace=k8s.io Oct 2 19:32:21.953602 env[1136]: time="2023-10-02T19:32:21.953579369Z" level=info msg="cleaning up dead shim" Oct 2 19:32:21.966877 env[1136]: time="2023-10-02T19:32:21.966779133Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2382 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:32:21Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:32:21.967216 env[1136]: time="2023-10-02T19:32:21.967137409Z" level=error msg="copy shim log" error="read /proc/self/fd/50: file already closed" Oct 2 19:32:21.967558 env[1136]: time="2023-10-02T19:32:21.967508767Z" level=error msg="Failed to pipe stderr of container \"3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e\"" error="reading from a closed fifo" Oct 2 19:32:21.972974 env[1136]: time="2023-10-02T19:32:21.972896842Z" level=error msg="Failed to pipe stdout of container \"3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e\"" error="reading from a closed fifo" Oct 2 19:32:21.976564 env[1136]: time="2023-10-02T19:32:21.976484405Z" level=error msg="StartContainer for \"3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:32:21.977059 kubelet[1523]: E1002 19:32:21.977012 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e" Oct 2 19:32:21.977341 
kubelet[1523]: E1002 19:32:21.977318 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:32:21.977341 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:32:21.977341 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 19:32:21.977341 kubelet[1523]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ttd5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-pq8vz_kube-system(8a6f1f0b-4874-4671-832f-7b2fb2379d26): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:32:21.977689 kubelet[1523]: E1002 19:32:21.977641 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pq8vz" podUID=8a6f1f0b-4874-4671-832f-7b2fb2379d26 Oct 2 19:32:22.042626 kubelet[1523]: E1002 19:32:22.042568 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:22.512630 env[1136]: time="2023-10-02T19:32:22.512542616Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:22.519301 env[1136]: time="2023-10-02T19:32:22.519234811Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:22.523125 env[1136]: time="2023-10-02T19:32:22.523060574Z" level=info msg="ImageUpdate event 
&ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Oct 2 19:32:22.524111 env[1136]: time="2023-10-02T19:32:22.524031665Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1\" returns image reference \"sha256:b7eda471b44d1665b27a56412a479c6baff49461eb4cd7e9886be66da63fd36e\"" Oct 2 19:32:22.527845 env[1136]: time="2023-10-02T19:32:22.527755185Z" level=info msg="CreateContainer within sandbox \"aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Oct 2 19:32:22.558860 env[1136]: time="2023-10-02T19:32:22.558716431Z" level=info msg="CreateContainer within sandbox \"aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\"" Oct 2 19:32:22.559991 env[1136]: time="2023-10-02T19:32:22.559934689Z" level=info msg="StartContainer for \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\"" Oct 2 19:32:22.603631 systemd[1]: Started cri-containerd-37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34.scope. Oct 2 19:32:22.615894 env[1136]: time="2023-10-02T19:32:22.615839654Z" level=info msg="CreateContainer within sandbox \"2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Oct 2 19:32:22.634000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.634000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.634000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.634000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.634000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.634000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.634000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.634000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.634000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.634000 
audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.634000 audit: BPF prog-id=96 op=LOAD Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: SYSCALL arch=c000003e syscall=321 success=yes exit=0 a0=f a1=c00014dc48 a2=10 a3=1c items=0 ppid=2280 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:22.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337613230616331616439316461333764376665336532643636306161 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=0 a1=c00014d6b0 a2=3c a3=8 items=0 ppid=2280 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:22.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337613230616331616439316461333764376665336532643636306161 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 
scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit: BPF prog-id=97 op=LOAD Oct 2 19:32:22.636000 audit[2403]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014d9d8 a2=78 a3=c0003aebf0 items=0 ppid=2280 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:22.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337613230616331616439316461333764376665336532643636306161 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit: BPF prog-id=98 op=LOAD Oct 2 19:32:22.636000 audit[2403]: SYSCALL arch=c000003e syscall=321 success=yes exit=18 a0=5 a1=c00014d770 a2=78 a3=c0003aec38 items=0 ppid=2280 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:22.636000 
audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337613230616331616439316461333764376665336532643636306161 Oct 2 19:32:22.636000 audit: BPF prog-id=98 op=UNLOAD Oct 2 19:32:22.636000 audit: BPF prog-id=97 op=UNLOAD Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { perfmon } for pid=2403 comm="runc" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit[2403]: AVC avc: denied { bpf } for pid=2403 comm="runc" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=0 Oct 2 19:32:22.636000 audit: BPF prog-id=99 op=LOAD Oct 2 19:32:22.636000 audit[2403]: SYSCALL arch=c000003e syscall=321 success=yes exit=16 a0=5 a1=c00014dc30 a2=78 a3=c0003af048 items=0 ppid=2280 pid=2403 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/run/torcx/unpack/docker/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Oct 2 19:32:22.636000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337613230616331616439316461333764376665336532643636306161 Oct 2 19:32:22.659975 env[1136]: time="2023-10-02T19:32:22.659888400Z" level=info msg="CreateContainer within sandbox \"2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17\"" Oct 2 19:32:22.662195 
env[1136]: time="2023-10-02T19:32:22.662146548Z" level=info msg="StartContainer for \"46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17\"" Oct 2 19:32:22.674287 env[1136]: time="2023-10-02T19:32:22.674195953Z" level=info msg="StartContainer for \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\" returns successfully" Oct 2 19:32:22.702029 systemd[1]: Started cri-containerd-46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17.scope. Oct 2 19:32:22.730225 systemd[1]: cri-containerd-46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17.scope: Deactivated successfully. Oct 2 19:32:22.730000 audit[2414]: AVC avc: denied { map_create } for pid=2414 comm="cilium-operator" scontext=system_u:system_r:svirt_lxc_net_t:s0:c127,c729 tcontext=system_u:system_r:svirt_lxc_net_t:s0:c127,c729 tclass=bpf permissive=0 Oct 2 19:32:22.730000 audit[2414]: SYSCALL arch=c000003e syscall=321 success=no exit=-13 a0=0 a1=c00061f7d0 a2=48 a3=c00061f7c0 items=0 ppid=2280 pid=2414 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="cilium-operator" exe="/usr/bin/cilium-operator-generic" subj=system_u:system_r:svirt_lxc_net_t:s0:c127,c729 key=(null) Oct 2 19:32:22.730000 audit: PROCTITLE proctitle=63696C69756D2D6F70657261746F722D67656E65726963002D2D636F6E6669672D6469723D2F746D702F63696C69756D2F636F6E6669672D6D6170002D2D64656275673D66616C7365 Oct 2 19:32:22.857713 env[1136]: time="2023-10-02T19:32:22.857532387Z" level=info msg="shim disconnected" id=46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17 Oct 2 19:32:22.858118 env[1136]: time="2023-10-02T19:32:22.857728890Z" level=warning msg="cleaning up after shim disconnected" id=46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17 namespace=k8s.io Oct 2 19:32:22.858118 env[1136]: time="2023-10-02T19:32:22.857764910Z" level=info msg="cleaning up dead shim" Oct 2 19:32:22.881497 env[1136]: time="2023-10-02T19:32:22.881403417Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2457 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:32:22Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:32:22.881874 env[1136]: time="2023-10-02T19:32:22.881780070Z" level=error msg="copy shim log" error="read /proc/self/fd/57: file already closed" Oct 2 19:32:22.882358 env[1136]: time="2023-10-02T19:32:22.882304991Z" level=error msg="Failed to pipe stderr of container \"46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17\"" error="reading from a closed fifo" Oct 2 19:32:22.882645 env[1136]: time="2023-10-02T19:32:22.882598901Z" level=error msg="Failed to pipe stdout of container \"46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17\"" error="reading from a closed fifo" Oct 2 19:32:22.885906 env[1136]: time="2023-10-02T19:32:22.885769658Z" level=error msg="StartContainer for \"46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:32:22.886947 kubelet[1523]: E1002 19:32:22.886293 1523 
remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17" Oct 2 19:32:22.886947 kubelet[1523]: E1002 19:32:22.886442 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:32:22.886947 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:32:22.886947 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 19:32:22.887293 kubelet[1523]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ttd5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-pq8vz_kube-system(8a6f1f0b-4874-4671-832f-7b2fb2379d26): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:32:22.887437 kubelet[1523]: E1002 19:32:22.886513 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pq8vz" podUID=8a6f1f0b-4874-4671-832f-7b2fb2379d26 Oct 2 19:32:22.945779 systemd[1]: run-containerd-runc-k8s.io-37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34-runc.3KhZq6.mount: Deactivated successfully. 
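Each time the mount-cgroup init container fails to start, the kubelet dumps the full Container object inline (the long &Container{...} blocks above). As a reading aid only, here is the same spec reconstructed with the Kubernetes Go API types; the field values are copied from those dumps, the surrounding program is mine and is not the Cilium manifest itself.

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

func main() {
	// Values copied from the kubelet's init-container dump above; all
	// other fields are left at their zero values.
	mountCgroup := corev1.Container{
		Name:  "mount-cgroup",
		Image: "quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b",
		Command: []string{"sh", "-ec",
			`cp /usr/bin/cilium-mount /hostbin/cilium-mount;
nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT;
rm /hostbin/cilium-mount`},
		Env: []corev1.EnvVar{
			{Name: "CGROUP_ROOT", Value: "/run/cilium/cgroupv2"},
			{Name: "BIN_PATH", Value: "/opt/cni/bin"},
		},
		SecurityContext: &corev1.SecurityContext{
			Capabilities: &corev1.Capabilities{
				Add:  []corev1.Capability{"SYS_ADMIN", "SYS_CHROOT", "SYS_PTRACE"},
				Drop: []corev1.Capability{"ALL"},
			},
			SELinuxOptions: &corev1.SELinuxOptions{Type: "spc_t", Level: "s0"},
		},
	}
	fmt.Printf("%+v\n", mountCgroup)
}

The requested SELinux type spc_t is what runc attempts to apply by writing /proc/self/attr/keycreate; on this host that write returns EINVAL, which is the "write /proc/self/attr/keycreate: invalid argument" behind every failed StartContainer above.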
Oct 2 19:32:23.042871 kubelet[1523]: E1002 19:32:23.042793 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:23.617023 kubelet[1523]: I1002 19:32:23.616974 1523 scope.go:115] "RemoveContainer" containerID="3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e" Oct 2 19:32:23.617558 kubelet[1523]: I1002 19:32:23.617520 1523 scope.go:115] "RemoveContainer" containerID="3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e" Oct 2 19:32:23.620203 env[1136]: time="2023-10-02T19:32:23.619957396Z" level=info msg="RemoveContainer for \"3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e\"" Oct 2 19:32:23.620595 env[1136]: time="2023-10-02T19:32:23.620534445Z" level=info msg="RemoveContainer for \"3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e\"" Oct 2 19:32:23.621028 env[1136]: time="2023-10-02T19:32:23.620957199Z" level=error msg="RemoveContainer for \"3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e\" failed" error="failed to set removing state for container \"3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e\": container is already in removing state" Oct 2 19:32:23.621313 kubelet[1523]: E1002 19:32:23.621266 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e\": container is already in removing state" containerID="3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e" Oct 2 19:32:23.621451 kubelet[1523]: E1002 19:32:23.621332 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e": container is already in removing state; Skipping pod "cilium-pq8vz_kube-system(8a6f1f0b-4874-4671-832f-7b2fb2379d26)" Oct 2 19:32:23.622126 kubelet[1523]: E1002 19:32:23.621904 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-pq8vz_kube-system(8a6f1f0b-4874-4671-832f-7b2fb2379d26)\"" pod="kube-system/cilium-pq8vz" podUID=8a6f1f0b-4874-4671-832f-7b2fb2379d26 Oct 2 19:32:23.628561 env[1136]: time="2023-10-02T19:32:23.628489231Z" level=info msg="RemoveContainer for \"3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e\" returns successfully" Oct 2 19:32:24.043368 kubelet[1523]: E1002 19:32:24.043212 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:24.621287 kubelet[1523]: E1002 19:32:24.621250 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 10s restarting failed container=mount-cgroup pod=cilium-pq8vz_kube-system(8a6f1f0b-4874-4671-832f-7b2fb2379d26)\"" pod="kube-system/cilium-pq8vz" podUID=8a6f1f0b-4874-4671-832f-7b2fb2379d26 Oct 2 19:32:24.849841 kubelet[1523]: E1002 19:32:24.849776 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:25.030171 kubelet[1523]: E1002 19:32:25.030037 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:25.044416 kubelet[1523]: E1002 19:32:25.044353 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:25.059372 kubelet[1523]: W1002 19:32:25.059322 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a6f1f0b_4874_4671_832f_7b2fb2379d26.slice/cri-containerd-3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e.scope WatchSource:0}: container "3c06f61e185d7d3beee091ecc8b80b0aa3bd70759488c03b3d12a0250479544e" in namespace "k8s.io": not found Oct 2 19:32:26.045599 kubelet[1523]: E1002 19:32:26.045523 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:27.046505 kubelet[1523]: E1002 19:32:27.046428 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:28.046721 kubelet[1523]: E1002 19:32:28.046653 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:28.174563 kubelet[1523]: W1002 19:32:28.174427 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a6f1f0b_4874_4671_832f_7b2fb2379d26.slice/cri-containerd-46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17.scope WatchSource:0}: task 46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17 not found: not found Oct 2 19:32:29.046885 kubelet[1523]: E1002 19:32:29.046822 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:30.030942 kubelet[1523]: E1002 19:32:30.030861 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:30.047305 kubelet[1523]: E1002 19:32:30.047232 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:31.047750 kubelet[1523]: E1002 19:32:31.047677 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:32.047980 kubelet[1523]: E1002 19:32:32.047892 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:33.048762 kubelet[1523]: E1002 19:32:33.048690 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:34.049285 kubelet[1523]: E1002 19:32:34.049219 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:35.031748 kubelet[1523]: E1002 19:32:35.031564 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:35.050247 kubelet[1523]: E1002 19:32:35.050181 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:36.051183 kubelet[1523]: E1002 19:32:36.051054 1523 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:37.052388 kubelet[1523]: E1002 19:32:37.052316 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:38.053353 kubelet[1523]: E1002 19:32:38.053263 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:39.053508 kubelet[1523]: E1002 19:32:39.053430 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:39.148953 env[1136]: time="2023-10-02T19:32:39.148707065Z" level=info msg="CreateContainer within sandbox \"2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:2,}" Oct 2 19:32:39.168389 env[1136]: time="2023-10-02T19:32:39.168322476Z" level=info msg="CreateContainer within sandbox \"2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:2,} returns container id \"3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed\"" Oct 2 19:32:39.169188 env[1136]: time="2023-10-02T19:32:39.169034147Z" level=info msg="StartContainer for \"3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed\"" Oct 2 19:32:39.201801 systemd[1]: Started cri-containerd-3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed.scope. Oct 2 19:32:39.215344 systemd[1]: cri-containerd-3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed.scope: Deactivated successfully. Oct 2 19:32:39.221557 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed-rootfs.mount: Deactivated successfully. 
Oct 2 19:32:39.238840 env[1136]: time="2023-10-02T19:32:39.238733766Z" level=info msg="shim disconnected" id=3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed Oct 2 19:32:39.238840 env[1136]: time="2023-10-02T19:32:39.238833204Z" level=warning msg="cleaning up after shim disconnected" id=3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed namespace=k8s.io Oct 2 19:32:39.239230 env[1136]: time="2023-10-02T19:32:39.238883833Z" level=info msg="cleaning up dead shim" Oct 2 19:32:39.251951 env[1136]: time="2023-10-02T19:32:39.251882827Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:32:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2495 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:32:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:32:39.252571 env[1136]: time="2023-10-02T19:32:39.252486439Z" level=error msg="copy shim log" error="read /proc/self/fd/56: file already closed" Oct 2 19:32:39.255977 env[1136]: time="2023-10-02T19:32:39.255898503Z" level=error msg="Failed to pipe stdout of container \"3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed\"" error="reading from a closed fifo" Oct 2 19:32:39.256982 env[1136]: time="2023-10-02T19:32:39.256920932Z" level=error msg="Failed to pipe stderr of container \"3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed\"" error="reading from a closed fifo" Oct 2 19:32:39.259732 env[1136]: time="2023-10-02T19:32:39.259659517Z" level=error msg="StartContainer for \"3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:32:39.260039 kubelet[1523]: E1002 19:32:39.259985 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed" Oct 2 19:32:39.260213 kubelet[1523]: E1002 19:32:39.260127 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:32:39.260213 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:32:39.260213 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 19:32:39.260213 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ttd5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-pq8vz_kube-system(8a6f1f0b-4874-4671-832f-7b2fb2379d26): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:32:39.260497 kubelet[1523]: E1002 19:32:39.260183 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pq8vz" podUID=8a6f1f0b-4874-4671-832f-7b2fb2379d26 Oct 2 19:32:39.653558 kubelet[1523]: I1002 19:32:39.653518 1523 scope.go:115] "RemoveContainer" containerID="46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17" Oct 2 19:32:39.654076 kubelet[1523]: I1002 19:32:39.654027 1523 scope.go:115] "RemoveContainer" containerID="46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17" Oct 2 19:32:39.655746 env[1136]: time="2023-10-02T19:32:39.655683483Z" level=info msg="RemoveContainer for \"46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17\"" Oct 2 19:32:39.656109 env[1136]: time="2023-10-02T19:32:39.655991321Z" level=info msg="RemoveContainer for \"46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17\"" Oct 2 19:32:39.656628 env[1136]: time="2023-10-02T19:32:39.656552943Z" level=error msg="RemoveContainer for \"46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17\" failed" error="failed to set removing state for container \"46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17\": container is already in removing state" Oct 2 19:32:39.656874 kubelet[1523]: E1002 19:32:39.656771 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17\": container is already in removing state" 
containerID="46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17" Oct 2 19:32:39.656874 kubelet[1523]: I1002 19:32:39.656844 1523 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17} err="rpc error: code = Unknown desc = failed to set removing state for container \"46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17\": container is already in removing state" Oct 2 19:32:39.663153 env[1136]: time="2023-10-02T19:32:39.663062821Z" level=info msg="RemoveContainer for \"46e7e1dceb6b1f909b7e2cb5a73c75876fc0daf01677c58c29c947e19ddd2a17\" returns successfully" Oct 2 19:32:39.663958 kubelet[1523]: E1002 19:32:39.663913 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-pq8vz_kube-system(8a6f1f0b-4874-4671-832f-7b2fb2379d26)\"" pod="kube-system/cilium-pq8vz" podUID=8a6f1f0b-4874-4671-832f-7b2fb2379d26 Oct 2 19:32:40.033111 kubelet[1523]: E1002 19:32:40.032961 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:40.054379 kubelet[1523]: E1002 19:32:40.054308 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:41.054582 kubelet[1523]: E1002 19:32:41.054481 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:42.054842 kubelet[1523]: E1002 19:32:42.054762 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:42.344480 kubelet[1523]: W1002 19:32:42.344417 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a6f1f0b_4874_4671_832f_7b2fb2379d26.slice/cri-containerd-3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed.scope WatchSource:0}: task 3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed not found: not found Oct 2 19:32:43.055062 kubelet[1523]: E1002 19:32:43.054989 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:44.055633 kubelet[1523]: E1002 19:32:44.055556 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:44.849330 kubelet[1523]: E1002 19:32:44.849257 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:44.873065 env[1136]: time="2023-10-02T19:32:44.872998762Z" level=info msg="StopPodSandbox for \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\"" Oct 2 19:32:44.873565 env[1136]: time="2023-10-02T19:32:44.873121432Z" level=info msg="TearDown network for sandbox \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\" successfully" Oct 2 19:32:44.873565 env[1136]: time="2023-10-02T19:32:44.873170606Z" level=info msg="StopPodSandbox for \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\" returns successfully" Oct 2 19:32:44.874120 env[1136]: time="2023-10-02T19:32:44.874076206Z" level=info msg="RemovePodSandbox for 
\"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\"" Oct 2 19:32:44.874287 env[1136]: time="2023-10-02T19:32:44.874121763Z" level=info msg="Forcibly stopping sandbox \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\"" Oct 2 19:32:44.874287 env[1136]: time="2023-10-02T19:32:44.874226703Z" level=info msg="TearDown network for sandbox \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\" successfully" Oct 2 19:32:44.878964 env[1136]: time="2023-10-02T19:32:44.878909876Z" level=info msg="RemovePodSandbox \"135335c01dd8b946051fbe47692df3242599aede0555fc5606990bf83e3ca4ae\" returns successfully" Oct 2 19:32:44.879678 env[1136]: time="2023-10-02T19:32:44.879626239Z" level=info msg="StopPodSandbox for \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\"" Oct 2 19:32:44.880235 env[1136]: time="2023-10-02T19:32:44.880150854Z" level=info msg="TearDown network for sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" successfully" Oct 2 19:32:44.880235 env[1136]: time="2023-10-02T19:32:44.880226095Z" level=info msg="StopPodSandbox for \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" returns successfully" Oct 2 19:32:44.880671 env[1136]: time="2023-10-02T19:32:44.880628614Z" level=info msg="RemovePodSandbox for \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\"" Oct 2 19:32:44.880787 env[1136]: time="2023-10-02T19:32:44.880708819Z" level=info msg="Forcibly stopping sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\"" Oct 2 19:32:44.880874 env[1136]: time="2023-10-02T19:32:44.880852378Z" level=info msg="TearDown network for sandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" successfully" Oct 2 19:32:44.885044 env[1136]: time="2023-10-02T19:32:44.884997177Z" level=info msg="RemovePodSandbox \"4d304ad0e45f6a19ba06486c3b5ad3a309129244ac3dc25a33a3524fbe919097\" returns successfully" Oct 2 19:32:45.034569 kubelet[1523]: E1002 19:32:45.034504 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:45.056309 kubelet[1523]: E1002 19:32:45.056246 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:46.056778 kubelet[1523]: E1002 19:32:46.056712 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:47.057865 kubelet[1523]: E1002 19:32:47.057792 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:48.058445 kubelet[1523]: E1002 19:32:48.058371 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:49.059649 kubelet[1523]: E1002 19:32:49.059564 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:50.035592 kubelet[1523]: E1002 19:32:50.035534 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:50.060084 kubelet[1523]: E1002 19:32:50.060017 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 
19:32:51.061297 kubelet[1523]: E1002 19:32:51.061207 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:52.061619 kubelet[1523]: E1002 19:32:52.061548 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:53.061733 kubelet[1523]: E1002 19:32:53.061660 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:53.145230 kubelet[1523]: E1002 19:32:53.145185 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 20s restarting failed container=mount-cgroup pod=cilium-pq8vz_kube-system(8a6f1f0b-4874-4671-832f-7b2fb2379d26)\"" pod="kube-system/cilium-pq8vz" podUID=8a6f1f0b-4874-4671-832f-7b2fb2379d26 Oct 2 19:32:54.062555 kubelet[1523]: E1002 19:32:54.062481 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:55.037389 kubelet[1523]: E1002 19:32:55.037337 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:32:55.062719 kubelet[1523]: E1002 19:32:55.062658 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:56.063706 kubelet[1523]: E1002 19:32:56.063503 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:57.064600 kubelet[1523]: E1002 19:32:57.064529 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:58.064833 kubelet[1523]: E1002 19:32:58.064740 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:32:59.065707 kubelet[1523]: E1002 19:32:59.065633 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:00.038792 kubelet[1523]: E1002 19:33:00.038751 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:00.066132 kubelet[1523]: E1002 19:33:00.066066 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:01.066665 kubelet[1523]: E1002 19:33:01.066589 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:02.067106 kubelet[1523]: E1002 19:33:02.067042 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:03.067843 kubelet[1523]: E1002 19:33:03.067745 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:04.068882 kubelet[1523]: E1002 19:33:04.068827 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:04.849935 kubelet[1523]: E1002 19:33:04.849867 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Oct 2 19:33:05.039732 kubelet[1523]: E1002 19:33:05.039687 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:05.069973 kubelet[1523]: E1002 19:33:05.069794 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:06.071044 kubelet[1523]: E1002 19:33:06.070986 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:06.148028 env[1136]: time="2023-10-02T19:33:06.147963301Z" level=info msg="CreateContainer within sandbox \"2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:3,}" Oct 2 19:33:06.166478 env[1136]: time="2023-10-02T19:33:06.166391124Z" level=info msg="CreateContainer within sandbox \"2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5\" for &ContainerMetadata{Name:mount-cgroup,Attempt:3,} returns container id \"335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902\"" Oct 2 19:33:06.167660 env[1136]: time="2023-10-02T19:33:06.167583243Z" level=info msg="StartContainer for \"335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902\"" Oct 2 19:33:06.200015 systemd[1]: Started cri-containerd-335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902.scope. Oct 2 19:33:06.214292 systemd[1]: cri-containerd-335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902.scope: Deactivated successfully. Oct 2 19:33:06.221081 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902-rootfs.mount: Deactivated successfully. 
Oct 2 19:33:06.230359 env[1136]: time="2023-10-02T19:33:06.230284236Z" level=info msg="shim disconnected" id=335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902 Oct 2 19:33:06.230359 env[1136]: time="2023-10-02T19:33:06.230362770Z" level=warning msg="cleaning up after shim disconnected" id=335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902 namespace=k8s.io Oct 2 19:33:06.230359 env[1136]: time="2023-10-02T19:33:06.230377369Z" level=info msg="cleaning up dead shim" Oct 2 19:33:06.243223 env[1136]: time="2023-10-02T19:33:06.243128789Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:33:06Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2540 runtime=io.containerd.runc.v2\ntime=\"2023-10-02T19:33:06Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Oct 2 19:33:06.243585 env[1136]: time="2023-10-02T19:33:06.243503134Z" level=error msg="copy shim log" error="read /proc/self/fd/51: file already closed" Oct 2 19:33:06.247062 env[1136]: time="2023-10-02T19:33:06.246990833Z" level=error msg="Failed to pipe stdout of container \"335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902\"" error="reading from a closed fifo" Oct 2 19:33:06.247062 env[1136]: time="2023-10-02T19:33:06.246990835Z" level=error msg="Failed to pipe stderr of container \"335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902\"" error="reading from a closed fifo" Oct 2 19:33:06.249760 env[1136]: time="2023-10-02T19:33:06.249686204Z" level=error msg="StartContainer for \"335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Oct 2 19:33:06.250110 kubelet[1523]: E1002 19:33:06.250055 1523 remote_runtime.go:474] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902" Oct 2 19:33:06.250287 kubelet[1523]: E1002 19:33:06.250198 1523 kuberuntime_manager.go:862] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Oct 2 19:33:06.250287 kubelet[1523]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Oct 2 19:33:06.250287 kubelet[1523]: rm /hostbin/cilium-mount Oct 2 19:33:06.250287 kubelet[1523]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-ttd5d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,} start failed in pod cilium-pq8vz_kube-system(8a6f1f0b-4874-4671-832f-7b2fb2379d26): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Oct 2 19:33:06.250568 kubelet[1523]: E1002 19:33:06.250255 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-pq8vz" podUID=8a6f1f0b-4874-4671-832f-7b2fb2379d26 Oct 2 19:33:06.711151 kubelet[1523]: I1002 19:33:06.711117 1523 scope.go:115] "RemoveContainer" containerID="3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed" Oct 2 19:33:06.711783 kubelet[1523]: I1002 19:33:06.711737 1523 scope.go:115] "RemoveContainer" containerID="3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed" Oct 2 19:33:06.713048 env[1136]: time="2023-10-02T19:33:06.712911400Z" level=info msg="RemoveContainer for \"3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed\"" Oct 2 19:33:06.714177 env[1136]: time="2023-10-02T19:33:06.714114638Z" level=info msg="RemoveContainer for \"3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed\"" Oct 2 19:33:06.714333 env[1136]: time="2023-10-02T19:33:06.714284222Z" level=error msg="RemoveContainer for \"3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed\" failed" error="failed to set removing state for container \"3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed\": container is already in removing state" Oct 2 19:33:06.714624 kubelet[1523]: E1002 19:33:06.714597 1523 remote_runtime.go:531] "RemoveContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to set removing state for container \"3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed\": container is already in removing state" 
containerID="3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed" Oct 2 19:33:06.714742 kubelet[1523]: E1002 19:33:06.714666 1523 kuberuntime_container.go:777] failed to remove pod init container "mount-cgroup": rpc error: code = Unknown desc = failed to set removing state for container "3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed": container is already in removing state; Skipping pod "cilium-pq8vz_kube-system(8a6f1f0b-4874-4671-832f-7b2fb2379d26)" Oct 2 19:33:06.715183 kubelet[1523]: E1002 19:33:06.715158 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-pq8vz_kube-system(8a6f1f0b-4874-4671-832f-7b2fb2379d26)\"" pod="kube-system/cilium-pq8vz" podUID=8a6f1f0b-4874-4671-832f-7b2fb2379d26 Oct 2 19:33:06.717851 env[1136]: time="2023-10-02T19:33:06.717762449Z" level=info msg="RemoveContainer for \"3919cca04477e2b77b62b30ea9e0a1a7a19d508fd7ef7fbe52b0289e87ececed\" returns successfully" Oct 2 19:33:07.072064 kubelet[1523]: E1002 19:33:07.071995 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:08.073123 kubelet[1523]: E1002 19:33:08.073047 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:09.073936 kubelet[1523]: E1002 19:33:09.073850 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:09.336719 kubelet[1523]: W1002 19:33:09.336558 1523 manager.go:1174] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8a6f1f0b_4874_4671_832f_7b2fb2379d26.slice/cri-containerd-335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902.scope WatchSource:0}: task 335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902 not found: not found Oct 2 19:33:10.041271 kubelet[1523]: E1002 19:33:10.041213 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:10.074797 kubelet[1523]: E1002 19:33:10.074714 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:11.076018 kubelet[1523]: E1002 19:33:11.075940 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:12.077210 kubelet[1523]: E1002 19:33:12.077141 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:13.077691 kubelet[1523]: E1002 19:33:13.077620 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:14.078834 kubelet[1523]: E1002 19:33:14.078712 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:15.042417 kubelet[1523]: E1002 19:33:15.042386 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:15.078997 kubelet[1523]: E1002 19:33:15.078934 1523 file_linux.go:61] "Unable to read config path" 
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:16.079571 kubelet[1523]: E1002 19:33:16.079491 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:17.080192 kubelet[1523]: E1002 19:33:17.080119 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:18.081141 kubelet[1523]: E1002 19:33:18.081066 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:19.081869 kubelet[1523]: E1002 19:33:19.081783 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:19.145838 kubelet[1523]: E1002 19:33:19.145270 1523 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with CrashLoopBackOff: \"back-off 40s restarting failed container=mount-cgroup pod=cilium-pq8vz_kube-system(8a6f1f0b-4874-4671-832f-7b2fb2379d26)\"" pod="kube-system/cilium-pq8vz" podUID=8a6f1f0b-4874-4671-832f-7b2fb2379d26 Oct 2 19:33:20.044202 kubelet[1523]: E1002 19:33:20.044164 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:20.083019 kubelet[1523]: E1002 19:33:20.082946 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:21.083254 kubelet[1523]: E1002 19:33:21.083171 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:21.212095 env[1136]: time="2023-10-02T19:33:21.212038269Z" level=info msg="StopPodSandbox for \"2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5\"" Oct 2 19:33:21.216047 env[1136]: time="2023-10-02T19:33:21.212124060Z" level=info msg="Container to stop \"335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:33:21.214654 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5-shm.mount: Deactivated successfully. Oct 2 19:33:21.226766 systemd[1]: cri-containerd-2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5.scope: Deactivated successfully. Oct 2 19:33:21.226000 audit: BPF prog-id=92 op=UNLOAD Oct 2 19:33:21.232674 kernel: kauditd_printk_skb: 164 callbacks suppressed Oct 2 19:33:21.232975 kernel: audit: type=1334 audit(1696275201.226:806): prog-id=92 op=UNLOAD Oct 2 19:33:21.240000 audit: BPF prog-id=95 op=UNLOAD Oct 2 19:33:21.249973 kernel: audit: type=1334 audit(1696275201.240:807): prog-id=95 op=UNLOAD Oct 2 19:33:21.266114 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5-rootfs.mount: Deactivated successfully. 
Oct 2 19:33:21.269798 env[1136]: time="2023-10-02T19:33:21.269736197Z" level=info msg="StopContainer for \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\" with timeout 30 (s)" Oct 2 19:33:21.270393 env[1136]: time="2023-10-02T19:33:21.270348302Z" level=info msg="Stop container \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\" with signal terminated" Oct 2 19:33:21.283692 env[1136]: time="2023-10-02T19:33:21.283618051Z" level=info msg="shim disconnected" id=2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5 Oct 2 19:33:21.284092 env[1136]: time="2023-10-02T19:33:21.284034833Z" level=warning msg="cleaning up after shim disconnected" id=2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5 namespace=k8s.io Oct 2 19:33:21.284092 env[1136]: time="2023-10-02T19:33:21.284063964Z" level=info msg="cleaning up dead shim" Oct 2 19:33:21.297058 kernel: audit: type=1334 audit(1696275201.288:808): prog-id=96 op=UNLOAD Oct 2 19:33:21.288000 audit: BPF prog-id=96 op=UNLOAD Oct 2 19:33:21.288551 systemd[1]: cri-containerd-37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34.scope: Deactivated successfully. Oct 2 19:33:21.298000 audit: BPF prog-id=99 op=UNLOAD Oct 2 19:33:21.306666 env[1136]: time="2023-10-02T19:33:21.306264425Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:33:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2580 runtime=io.containerd.runc.v2\n" Oct 2 19:33:21.306786 env[1136]: time="2023-10-02T19:33:21.306695750Z" level=info msg="TearDown network for sandbox \"2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5\" successfully" Oct 2 19:33:21.306786 env[1136]: time="2023-10-02T19:33:21.306731973Z" level=info msg="StopPodSandbox for \"2974155af08be9671baa79869d2b26ce812710336f5bc8268639d83ab18294d5\" returns successfully" Oct 2 19:33:21.306944 kernel: audit: type=1334 audit(1696275201.298:809): prog-id=99 op=UNLOAD Oct 2 19:33:21.330166 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34-rootfs.mount: Deactivated successfully. 
Oct 2 19:33:21.341145 env[1136]: time="2023-10-02T19:33:21.340967138Z" level=info msg="shim disconnected" id=37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34 Oct 2 19:33:21.341145 env[1136]: time="2023-10-02T19:33:21.341037141Z" level=warning msg="cleaning up after shim disconnected" id=37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34 namespace=k8s.io Oct 2 19:33:21.341145 env[1136]: time="2023-10-02T19:33:21.341052638Z" level=info msg="cleaning up dead shim" Oct 2 19:33:21.354693 env[1136]: time="2023-10-02T19:33:21.354622664Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:33:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2606 runtime=io.containerd.runc.v2\n" Oct 2 19:33:21.357614 env[1136]: time="2023-10-02T19:33:21.357551218Z" level=info msg="StopContainer for \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\" returns successfully" Oct 2 19:33:21.358545 env[1136]: time="2023-10-02T19:33:21.358500323Z" level=info msg="StopPodSandbox for \"aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c\"" Oct 2 19:33:21.364490 env[1136]: time="2023-10-02T19:33:21.358605121Z" level=info msg="Container to stop \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Oct 2 19:33:21.360905 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c-shm.mount: Deactivated successfully. Oct 2 19:33:21.371000 audit: BPF prog-id=88 op=UNLOAD Oct 2 19:33:21.372503 systemd[1]: cri-containerd-aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c.scope: Deactivated successfully. Oct 2 19:33:21.380837 kernel: audit: type=1334 audit(1696275201.371:810): prog-id=88 op=UNLOAD Oct 2 19:33:21.382000 audit: BPF prog-id=91 op=UNLOAD Oct 2 19:33:21.390843 kernel: audit: type=1334 audit(1696275201.382:811): prog-id=91 op=UNLOAD Oct 2 19:33:21.409521 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c-rootfs.mount: Deactivated successfully. 
Oct 2 19:33:21.417967 env[1136]: time="2023-10-02T19:33:21.417902130Z" level=info msg="shim disconnected" id=aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c Oct 2 19:33:21.417967 env[1136]: time="2023-10-02T19:33:21.417970767Z" level=warning msg="cleaning up after shim disconnected" id=aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c namespace=k8s.io Oct 2 19:33:21.418346 env[1136]: time="2023-10-02T19:33:21.417985560Z" level=info msg="cleaning up dead shim" Oct 2 19:33:21.431772 env[1136]: time="2023-10-02T19:33:21.431686460Z" level=warning msg="cleanup warnings time=\"2023-10-02T19:33:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2640 runtime=io.containerd.runc.v2\n" Oct 2 19:33:21.432273 env[1136]: time="2023-10-02T19:33:21.432225732Z" level=info msg="TearDown network for sandbox \"aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c\" successfully" Oct 2 19:33:21.432394 env[1136]: time="2023-10-02T19:33:21.432273721Z" level=info msg="StopPodSandbox for \"aa0b21567d749372f4227f6ff9f2eae5116fbb9829049bf27f6b6d4640ba169c\" returns successfully" Oct 2 19:33:21.466119 kubelet[1523]: I1002 19:33:21.465257 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-ipsec-secrets\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.466119 kubelet[1523]: I1002 19:33:21.465328 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-config-path\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.466119 kubelet[1523]: I1002 19:33:21.465365 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a6f1f0b-4874-4671-832f-7b2fb2379d26-clustermesh-secrets\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.466119 kubelet[1523]: I1002 19:33:21.465401 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cni-path\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.466119 kubelet[1523]: I1002 19:33:21.465431 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-host-proc-sys-net\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.466119 kubelet[1523]: I1002 19:33:21.465466 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-etc-cni-netd\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.466677 kubelet[1523]: I1002 19:33:21.465498 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a6f1f0b-4874-4671-832f-7b2fb2379d26-hubble-tls\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 
19:33:21.466677 kubelet[1523]: I1002 19:33:21.465526 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-lib-modules\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.466677 kubelet[1523]: I1002 19:33:21.465558 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-hostproc\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.466677 kubelet[1523]: I1002 19:33:21.465587 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-xtables-lock\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.466677 kubelet[1523]: I1002 19:33:21.465616 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-bpf-maps\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.466677 kubelet[1523]: I1002 19:33:21.465650 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-host-proc-sys-kernel\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.467033 kubelet[1523]: I1002 19:33:21.465686 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttd5d\" (UniqueName: \"kubernetes.io/projected/8a6f1f0b-4874-4671-832f-7b2fb2379d26-kube-api-access-ttd5d\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.467033 kubelet[1523]: I1002 19:33:21.465718 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-run\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.467033 kubelet[1523]: I1002 19:33:21.465749 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-cgroup\") pod \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\" (UID: \"8a6f1f0b-4874-4671-832f-7b2fb2379d26\") " Oct 2 19:33:21.467033 kubelet[1523]: I1002 19:33:21.465801 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:33:21.467033 kubelet[1523]: W1002 19:33:21.466028 1523 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/8a6f1f0b-4874-4671-832f-7b2fb2379d26/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:33:21.468424 kubelet[1523]: I1002 19:33:21.467452 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:33:21.468947 kubelet[1523]: I1002 19:33:21.468905 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:33:21.469096 kubelet[1523]: I1002 19:33:21.468971 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-hostproc" (OuterVolumeSpecName: "hostproc") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:33:21.469096 kubelet[1523]: I1002 19:33:21.469001 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:33:21.469096 kubelet[1523]: I1002 19:33:21.469028 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:33:21.469096 kubelet[1523]: I1002 19:33:21.469051 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:33:21.469366 kubelet[1523]: I1002 19:33:21.469331 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "cilium-run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:33:21.469642 kubelet[1523]: I1002 19:33:21.469602 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:33:21.469906 kubelet[1523]: I1002 19:33:21.469881 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:33:21.470447 kubelet[1523]: I1002 19:33:21.469801 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cni-path" (OuterVolumeSpecName: "cni-path") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Oct 2 19:33:21.475597 kubelet[1523]: I1002 19:33:21.475534 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:33:21.476043 kubelet[1523]: I1002 19:33:21.476010 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8a6f1f0b-4874-4671-832f-7b2fb2379d26-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Oct 2 19:33:21.478015 kubelet[1523]: I1002 19:33:21.477978 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a6f1f0b-4874-4671-832f-7b2fb2379d26-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:33:21.479361 kubelet[1523]: I1002 19:33:21.479322 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8a6f1f0b-4874-4671-832f-7b2fb2379d26-kube-api-access-ttd5d" (OuterVolumeSpecName: "kube-api-access-ttd5d") pod "8a6f1f0b-4874-4671-832f-7b2fb2379d26" (UID: "8a6f1f0b-4874-4671-832f-7b2fb2379d26"). InnerVolumeSpecName "kube-api-access-ttd5d". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:33:21.566862 kubelet[1523]: I1002 19:33:21.566782 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bd628\" (UniqueName: \"kubernetes.io/projected/5025e48c-ccfc-46e1-aefd-6f641169e4e7-kube-api-access-bd628\") pod \"5025e48c-ccfc-46e1-aefd-6f641169e4e7\" (UID: \"5025e48c-ccfc-46e1-aefd-6f641169e4e7\") " Oct 2 19:33:21.567096 kubelet[1523]: I1002 19:33:21.566898 1523 reconciler.go:211] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5025e48c-ccfc-46e1-aefd-6f641169e4e7-cilium-config-path\") pod \"5025e48c-ccfc-46e1-aefd-6f641169e4e7\" (UID: \"5025e48c-ccfc-46e1-aefd-6f641169e4e7\") " Oct 2 19:33:21.567096 kubelet[1523]: I1002 19:33:21.566936 1523 reconciler.go:399] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-host-proc-sys-net\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567096 kubelet[1523]: I1002 19:33:21.566954 1523 reconciler.go:399] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-etc-cni-netd\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567096 kubelet[1523]: I1002 19:33:21.566969 1523 reconciler.go:399] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/8a6f1f0b-4874-4671-832f-7b2fb2379d26-hubble-tls\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567096 kubelet[1523]: I1002 19:33:21.566985 1523 reconciler.go:399] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-host-proc-sys-kernel\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567096 kubelet[1523]: I1002 19:33:21.566999 1523 reconciler.go:399] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-lib-modules\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567096 kubelet[1523]: I1002 19:33:21.567015 1523 reconciler.go:399] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-hostproc\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567096 kubelet[1523]: I1002 19:33:21.567031 1523 reconciler.go:399] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-xtables-lock\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567556 kubelet[1523]: I1002 19:33:21.567046 1523 reconciler.go:399] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-bpf-maps\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567556 kubelet[1523]: I1002 19:33:21.567065 1523 reconciler.go:399] "Volume detached for volume \"kube-api-access-ttd5d\" (UniqueName: \"kubernetes.io/projected/8a6f1f0b-4874-4671-832f-7b2fb2379d26-kube-api-access-ttd5d\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567556 kubelet[1523]: I1002 19:33:21.567080 1523 reconciler.go:399] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-run\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567556 kubelet[1523]: I1002 19:33:21.567094 1523 reconciler.go:399] "Volume detached for volume \"cilium-cgroup\" (UniqueName: 
\"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-cgroup\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567556 kubelet[1523]: I1002 19:33:21.567113 1523 reconciler.go:399] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cni-path\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567556 kubelet[1523]: I1002 19:33:21.567133 1523 reconciler.go:399] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-ipsec-secrets\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567556 kubelet[1523]: I1002 19:33:21.567154 1523 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/8a6f1f0b-4874-4671-832f-7b2fb2379d26-cilium-config-path\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.567556 kubelet[1523]: I1002 19:33:21.567171 1523 reconciler.go:399] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/8a6f1f0b-4874-4671-832f-7b2fb2379d26-clustermesh-secrets\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.568037 kubelet[1523]: W1002 19:33:21.567412 1523 empty_dir.go:523] Warning: Failed to clear quota on /var/lib/kubelet/pods/5025e48c-ccfc-46e1-aefd-6f641169e4e7/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Oct 2 19:33:21.570397 kubelet[1523]: I1002 19:33:21.570354 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5025e48c-ccfc-46e1-aefd-6f641169e4e7-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5025e48c-ccfc-46e1-aefd-6f641169e4e7" (UID: "5025e48c-ccfc-46e1-aefd-6f641169e4e7"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Oct 2 19:33:21.572711 kubelet[1523]: I1002 19:33:21.572655 1523 operation_generator.go:890] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5025e48c-ccfc-46e1-aefd-6f641169e4e7-kube-api-access-bd628" (OuterVolumeSpecName: "kube-api-access-bd628") pod "5025e48c-ccfc-46e1-aefd-6f641169e4e7" (UID: "5025e48c-ccfc-46e1-aefd-6f641169e4e7"). InnerVolumeSpecName "kube-api-access-bd628". PluginName "kubernetes.io/projected", VolumeGidValue "" Oct 2 19:33:21.668128 kubelet[1523]: I1002 19:33:21.667968 1523 reconciler.go:399] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5025e48c-ccfc-46e1-aefd-6f641169e4e7-cilium-config-path\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.668128 kubelet[1523]: I1002 19:33:21.668017 1523 reconciler.go:399] "Volume detached for volume \"kube-api-access-bd628\" (UniqueName: \"kubernetes.io/projected/5025e48c-ccfc-46e1-aefd-6f641169e4e7-kube-api-access-bd628\") on node \"10.128.0.55\" DevicePath \"\"" Oct 2 19:33:21.745690 kubelet[1523]: I1002 19:33:21.745499 1523 scope.go:115] "RemoveContainer" containerID="335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902" Oct 2 19:33:21.747293 systemd[1]: Removed slice kubepods-burstable-pod8a6f1f0b_4874_4671_832f_7b2fb2379d26.slice. 
Oct 2 19:33:21.749776 env[1136]: time="2023-10-02T19:33:21.749733521Z" level=info msg="RemoveContainer for \"335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902\"" Oct 2 19:33:21.755231 env[1136]: time="2023-10-02T19:33:21.755166800Z" level=info msg="RemoveContainer for \"335498080e589c02861e3d63b130a9e8c21d6ae867cdafffb615a9de9bbe3902\" returns successfully" Oct 2 19:33:21.755994 kubelet[1523]: I1002 19:33:21.755961 1523 scope.go:115] "RemoveContainer" containerID="37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34" Oct 2 19:33:21.758674 systemd[1]: Removed slice kubepods-besteffort-pod5025e48c_ccfc_46e1_aefd_6f641169e4e7.slice. Oct 2 19:33:21.760233 env[1136]: time="2023-10-02T19:33:21.760195523Z" level=info msg="RemoveContainer for \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\"" Oct 2 19:33:21.764860 env[1136]: time="2023-10-02T19:33:21.764798154Z" level=info msg="RemoveContainer for \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\" returns successfully" Oct 2 19:33:21.765276 kubelet[1523]: I1002 19:33:21.765215 1523 scope.go:115] "RemoveContainer" containerID="37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34" Oct 2 19:33:21.765623 env[1136]: time="2023-10-02T19:33:21.765523850Z" level=error msg="ContainerStatus for \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\": not found" Oct 2 19:33:21.765807 kubelet[1523]: E1002 19:33:21.765784 1523 remote_runtime.go:625] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\": not found" containerID="37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34" Oct 2 19:33:21.765807 kubelet[1523]: I1002 19:33:21.765875 1523 pod_container_deletor.go:52] "DeleteContainer returned error" containerID={Type:containerd ID:37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34} err="failed to get container status \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\": rpc error: code = NotFound desc = an error occurred when try to find container \"37a20ac1ad91da37d7fe3e2d660aa0a590463508eb13105eb74b9e07f9111a34\": not found" Oct 2 19:33:22.083689 kubelet[1523]: E1002 19:33:22.083595 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:22.214665 systemd[1]: var-lib-kubelet-pods-8a6f1f0b\x2d4874\x2d4671\x2d832f\x2d7b2fb2379d26-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Oct 2 19:33:22.214860 systemd[1]: var-lib-kubelet-pods-8a6f1f0b\x2d4874\x2d4671\x2d832f\x2d7b2fb2379d26-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Oct 2 19:33:22.214970 systemd[1]: var-lib-kubelet-pods-8a6f1f0b\x2d4874\x2d4671\x2d832f\x2d7b2fb2379d26-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Oct 2 19:33:22.215065 systemd[1]: var-lib-kubelet-pods-5025e48c\x2dccfc\x2d46e1\x2daefd\x2d6f641169e4e7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbd628.mount: Deactivated successfully. 
Oct 2 19:33:22.215160 systemd[1]: var-lib-kubelet-pods-8a6f1f0b\x2d4874\x2d4671\x2d832f\x2d7b2fb2379d26-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dttd5d.mount: Deactivated successfully. Oct 2 19:33:23.084742 kubelet[1523]: E1002 19:33:23.084666 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:23.147614 kubelet[1523]: I1002 19:33:23.147558 1523 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=5025e48c-ccfc-46e1-aefd-6f641169e4e7 path="/var/lib/kubelet/pods/5025e48c-ccfc-46e1-aefd-6f641169e4e7/volumes" Oct 2 19:33:23.148184 kubelet[1523]: I1002 19:33:23.148141 1523 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=8a6f1f0b-4874-4671-832f-7b2fb2379d26 path="/var/lib/kubelet/pods/8a6f1f0b-4874-4671-832f-7b2fb2379d26/volumes" Oct 2 19:33:24.085651 kubelet[1523]: E1002 19:33:24.085574 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:24.849832 kubelet[1523]: E1002 19:33:24.849731 1523 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Oct 2 19:33:25.045335 kubelet[1523]: E1002 19:33:25.045292 1523 kubelet.go:2373] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Oct 2 19:33:25.085930 kubelet[1523]: E1002 19:33:25.085856 1523 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
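
Every StartContainer failure in this excerpt bottoms out in the same runc message, "write /proc/self/attr/keycreate: invalid argument": before launching the container process the runtime tries to write the requested SELinux label into that procfs attribute, and the kernel on this host rejects the write. The snippet below is a minimal reproduction sketch of just that write, not the runtime's actual code path; the full context string is an assumption (the spec above only fixes Type spc_t and Level s0), and the outcome depends entirely on the host's kernel and SELinux configuration.

package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumed full context: the log's spec only pins Type=spc_t and Level=s0;
	// user and role here are common defaults, not values from the log.
	label := "system_u:system_r:spc_t:s0"

	// Attempt the same write runc reports as failing above. Depending on the
	// host (SELinux enabled or not, policy contents), this may fail with
	// EINVAL as in the log, fail differently, or succeed.
	f, err := os.OpenFile("/proc/self/attr/keycreate", os.O_WRONLY, 0)
	if err != nil {
		fmt.Println("open /proc/self/attr/keycreate:", err)
		return
	}
	defer f.Close()

	_, err = f.Write([]byte(label))
	fmt.Println("write /proc/self/attr/keycreate:", err)
}
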