Jul 2 07:55:35.138844 kernel: Linux version 5.15.161-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Mon Jul 1 23:45:21 -00 2024
Jul 2 07:55:35.139033 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:55:35.139053 kernel: BIOS-provided physical RAM map:
Jul 2 07:55:35.139068 kernel: BIOS-e820: [mem 0x0000000000000000-0x0000000000000fff] reserved
Jul 2 07:55:35.139088 kernel: BIOS-e820: [mem 0x0000000000001000-0x0000000000054fff] usable
Jul 2 07:55:35.139102 kernel: BIOS-e820: [mem 0x0000000000055000-0x000000000005ffff] reserved
Jul 2 07:55:35.139123 kernel: BIOS-e820: [mem 0x0000000000060000-0x0000000000097fff] usable
Jul 2 07:55:35.139246 kernel: BIOS-e820: [mem 0x0000000000098000-0x000000000009ffff] reserved
Jul 2 07:55:35.139260 kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bf8ecfff] usable
Jul 2 07:55:35.139274 kernel: BIOS-e820: [mem 0x00000000bf8ed000-0x00000000bfb6cfff] reserved
Jul 2 07:55:35.139289 kernel: BIOS-e820: [mem 0x00000000bfb6d000-0x00000000bfb7efff] ACPI data
Jul 2 07:55:35.139303 kernel: BIOS-e820: [mem 0x00000000bfb7f000-0x00000000bfbfefff] ACPI NVS
Jul 2 07:55:35.139316 kernel: BIOS-e820: [mem 0x00000000bfbff000-0x00000000bffdffff] usable
Jul 2 07:55:35.139330 kernel: BIOS-e820: [mem 0x00000000bffe0000-0x00000000bfffffff] reserved
Jul 2 07:55:35.139352 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000021fffffff] usable
Jul 2 07:55:35.139474 kernel: NX (Execute Disable) protection: active
Jul 2 07:55:35.139490 kernel: efi: EFI v2.70 by EDK II
Jul 2 07:55:35.139506 kernel: efi: TPMFinalLog=0xbfbf7000 ACPI=0xbfb7e000 ACPI 2.0=0xbfb7e014 SMBIOS=0xbf9e8000 RNG=0xbfb73018 TPMEventLog=0xbd2d2018
Jul 2 07:55:35.139522 kernel: random: crng init done
Jul 2 07:55:35.139537 kernel: SMBIOS 2.4 present.
Jul 2 07:55:35.139552 kernel: DMI: Google Google Compute Engine/Google Compute Engine, BIOS Google 04/02/2024
Jul 2 07:55:35.139568 kernel: Hypervisor detected: KVM
Jul 2 07:55:35.139588 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jul 2 07:55:35.139696 kernel: kvm-clock: cpu 0, msr 63192001, primary cpu clock
Jul 2 07:55:35.139712 kernel: kvm-clock: using sched offset of 13177202021 cycles
Jul 2 07:55:35.139729 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jul 2 07:55:35.139744 kernel: tsc: Detected 2299.998 MHz processor
Jul 2 07:55:35.139761 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jul 2 07:55:35.139777 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jul 2 07:55:35.139793 kernel: last_pfn = 0x220000 max_arch_pfn = 0x400000000
Jul 2 07:55:35.139809 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Jul 2 07:55:35.139825 kernel: last_pfn = 0xbffe0 max_arch_pfn = 0x400000000
Jul 2 07:55:35.139844 kernel: Using GB pages for direct mapping
Jul 2 07:55:35.139860 kernel: Secure boot disabled
Jul 2 07:55:35.139876 kernel: ACPI: Early table checksum verification disabled
Jul 2 07:55:35.139910 kernel: ACPI: RSDP 0x00000000BFB7E014 000024 (v02 Google)
Jul 2 07:55:35.139926 kernel: ACPI: XSDT 0x00000000BFB7D0E8 00005C (v01 Google GOOGFACP 00000001 01000013)
Jul 2 07:55:35.139940 kernel: ACPI: FACP 0x00000000BFB78000 0000F4 (v02 Google GOOGFACP 00000001 GOOG 00000001)
Jul 2 07:55:35.139956 kernel: ACPI: DSDT 0x00000000BFB79000 001A64 (v01 Google GOOGDSDT 00000001 GOOG 00000001)
Jul 2 07:55:35.139971 kernel: ACPI: FACS 0x00000000BFBF2000 000040
Jul 2 07:55:35.139997 kernel: ACPI: SSDT 0x00000000BFB7C000 000316 (v02 GOOGLE Tpm2Tabl 00001000 INTL 20211217)
Jul 2 07:55:35.140014 kernel: ACPI: TPM2 0x00000000BFB7B000 000034 (v04 GOOGLE 00000001 GOOG 00000001)
Jul 2 07:55:35.140030 kernel: ACPI: SRAT 0x00000000BFB77000 0000C8 (v03 Google GOOGSRAT 00000001 GOOG 00000001)
Jul 2 07:55:35.140047 kernel: ACPI: APIC 0x00000000BFB76000 000076 (v05 Google GOOGAPIC 00000001 GOOG 00000001)
Jul 2 07:55:35.140063 kernel: ACPI: SSDT 0x00000000BFB75000 000980 (v01 Google GOOGSSDT 00000001 GOOG 00000001)
Jul 2 07:55:35.140086 kernel: ACPI: WAET 0x00000000BFB74000 000028 (v01 Google GOOGWAET 00000001 GOOG 00000001)
Jul 2 07:55:35.140106 kernel: ACPI: Reserving FACP table memory at [mem 0xbfb78000-0xbfb780f3]
Jul 2 07:55:35.140123 kernel: ACPI: Reserving DSDT table memory at [mem 0xbfb79000-0xbfb7aa63]
Jul 2 07:55:35.140139 kernel: ACPI: Reserving FACS table memory at [mem 0xbfbf2000-0xbfbf203f]
Jul 2 07:55:35.140155 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb7c000-0xbfb7c315]
Jul 2 07:55:35.140172 kernel: ACPI: Reserving TPM2 table memory at [mem 0xbfb7b000-0xbfb7b033]
Jul 2 07:55:35.140188 kernel: ACPI: Reserving SRAT table memory at [mem 0xbfb77000-0xbfb770c7]
Jul 2 07:55:35.140205 kernel: ACPI: Reserving APIC table memory at [mem 0xbfb76000-0xbfb76075]
Jul 2 07:55:35.140221 kernel: ACPI: Reserving SSDT table memory at [mem 0xbfb75000-0xbfb7597f]
Jul 2 07:55:35.140238 kernel: ACPI: Reserving WAET table memory at [mem 0xbfb74000-0xbfb74027]
Jul 2 07:55:35.140257 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Jul 2 07:55:35.140273 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Jul 2 07:55:35.140290 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x0009ffff]
Jul 2 07:55:35.140306 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00100000-0xbfffffff]
Jul 2 07:55:35.140323 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x21fffffff]
Jul 2 07:55:35.140339 kernel: NUMA: Node 0 [mem 0x00000000-0x0009ffff] + [mem 0x00100000-0xbfffffff] -> [mem 0x00000000-0xbfffffff]
Jul 2 07:55:35.140356 kernel: NUMA: Node 0 [mem 0x00000000-0xbfffffff] + [mem 0x100000000-0x21fffffff] -> [mem 0x00000000-0x21fffffff]
Jul 2 07:55:35.140372 kernel: NODE_DATA(0) allocated [mem 0x21fff8000-0x21fffdfff]
Jul 2 07:55:35.140389 kernel: Zone ranges:
Jul 2 07:55:35.140408 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Jul 2 07:55:35.140425 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Jul 2 07:55:35.140441 kernel: Normal [mem 0x0000000100000000-0x000000021fffffff]
Jul 2 07:55:35.140458 kernel: Movable zone start for each node
Jul 2 07:55:35.140474 kernel: Early memory node ranges
Jul 2 07:55:35.140490 kernel: node 0: [mem 0x0000000000001000-0x0000000000054fff]
Jul 2 07:55:35.140507 kernel: node 0: [mem 0x0000000000060000-0x0000000000097fff]
Jul 2 07:55:35.140523 kernel: node 0: [mem 0x0000000000100000-0x00000000bf8ecfff]
Jul 2 07:55:35.140539 kernel: node 0: [mem 0x00000000bfbff000-0x00000000bffdffff]
Jul 2 07:55:35.140559 kernel: node 0: [mem 0x0000000100000000-0x000000021fffffff]
Jul 2 07:55:35.140575 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000021fffffff]
Jul 2 07:55:35.140592 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jul 2 07:55:35.140608 kernel: On node 0, zone DMA: 11 pages in unavailable ranges
Jul 2 07:55:35.140624 kernel: On node 0, zone DMA: 104 pages in unavailable ranges
Jul 2 07:55:35.140640 kernel: On node 0, zone DMA32: 786 pages in unavailable ranges
Jul 2 07:55:35.140657 kernel: On node 0, zone Normal: 32 pages in unavailable ranges
Jul 2 07:55:35.140673 kernel: ACPI: PM-Timer IO Port: 0xb008
Jul 2 07:55:35.140689 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jul 2 07:55:35.140709 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jul 2 07:55:35.140725 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jul 2 07:55:35.140742 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jul 2 07:55:35.140758 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jul 2 07:55:35.140775 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jul 2 07:55:35.140791 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jul 2 07:55:35.140807 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Jul 2 07:55:35.140823 kernel: [mem 0xc0000000-0xffffffff] available for PCI devices
Jul 2 07:55:35.140839 kernel: Booting paravirtualized kernel on KVM
Jul 2 07:55:35.140859 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jul 2 07:55:35.140875 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Jul 2 07:55:35.140904 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
Jul 2 07:55:35.140920 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
Jul 2 07:55:35.140937 kernel: pcpu-alloc: [0] 0 1
Jul 2 07:55:35.140953 kernel: kvm-guest: PV spinlocks enabled
Jul 2 07:55:35.140969 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Jul 2 07:55:35.140986 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1932280
Jul 2 07:55:35.141002 kernel: Policy zone: Normal
Jul 2 07:55:35.141024 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82
Jul 2 07:55:35.141041 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Jul 2 07:55:35.141057 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jul 2 07:55:35.141079 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jul 2 07:55:35.141095 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jul 2 07:55:35.141112 kernel: Memory: 7516804K/7860584K available (12294K kernel code, 2276K rwdata, 13712K rodata, 47444K init, 4144K bss, 343520K reserved, 0K cma-reserved)
Jul 2 07:55:35.141129 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jul 2 07:55:35.141145 kernel: Kernel/User page tables isolation: enabled
Jul 2 07:55:35.141165 kernel: ftrace: allocating 34514 entries in 135 pages
Jul 2 07:55:35.141181 kernel: ftrace: allocated 135 pages with 4 groups
Jul 2 07:55:35.141198 kernel: rcu: Hierarchical RCU implementation.
Jul 2 07:55:35.141216 kernel: rcu: RCU event tracing is enabled.
Jul 2 07:55:35.141233 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jul 2 07:55:35.141250 kernel: Rude variant of Tasks RCU enabled.
Jul 2 07:55:35.141266 kernel: Tracing variant of Tasks RCU enabled.
Jul 2 07:55:35.141282 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jul 2 07:55:35.141299 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jul 2 07:55:35.141319 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Jul 2 07:55:35.141348 kernel: Console: colour dummy device 80x25
Jul 2 07:55:35.141365 kernel: printk: console [ttyS0] enabled
Jul 2 07:55:35.141386 kernel: ACPI: Core revision 20210730
Jul 2 07:55:35.141403 kernel: APIC: Switch to symmetric I/O mode setup
Jul 2 07:55:35.141420 kernel: x2apic enabled
Jul 2 07:55:35.141437 kernel: Switched APIC routing to physical x2apic.
Jul 2 07:55:35.141454 kernel: ..TIMER: vector=0x30 apic1=0 pin1=0 apic2=-1 pin2=-1
Jul 2 07:55:35.141472 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jul 2 07:55:35.141490 kernel: Calibrating delay loop (skipped) preset value.. 4599.99 BogoMIPS (lpj=2299998)
Jul 2 07:55:35.141510 kernel: Last level iTLB entries: 4KB 1024, 2MB 1024, 4MB 1024
Jul 2 07:55:35.141528 kernel: Last level dTLB entries: 4KB 1024, 2MB 1024, 4MB 1024, 1GB 4
Jul 2 07:55:35.141545 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jul 2 07:55:35.141563 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on vm exit
Jul 2 07:55:35.141580 kernel: Spectre V2 : Spectre BHI mitigation: SW BHB clearing on syscall
Jul 2 07:55:35.141597 kernel: Spectre V2 : Mitigation: IBRS
Jul 2 07:55:35.141618 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Jul 2 07:55:35.141636 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Jul 2 07:55:35.141653 kernel: RETBleed: Mitigation: IBRS
Jul 2 07:55:35.141671 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jul 2 07:55:35.141688 kernel: Spectre V2 : User space: Mitigation: STIBP via seccomp and prctl
Jul 2 07:55:35.141703 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl and seccomp
Jul 2 07:55:35.141720 kernel: MDS: Mitigation: Clear CPU buffers
Jul 2 07:55:35.141738 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Jul 2 07:55:35.141755 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jul 2 07:55:35.141776 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jul 2 07:55:35.141793 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jul 2 07:55:35.141811 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Jul 2 07:55:35.141828 kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'standard' format.
Jul 2 07:55:35.141845 kernel: Freeing SMP alternatives memory: 32K
Jul 2 07:55:35.141863 kernel: pid_max: default: 32768 minimum: 301
Jul 2 07:55:35.147340 kernel: LSM: Security Framework initializing
Jul 2 07:55:35.147389 kernel: SELinux: Initializing.
Jul 2 07:55:35.147409 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 07:55:35.147437 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jul 2 07:55:35.147455 kernel: smpboot: CPU0: Intel(R) Xeon(R) CPU @ 2.30GHz (family: 0x6, model: 0x3f, stepping: 0x0)
Jul 2 07:55:35.147474 kernel: Performance Events: unsupported p6 CPU model 63 no PMU driver, software events only.
Jul 2 07:55:35.147492 kernel: signal: max sigframe size: 1776
Jul 2 07:55:35.147511 kernel: rcu: Hierarchical SRCU implementation.
Jul 2 07:55:35.147529 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Jul 2 07:55:35.147547 kernel: smp: Bringing up secondary CPUs ...
Jul 2 07:55:35.147565 kernel: x86: Booting SMP configuration:
Jul 2 07:55:35.147583 kernel: .... node #0, CPUs: #1
Jul 2 07:55:35.147605 kernel: kvm-clock: cpu 1, msr 63192041, secondary cpu clock
Jul 2 07:55:35.147624 kernel: MDS CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/mds.html for more details.
Jul 2 07:55:35.147644 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Jul 2 07:55:35.147662 kernel: smp: Brought up 1 node, 2 CPUs
Jul 2 07:55:35.147680 kernel: smpboot: Max logical packages: 1
Jul 2 07:55:35.147697 kernel: smpboot: Total of 2 processors activated (9199.99 BogoMIPS)
Jul 2 07:55:35.147715 kernel: devtmpfs: initialized
Jul 2 07:55:35.147733 kernel: x86/mm: Memory block size: 128MB
Jul 2 07:55:35.147751 kernel: ACPI: PM: Registering ACPI NVS region [mem 0xbfb7f000-0xbfbfefff] (524288 bytes)
Jul 2 07:55:35.147772 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jul 2 07:55:35.147790 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jul 2 07:55:35.147808 kernel: pinctrl core: initialized pinctrl subsystem
Jul 2 07:55:35.147825 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jul 2 07:55:35.147843 kernel: audit: initializing netlink subsys (disabled)
Jul 2 07:55:35.147861 kernel: audit: type=2000 audit(1719906933.563:1): state=initialized audit_enabled=0 res=1
Jul 2 07:55:35.147878 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jul 2 07:55:35.147908 kernel: thermal_sys: Registered thermal governor 'user_space'
Jul 2 07:55:35.147926 kernel: cpuidle: using governor menu
Jul 2 07:55:35.147947 kernel: ACPI: bus type PCI registered
Jul 2 07:55:35.147965 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jul 2 07:55:35.147983 kernel: dca service started, version 1.12.1
Jul 2 07:55:35.148001 kernel: PCI: Using configuration type 1 for base access
Jul 2 07:55:35.148019 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jul 2 07:55:35.148037 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Jul 2 07:55:35.148056 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Jul 2 07:55:35.148080 kernel: ACPI: Added _OSI(Module Device)
Jul 2 07:55:35.148098 kernel: ACPI: Added _OSI(Processor Device)
Jul 2 07:55:35.148119 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Jul 2 07:55:35.148137 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jul 2 07:55:35.148155 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Jul 2 07:55:35.148173 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Jul 2 07:55:35.148192 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Jul 2 07:55:35.148209 kernel: ACPI: 3 ACPI AML tables successfully acquired and loaded
Jul 2 07:55:35.148227 kernel: ACPI: Interpreter enabled
Jul 2 07:55:35.148243 kernel: ACPI: PM: (supports S0 S3 S5)
Jul 2 07:55:35.148261 kernel: ACPI: Using IOAPIC for interrupt routing
Jul 2 07:55:35.148283 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jul 2 07:55:35.148301 kernel: ACPI: Enabled 16 GPEs in block 00 to 0F
Jul 2 07:55:35.148319 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jul 2 07:55:35.148568 kernel: acpi PNP0A03:00: _OSC: OS supports [ASPM ClockPM Segments MSI HPX-Type3]
Jul 2 07:55:35.148740 kernel: acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
Jul 2 07:55:35.148764 kernel: PCI host bridge to bus 0000:00
Jul 2 07:55:35.148948 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Jul 2 07:55:35.149118 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Jul 2 07:55:35.149270 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jul 2 07:55:35.149416 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfefff window]
Jul 2 07:55:35.149566 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jul 2 07:55:35.149749 kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000
Jul 2 07:55:35.150202 kernel: pci 0000:00:01.0: [8086:7110] type 00 class 0x060100
Jul 2 07:55:35.150530 kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000
Jul 2 07:55:35.150942 kernel: pci 0000:00:01.3: quirk: [io 0xb000-0xb03f] claimed by PIIX4 ACPI
Jul 2 07:55:35.151248 kernel: pci 0000:00:03.0: [1af4:1004] type 00 class 0x000000
Jul 2 07:55:35.151423 kernel: pci 0000:00:03.0: reg 0x10: [io 0xc040-0xc07f]
Jul 2 07:55:35.151595 kernel: pci 0000:00:03.0: reg 0x14: [mem 0xc0001000-0xc000107f]
Jul 2 07:55:35.151779 kernel: pci 0000:00:04.0: [1af4:1000] type 00 class 0x020000
Jul 2 07:55:35.164978 kernel: pci 0000:00:04.0: reg 0x10: [io 0xc000-0xc03f]
Jul 2 07:55:35.165207 kernel: pci 0000:00:04.0: reg 0x14: [mem 0xc0000000-0xc000007f]
Jul 2 07:55:35.165389 kernel: pci 0000:00:05.0: [1af4:1005] type 00 class 0x00ff00
Jul 2 07:55:35.165561 kernel: pci 0000:00:05.0: reg 0x10: [io 0xc080-0xc09f]
Jul 2 07:55:35.165730 kernel: pci 0000:00:05.0: reg 0x14: [mem 0xc0002000-0xc000203f]
Jul 2 07:55:35.165754 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jul 2 07:55:35.165773 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jul 2 07:55:35.165791 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jul 2 07:55:35.165813 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jul 2 07:55:35.165830 kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jul 2 07:55:35.165848 kernel: iommu: Default domain type: Translated
Jul 2 07:55:35.165865 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jul 2 07:55:35.165896 kernel: vgaarb: loaded
Jul 2 07:55:35.165915 kernel: pps_core: LinuxPPS API ver. 1 registered
Jul 2 07:55:35.165933 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Jul 2 07:55:35.165950 kernel: PTP clock support registered
Jul 2 07:55:35.165968 kernel: Registered efivars operations
Jul 2 07:55:35.165990 kernel: PCI: Using ACPI for IRQ routing
Jul 2 07:55:35.166007 kernel: PCI: pci_cache_line_size set to 64 bytes
Jul 2 07:55:35.166034 kernel: e820: reserve RAM buffer [mem 0x00055000-0x0005ffff]
Jul 2 07:55:35.166052 kernel: e820: reserve RAM buffer [mem 0x00098000-0x0009ffff]
Jul 2 07:55:35.166076 kernel: e820: reserve RAM buffer [mem 0xbf8ed000-0xbfffffff]
Jul 2 07:55:35.166094 kernel: e820: reserve RAM buffer [mem 0xbffe0000-0xbfffffff]
Jul 2 07:55:35.166111 kernel: clocksource: Switched to clocksource kvm-clock
Jul 2 07:55:35.166129 kernel: VFS: Disk quotas dquot_6.6.0
Jul 2 07:55:35.166147 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jul 2 07:55:35.166168 kernel: pnp: PnP ACPI init
Jul 2 07:55:35.166186 kernel: pnp: PnP ACPI: found 7 devices
Jul 2 07:55:35.166205 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jul 2 07:55:35.166222 kernel: NET: Registered PF_INET protocol family
Jul 2 07:55:35.166240 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jul 2 07:55:35.166258 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jul 2 07:55:35.166276 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jul 2 07:55:35.166293 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jul 2 07:55:35.166311 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jul 2 07:55:35.166333 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jul 2 07:55:35.166351 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 2 07:55:35.166369 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jul 2 07:55:35.166387 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jul 2 07:55:35.166405 kernel: NET: Registered PF_XDP protocol family
Jul 2 07:55:35.166567 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Jul 2 07:55:35.166720 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Jul 2 07:55:35.166868 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jul 2 07:55:35.167049 kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfefff window]
Jul 2 07:55:35.167233 kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jul 2 07:55:35.167257 kernel: PCI: CLS 0 bytes, default 64
Jul 2 07:55:35.167275 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jul 2 07:55:35.167293 kernel: software IO TLB: mapped [mem 0x00000000b7ff7000-0x00000000bbff7000] (64MB)
Jul 2 07:55:35.167310 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Jul 2 07:55:35.167329 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x212733415c7, max_idle_ns: 440795236380 ns
Jul 2 07:55:35.167346 kernel: clocksource: Switched to clocksource tsc
Jul 2 07:55:35.167368 kernel: Initialise system trusted keyrings
Jul 2 07:55:35.167385 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Jul 2 07:55:35.167402 kernel: Key type asymmetric registered
Jul 2 07:55:35.167419 kernel: Asymmetric key parser 'x509' registered
Jul 2 07:55:35.167436 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jul 2 07:55:35.167454 kernel: io scheduler mq-deadline registered
Jul 2 07:55:35.167471 kernel: io scheduler kyber registered
Jul 2 07:55:35.167488 kernel: io scheduler bfq registered
Jul 2 07:55:35.167505 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Jul 2 07:55:35.167527 kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jul 2 07:55:35.167689 kernel: virtio-pci 0000:00:03.0: virtio_pci: leaving for legacy driver
Jul 2 07:55:35.167710 kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 10
Jul 2 07:55:35.167863 kernel: virtio-pci 0000:00:04.0: virtio_pci: leaving for legacy driver
Jul 2 07:55:35.167895 kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jul 2 07:55:35.168050 kernel: virtio-pci 0000:00:05.0: virtio_pci: leaving for legacy driver
Jul 2 07:55:35.168076 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jul 2 07:55:35.168093 kernel: 00:03: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jul 2 07:55:35.168110 kernel: 00:04: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Jul 2 07:55:35.168130 kernel: 00:05: ttyS2 at I/O 0x3e8 (irq = 6, base_baud = 115200) is a 16550A
Jul 2 07:55:35.168147 kernel: 00:06: ttyS3 at I/O 0x2e8 (irq = 7, base_baud = 115200) is a 16550A
Jul 2 07:55:35.168314 kernel: tpm_tis MSFT0101:00: 2.0 TPM (device-id 0x9009, rev-id 0)
Jul 2 07:55:35.168336 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jul 2 07:55:35.168353 kernel: i8042: Warning: Keylock active
Jul 2 07:55:35.168369 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jul 2 07:55:35.168386 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jul 2 07:55:35.168537 kernel: rtc_cmos 00:00: RTC can wake from S4
Jul 2 07:55:35.168718 kernel: rtc_cmos 00:00: registered as rtc0
Jul 2 07:55:35.168871 kernel: rtc_cmos 00:00: setting system clock to 2024-07-02T07:55:34 UTC (1719906934)
Jul 2 07:55:35.169735 kernel: rtc_cmos 00:00: alarms up to one day, 114 bytes nvram
Jul 2 07:55:35.169765 kernel: intel_pstate: CPU model not supported
Jul 2 07:55:35.169784 kernel: pstore: Registered efi as persistent store backend
Jul 2 07:55:35.169803 kernel: NET: Registered PF_INET6 protocol family
Jul 2 07:55:35.169821 kernel: Segment Routing with IPv6
Jul 2 07:55:35.169838 kernel: In-situ OAM (IOAM) with IPv6
Jul 2 07:55:35.169864 kernel: NET: Registered PF_PACKET protocol family
Jul 2 07:55:35.169895 kernel: Key type dns_resolver registered
Jul 2 07:55:35.169913 kernel: IPI shorthand broadcast: enabled
Jul 2 07:55:35.169936 kernel: sched_clock: Marking stable (734199859, 138956853)->(910039460, -36882748)
Jul 2 07:55:35.169954 kernel: registered taskstats version 1
Jul 2 07:55:35.169972 kernel: Loading compiled-in X.509 certificates
Jul 2 07:55:35.169991 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Jul 2 07:55:35.170009 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.161-flatcar: a1ce693884775675566f1ed116e36d15950b9a42'
Jul 2 07:55:35.170026 kernel: Key type .fscrypt registered
Jul 2 07:55:35.170047 kernel: Key type fscrypt-provisioning registered
Jul 2 07:55:35.170066 kernel: pstore: Using crash dump compression: deflate
Jul 2 07:55:35.170092 kernel: ima: Allocated hash algorithm: sha1
Jul 2 07:55:35.170110 kernel: ima: No architecture policies found
Jul 2 07:55:35.170128 kernel: clk: Disabling unused clocks
Jul 2 07:55:35.170146 kernel: Freeing unused kernel image (initmem) memory: 47444K
Jul 2 07:55:35.170164 kernel: Write protecting the kernel read-only data: 28672k
Jul 2 07:55:35.170182 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Jul 2 07:55:35.170204 kernel: Freeing unused kernel image (rodata/data gap) memory: 624K
Jul 2 07:55:35.170222 kernel: Run /init as init process
Jul 2 07:55:35.170240 kernel: with arguments:
Jul 2 07:55:35.170258 kernel: /init
Jul 2 07:55:35.170276 kernel: with environment:
Jul 2 07:55:35.170294 kernel: HOME=/
Jul 2 07:55:35.170311 kernel: TERM=linux
Jul 2 07:55:35.170329 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Jul 2 07:55:35.170351 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jul 2 07:55:35.170377 systemd[1]: Detected virtualization kvm.
Jul 2 07:55:35.170396 systemd[1]: Detected architecture x86-64.
Jul 2 07:55:35.170415 systemd[1]: Running in initrd.
Jul 2 07:55:35.170433 systemd[1]: No hostname configured, using default hostname.
Jul 2 07:55:35.170451 systemd[1]: Hostname set to .
Jul 2 07:55:35.170471 systemd[1]: Initializing machine ID from VM UUID.
Jul 2 07:55:35.170489 systemd[1]: Queued start job for default target initrd.target.
Jul 2 07:55:35.170512 systemd[1]: Started systemd-ask-password-console.path.
Jul 2 07:55:35.170530 systemd[1]: Reached target cryptsetup.target.
Jul 2 07:55:35.170549 systemd[1]: Reached target paths.target.
Jul 2 07:55:35.170568 systemd[1]: Reached target slices.target.
Jul 2 07:55:35.170586 systemd[1]: Reached target swap.target.
Jul 2 07:55:35.170604 systemd[1]: Reached target timers.target.
Jul 2 07:55:35.170624 systemd[1]: Listening on iscsid.socket.
Jul 2 07:55:35.170642 systemd[1]: Listening on iscsiuio.socket.
Jul 2 07:55:35.170665 systemd[1]: Listening on systemd-journald-audit.socket.
Jul 2 07:55:35.170684 systemd[1]: Listening on systemd-journald-dev-log.socket.
Jul 2 07:55:35.170703 systemd[1]: Listening on systemd-journald.socket.
Jul 2 07:55:35.170722 systemd[1]: Listening on systemd-networkd.socket.
Jul 2 07:55:35.170740 systemd[1]: Listening on systemd-udevd-control.socket.
Jul 2 07:55:35.170759 systemd[1]: Listening on systemd-udevd-kernel.socket.
Jul 2 07:55:35.170778 systemd[1]: Reached target sockets.target.
Jul 2 07:55:35.170797 systemd[1]: Starting kmod-static-nodes.service...
Jul 2 07:55:35.170819 systemd[1]: Finished network-cleanup.service.
Jul 2 07:55:35.170838 systemd[1]: Starting systemd-fsck-usr.service...
Jul 2 07:55:35.170857 systemd[1]: Starting systemd-journald.service...
Jul 2 07:55:35.176530 systemd[1]: Starting systemd-modules-load.service...
Jul 2 07:55:35.176567 systemd[1]: Starting systemd-resolved.service...
Jul 2 07:55:35.176586 systemd[1]: Starting systemd-vconsole-setup.service...
Jul 2 07:55:35.176604 systemd[1]: Finished kmod-static-nodes.service.
Jul 2 07:55:35.176626 systemd[1]: Finished systemd-fsck-usr.service.
Jul 2 07:55:35.176644 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Jul 2 07:55:35.176662 systemd[1]: Finished systemd-vconsole-setup.service.
Jul 2 07:55:35.176682 kernel: audit: type=1130 audit(1719906935.140:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:35.176703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Jul 2 07:55:35.176721 kernel: audit: type=1130 audit(1719906935.150:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:35.176739 systemd[1]: Starting dracut-cmdline-ask.service...
Jul 2 07:55:35.176763 systemd-journald[189]: Journal started
Jul 2 07:55:35.176863 systemd-journald[189]: Runtime Journal (/run/log/journal/70f1e54b7edfc1712eba4424830c70b5) is 8.0M, max 148.8M, 140.8M free.
Jul 2 07:55:35.183196 systemd[1]: Started systemd-journald.service.
Jul 2 07:55:35.183265 kernel: audit: type=1130 audit(1719906935.176:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:35.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:35.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:35.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:35.158292 systemd-modules-load[190]: Inserted module 'overlay'
Jul 2 07:55:35.208414 systemd-resolved[191]: Positive Trust Anchors:
Jul 2 07:55:35.208438 systemd-resolved[191]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 07:55:35.229906 kernel: audit: type=1130 audit(1719906935.219:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:35.229946 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jul 2 07:55:35.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:35.208500 systemd-resolved[191]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 07:55:35.249040 kernel: audit: type=1130 audit(1719906935.231:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:35.249100 kernel: Bridge firewalling registered
Jul 2 07:55:35.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:35.215522 systemd-resolved[191]: Defaulting to hostname 'linux'.
Jul 2 07:55:35.218210 systemd[1]: Started systemd-resolved.service.
Jul 2 07:55:35.221458 systemd[1]: Finished dracut-cmdline-ask.service.
Jul 2 07:55:35.233186 systemd[1]: Reached target nss-lookup.target.
Jul 2 07:55:35.240401 systemd-modules-load[190]: Inserted module 'br_netfilter'
Jul 2 07:55:35.241926 systemd[1]: Starting dracut-cmdline.service...
Jul 2 07:55:35.265815 dracut-cmdline[205]: dracut-dracut-053 Jul 2 07:55:35.273006 kernel: SCSI subsystem initialized Jul 2 07:55:35.273043 dracut-cmdline[205]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 flatcar.first_boot=detected flatcar.oem.id=gce verity.usrhash=d29251fe942de56b08103b03939b6e5af4108e76dc6080fe2498c5db43f16e82 Jul 2 07:55:35.291388 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:55:35.291463 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:55:35.291487 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Jul 2 07:55:35.297536 systemd-modules-load[190]: Inserted module 'dm_multipath' Jul 2 07:55:35.298832 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:55:35.312032 kernel: audit: type=1130 audit(1719906935.301:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:35.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:35.304251 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:55:35.318770 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:55:35.329053 kernel: audit: type=1130 audit(1719906935.320:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:35.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:35.371921 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:55:35.392930 kernel: iscsi: registered transport (tcp) Jul 2 07:55:35.419926 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:55:35.420014 kernel: QLogic iSCSI HBA Driver Jul 2 07:55:35.467251 systemd[1]: Finished dracut-cmdline.service. Jul 2 07:55:35.476071 kernel: audit: type=1130 audit(1719906935.465:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:35.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:35.468893 systemd[1]: Starting dracut-pre-udev.service... Jul 2 07:55:35.530933 kernel: raid6: avx2x4 gen() 17919 MB/s Jul 2 07:55:35.544914 kernel: raid6: avx2x4 xor() 7888 MB/s Jul 2 07:55:35.561911 kernel: raid6: avx2x2 gen() 17922 MB/s Jul 2 07:55:35.578925 kernel: raid6: avx2x2 xor() 18388 MB/s Jul 2 07:55:35.595922 kernel: raid6: avx2x1 gen() 13882 MB/s Jul 2 07:55:35.612920 kernel: raid6: avx2x1 xor() 15948 MB/s Jul 2 07:55:35.629951 kernel: raid6: sse2x4 gen() 10810 MB/s Jul 2 07:55:35.646928 kernel: raid6: sse2x4 xor() 6588 MB/s Jul 2 07:55:35.663939 kernel: raid6: sse2x2 gen() 11627 MB/s Jul 2 07:55:35.680923 kernel: raid6: sse2x2 xor() 7359 MB/s Jul 2 07:55:35.697923 kernel: raid6: sse2x1 gen() 10394 MB/s Jul 2 07:55:35.717418 kernel: raid6: sse2x1 xor() 5145 MB/s Jul 2 07:55:35.717490 kernel: raid6: using algorithm avx2x2 gen() 17922 MB/s Jul 2 07:55:35.717514 kernel: raid6: .... 
xor() 18388 MB/s, rmw enabled Jul 2 07:55:35.717535 kernel: raid6: using avx2x2 recovery algorithm Jul 2 07:55:35.733930 kernel: xor: automatically using best checksumming function avx Jul 2 07:55:35.840935 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:55:35.852315 systemd[1]: Finished dracut-pre-udev.service. Jul 2 07:55:35.860050 kernel: audit: type=1130 audit(1719906935.850:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:35.850000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:35.854000 audit: BPF prog-id=7 op=LOAD Jul 2 07:55:35.854000 audit: BPF prog-id=8 op=LOAD Jul 2 07:55:35.856876 systemd[1]: Starting systemd-udevd.service... Jul 2 07:55:35.873212 systemd-udevd[387]: Using default interface naming scheme 'v252'. Jul 2 07:55:35.880393 systemd[1]: Started systemd-udevd.service. Jul 2 07:55:35.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:35.882953 systemd[1]: Starting dracut-pre-trigger.service... Jul 2 07:55:35.904331 dracut-pre-trigger[392]: rd.md=0: removing MD RAID activation Jul 2 07:55:35.945637 systemd[1]: Finished dracut-pre-trigger.service. Jul 2 07:55:35.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:35.950130 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:55:36.016699 systemd[1]: Finished systemd-udev-trigger.service. 
Jul 2 07:55:36.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:36.101937 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:55:36.212914 kernel: AVX2 version of gcm_enc/dec engaged. Jul 2 07:55:36.235923 kernel: AES CTR mode by8 optimization enabled Jul 2 07:55:36.236028 kernel: scsi host0: Virtio SCSI HBA Jul 2 07:55:36.260929 kernel: scsi 0:0:1:0: Direct-Access Google PersistentDisk 1 PQ: 0 ANSI: 6 Jul 2 07:55:36.323634 kernel: sd 0:0:1:0: [sda] 25165824 512-byte logical blocks: (12.9 GB/12.0 GiB) Jul 2 07:55:36.323994 kernel: sd 0:0:1:0: [sda] 4096-byte physical blocks Jul 2 07:55:36.324201 kernel: sd 0:0:1:0: [sda] Write Protect is off Jul 2 07:55:36.328915 kernel: sd 0:0:1:0: [sda] Mode Sense: 1f 00 00 08 Jul 2 07:55:36.329194 kernel: sd 0:0:1:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jul 2 07:55:36.355388 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jul 2 07:55:36.355475 kernel: GPT:17805311 != 25165823 Jul 2 07:55:36.355498 kernel: GPT:Alternate GPT header not at the end of the disk. Jul 2 07:55:36.361646 kernel: GPT:17805311 != 25165823 Jul 2 07:55:36.365318 kernel: GPT: Use GNU Parted to correct GPT errors. Jul 2 07:55:36.370640 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:55:36.383294 kernel: sd 0:0:1:0: [sda] Attached SCSI disk Jul 2 07:55:36.426856 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Jul 2 07:55:36.460062 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (436) Jul 2 07:55:36.464533 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Jul 2 07:55:36.473177 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. 
Jul 2 07:55:36.481277 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Jul 2 07:55:36.517329 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Jul 2 07:55:36.531191 systemd[1]: Starting disk-uuid.service... Jul 2 07:55:36.556274 disk-uuid[516]: Primary Header is updated. Jul 2 07:55:36.556274 disk-uuid[516]: Secondary Entries is updated. Jul 2 07:55:36.556274 disk-uuid[516]: Secondary Header is updated. Jul 2 07:55:36.584045 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:55:36.590945 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:55:36.620935 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:55:37.608919 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:55:37.609174 disk-uuid[517]: The operation has completed successfully. Jul 2 07:55:37.681107 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:55:37.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:37.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:37.681246 systemd[1]: Finished disk-uuid.service. Jul 2 07:55:37.692628 systemd[1]: Starting verity-setup.service... Jul 2 07:55:37.722914 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 07:55:37.800646 systemd[1]: Found device dev-mapper-usr.device. Jul 2 07:55:37.809293 systemd[1]: Finished verity-setup.service. Jul 2 07:55:37.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:37.826293 systemd[1]: Mounting sysusr-usr.mount... 
Jul 2 07:55:37.925192 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Jul 2 07:55:37.925795 systemd[1]: Mounted sysusr-usr.mount. Jul 2 07:55:37.926205 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Jul 2 07:55:37.971052 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:55:37.971103 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:55:37.971126 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:55:37.927145 systemd[1]: Starting ignition-setup.service... Jul 2 07:55:37.991050 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:55:37.985431 systemd[1]: Starting parse-ip-for-networkd.service... Jul 2 07:55:38.016622 systemd[1]: Finished ignition-setup.service. Jul 2 07:55:38.015000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:38.018407 systemd[1]: Starting ignition-fetch-offline.service... Jul 2 07:55:38.097703 systemd[1]: Finished parse-ip-for-networkd.service. Jul 2 07:55:38.104000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:38.105000 audit: BPF prog-id=9 op=LOAD Jul 2 07:55:38.108070 systemd[1]: Starting systemd-networkd.service... Jul 2 07:55:38.143083 systemd-networkd[691]: lo: Link UP Jul 2 07:55:38.143097 systemd-networkd[691]: lo: Gained carrier Jul 2 07:55:38.143904 systemd-networkd[691]: Enumeration completed Jul 2 07:55:38.164000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:38.144066 systemd[1]: Started systemd-networkd.service. Jul 2 07:55:38.144460 systemd-networkd[691]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:55:38.146663 systemd-networkd[691]: eth0: Link UP Jul 2 07:55:38.146671 systemd-networkd[691]: eth0: Gained carrier Jul 2 07:55:38.158169 systemd-networkd[691]: eth0: DHCPv4 address 10.128.0.47/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 2 07:55:38.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:38.166294 systemd[1]: Reached target network.target. Jul 2 07:55:38.254167 iscsid[700]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:55:38.254167 iscsid[700]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 07:55:38.254167 iscsid[700]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 07:55:38.254167 iscsid[700]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:55:38.254167 iscsid[700]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:55:38.254167 iscsid[700]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:55:38.254167 iscsid[700]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:55:38.259000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:38.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:38.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:38.183208 systemd[1]: Starting iscsiuio.service... Jul 2 07:55:38.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:38.308517 ignition[605]: Ignition 2.14.0 Jul 2 07:55:38.213192 systemd[1]: Started iscsiuio.service. Jul 2 07:55:38.308531 ignition[605]: Stage: fetch-offline Jul 2 07:55:38.228430 systemd[1]: Starting iscsid.service... Jul 2 07:55:38.308625 ignition[605]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:55:38.247183 systemd[1]: Started iscsid.service. Jul 2 07:55:38.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:38.308666 ignition[605]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:55:38.262519 systemd[1]: Starting dracut-initqueue.service... Jul 2 07:55:38.331092 ignition[605]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:55:38.281374 systemd[1]: Finished dracut-initqueue.service. Jul 2 07:55:38.331301 ignition[605]: parsed url from cmdline: "" Jul 2 07:55:38.313227 systemd[1]: Reached target remote-fs-pre.target. 
Jul 2 07:55:38.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:38.331309 ignition[605]: no config URL provided Jul 2 07:55:38.342039 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:55:38.331318 ignition[605]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:55:38.358080 systemd[1]: Reached target remote-fs.target. Jul 2 07:55:38.331328 ignition[605]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:55:38.580000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:38.359280 systemd[1]: Starting dracut-pre-mount.service... Jul 2 07:55:38.331338 ignition[605]: failed to fetch config: resource requires networking Jul 2 07:55:38.375466 systemd[1]: Finished ignition-fetch-offline.service. Jul 2 07:55:38.331856 ignition[605]: Ignition finished successfully Jul 2 07:55:38.392549 systemd[1]: Finished dracut-pre-mount.service. Jul 2 07:55:38.431034 ignition[715]: Ignition 2.14.0 Jul 2 07:55:38.419408 systemd[1]: Starting ignition-fetch.service... 
Jul 2 07:55:38.431045 ignition[715]: Stage: fetch Jul 2 07:55:38.462047 unknown[715]: fetched base config from "system" Jul 2 07:55:38.431181 ignition[715]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:55:38.462060 unknown[715]: fetched base config from "system" Jul 2 07:55:38.431215 ignition[715]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:55:38.462070 unknown[715]: fetched user config from "gcp" Jul 2 07:55:38.439005 ignition[715]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:55:38.464793 systemd[1]: Finished ignition-fetch.service. Jul 2 07:55:38.439218 ignition[715]: parsed url from cmdline: "" Jul 2 07:55:38.478481 systemd[1]: Starting ignition-kargs.service... Jul 2 07:55:38.439224 ignition[715]: no config URL provided Jul 2 07:55:38.510493 systemd[1]: Finished ignition-kargs.service. Jul 2 07:55:38.439231 ignition[715]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:55:38.526491 systemd[1]: Starting ignition-disks.service... Jul 2 07:55:38.439242 ignition[715]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:55:38.567423 systemd[1]: Finished ignition-disks.service. Jul 2 07:55:38.439280 ignition[715]: GET http://169.254.169.254/computeMetadata/v1/instance/attributes/user-data: attempt #1 Jul 2 07:55:38.582311 systemd[1]: Reached target initrd-root-device.target. Jul 2 07:55:38.450090 ignition[715]: GET result: OK Jul 2 07:55:38.599087 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:55:38.450232 ignition[715]: parsing config with SHA512: 8e385305241356440f43f5882bc54851241e9234775933870265dc6e8ff0d976252ab2ec577d3b7769ece356bb499ee6c1aab3a30d94bd6c6a847e03965754b6 Jul 2 07:55:38.613068 systemd[1]: Reached target local-fs.target. Jul 2 07:55:38.462871 ignition[715]: fetch: fetch complete Jul 2 07:55:38.628099 systemd[1]: Reached target sysinit.target. 
Jul 2 07:55:38.462909 ignition[715]: fetch: fetch passed Jul 2 07:55:38.642086 systemd[1]: Reached target basic.target. Jul 2 07:55:38.462974 ignition[715]: Ignition finished successfully Jul 2 07:55:38.644006 systemd[1]: Starting systemd-fsck-root.service... Jul 2 07:55:38.493238 ignition[721]: Ignition 2.14.0 Jul 2 07:55:38.493249 ignition[721]: Stage: kargs Jul 2 07:55:38.493448 ignition[721]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:55:38.493479 ignition[721]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:55:38.500592 ignition[721]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:55:38.501956 ignition[721]: kargs: kargs passed Jul 2 07:55:38.502010 ignition[721]: Ignition finished successfully Jul 2 07:55:38.538359 ignition[727]: Ignition 2.14.0 Jul 2 07:55:38.538368 ignition[727]: Stage: disks Jul 2 07:55:38.538521 ignition[727]: reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:55:38.538553 ignition[727]: parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:55:38.548310 ignition[727]: no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:55:38.550074 ignition[727]: disks: disks passed Jul 2 07:55:38.550153 ignition[727]: Ignition finished successfully Jul 2 07:55:38.689214 systemd-fsck[735]: ROOT: clean, 614/1628000 files, 124057/1617920 blocks Jul 2 07:55:38.897837 systemd[1]: Finished systemd-fsck-root.service. Jul 2 07:55:38.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:38.907194 systemd[1]: Mounting sysroot.mount... 
Jul 2 07:55:38.938088 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Jul 2 07:55:38.933902 systemd[1]: Mounted sysroot.mount. Jul 2 07:55:38.945397 systemd[1]: Reached target initrd-root-fs.target. Jul 2 07:55:38.965538 systemd[1]: Mounting sysroot-usr.mount... Jul 2 07:55:38.982574 systemd[1]: flatcar-metadata-hostname.service was skipped because no trigger condition checks were met. Jul 2 07:55:38.982655 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:55:38.982703 systemd[1]: Reached target ignition-diskful.target. Jul 2 07:55:39.003445 systemd[1]: Mounted sysroot-usr.mount. Jul 2 07:55:39.030030 systemd[1]: Mounting sysroot-usr-share-oem.mount... Jul 2 07:55:39.081056 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (741) Jul 2 07:55:39.081098 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:55:39.081120 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:55:39.081142 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:55:39.081163 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:55:39.073649 systemd[1]: Starting initrd-setup-root.service... Jul 2 07:55:39.096239 initrd-setup-root[762]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:55:39.106034 initrd-setup-root[772]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:55:39.116049 initrd-setup-root[780]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:55:39.126050 initrd-setup-root[788]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:55:39.135688 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:55:39.178791 systemd[1]: Finished initrd-setup-root.service. 
Jul 2 07:55:39.221096 kernel: kauditd_printk_skb: 21 callbacks suppressed Jul 2 07:55:39.221142 kernel: audit: type=1130 audit(1719906939.185:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:39.185000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:39.188562 systemd[1]: Starting ignition-mount.service... Jul 2 07:55:39.229187 systemd[1]: Starting sysroot-boot.service... Jul 2 07:55:39.243333 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Jul 2 07:55:39.243485 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Jul 2 07:55:39.270130 ignition[807]: INFO : Ignition 2.14.0 Jul 2 07:55:39.270130 ignition[807]: INFO : Stage: mount Jul 2 07:55:39.270130 ignition[807]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:55:39.270130 ignition[807]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:55:39.405204 kernel: audit: type=1130 audit(1719906939.284:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:39.405246 kernel: audit: type=1130 audit(1719906939.320:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:39.405271 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (817) Jul 2 07:55:39.405289 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:55:39.405306 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:55:39.405321 kernel: BTRFS info (device sda6): has skinny extents Jul 2 07:55:39.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:39.320000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:39.274987 systemd[1]: Finished sysroot-boot.service. Jul 2 07:55:39.425185 kernel: BTRFS info (device sda6): enabling ssd optimizations Jul 2 07:55:39.425279 ignition[807]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:55:39.425279 ignition[807]: INFO : mount: mount passed Jul 2 07:55:39.425279 ignition[807]: INFO : Ignition finished successfully Jul 2 07:55:39.307705 systemd[1]: Finished ignition-mount.service. Jul 2 07:55:39.323415 systemd[1]: Starting ignition-files.service... 
Jul 2 07:55:39.476068 ignition[836]: INFO : Ignition 2.14.0 Jul 2 07:55:39.476068 ignition[836]: INFO : Stage: files Jul 2 07:55:39.476068 ignition[836]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:55:39.476068 ignition[836]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:55:39.476068 ignition[836]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:55:39.476068 ignition[836]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:55:39.476068 ignition[836]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:55:39.476068 ignition[836]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:55:39.476068 ignition[836]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:55:39.476068 ignition[836]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:55:39.476068 ignition[836]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:55:39.476068 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/hosts" Jul 2 07:55:39.476068 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:55:39.647076 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (839) Jul 2 07:55:39.357566 systemd[1]: Mounting sysroot-usr-share-oem.mount... 
Jul 2 07:55:39.656074 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(4): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1772636103" Jul 2 07:55:39.656074 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(3): op(4): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1772636103": device or resource busy Jul 2 07:55:39.656074 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(3): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1772636103", trying btrfs: device or resource busy Jul 2 07:55:39.656074 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1772636103" Jul 2 07:55:39.656074 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(5): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1772636103" Jul 2 07:55:39.656074 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [started] unmounting "/mnt/oem1772636103" Jul 2 07:55:39.656074 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): op(6): [finished] unmounting "/mnt/oem1772636103" Jul 2 07:55:39.656074 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/hosts" Jul 2 07:55:39.656074 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:55:39.656074 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 07:55:39.656074 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Jul 2 07:55:39.363189 systemd-networkd[691]: eth0: Gained IPv6LL Jul 2 07:55:39.842015 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:55:39.842015 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 07:55:39.842015 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Jul 2 07:55:39.420988 systemd[1]: Mounted sysroot-usr-share-oem.mount. Jul 2 07:55:39.473392 unknown[836]: wrote ssh authorized keys file for user: core Jul 2 07:55:40.034449 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Jul 2 07:55:40.183901 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Jul 2 07:55:40.200053 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Jul 2 07:55:40.200053 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:55:40.200053 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4163324050" Jul 2 07:55:40.200053 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(9): op(a): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4163324050": device or resource busy Jul 2 07:55:40.200053 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(9): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4163324050", trying btrfs: device or resource busy Jul 2 07:55:40.200053 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4163324050" Jul 2 07:55:40.200053 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(b): 
[finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4163324050" Jul 2 07:55:40.200053 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [started] unmounting "/mnt/oem4163324050" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): op(c): [finished] unmounting "/mnt/oem4163324050" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/profile.d/google-cloud-sdk.sh" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(11): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(11): [finished] 
writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 07:55:40.339060 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Jul 2 07:55:40.202910 systemd[1]: mnt-oem4163324050.mount: Deactivated successfully. Jul 2 07:55:40.593138 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:55:40.593138 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem294109620" Jul 2 07:55:40.593138 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem294109620": device or resource busy Jul 2 07:55:40.593138 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem294109620", trying btrfs: device or resource busy Jul 2 07:55:40.593138 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem294109620" Jul 2 07:55:40.593138 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem294109620" Jul 2 07:55:40.593138 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem294109620" Jul 2 
07:55:40.593138 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem294109620" Jul 2 07:55:40.593138 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/oem-gce-enable-oslogin.service" Jul 2 07:55:40.593138 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(17): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 07:55:40.593138 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 Jul 2 07:55:40.593138 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(17): GET result: OK Jul 2 07:55:40.827076 kernel: audit: type=1130 audit(1719906940.788:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:40.788000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:40.230351 systemd[1]: mnt-oem294109620.mount: Deactivated successfully. 
Jul 2 07:55:40.844093 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(17): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" Jul 2 07:55:40.844093 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): [started] writing file "/sysroot/etc/systemd/system/oem-gce.service" Jul 2 07:55:40.844093 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): oem config not found in "/usr/share/oem", looking on oem partition Jul 2 07:55:40.844093 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(19): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2932477701" Jul 2 07:55:40.844093 ignition[836]: CRITICAL : files: createFilesystemsFiles: createFiles: op(18): op(19): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2932477701": device or resource busy Jul 2 07:55:40.844093 ignition[836]: ERROR : files: createFilesystemsFiles: createFiles: op(18): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2932477701", trying btrfs: device or resource busy Jul 2 07:55:40.844093 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2932477701" Jul 2 07:55:40.844093 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1a): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2932477701" Jul 2 07:55:40.844093 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [started] unmounting "/mnt/oem2932477701" Jul 2 07:55:40.844093 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): op(1b): [finished] unmounting "/mnt/oem2932477701" Jul 2 07:55:40.844093 ignition[836]: INFO : files: createFilesystemsFiles: createFiles: op(18): [finished] writing file "/sysroot/etc/systemd/system/oem-gce.service" Jul 2 07:55:40.844093 ignition[836]: INFO : files: op(1c): [started] processing unit 
"oem-gce.service" Jul 2 07:55:40.844093 ignition[836]: INFO : files: op(1c): [finished] processing unit "oem-gce.service" Jul 2 07:55:40.844093 ignition[836]: INFO : files: op(1d): [started] processing unit "oem-gce-enable-oslogin.service" Jul 2 07:55:40.844093 ignition[836]: INFO : files: op(1d): [finished] processing unit "oem-gce-enable-oslogin.service" Jul 2 07:55:40.844093 ignition[836]: INFO : files: op(1e): [started] processing unit "coreos-metadata-sshkeys@.service" Jul 2 07:55:40.844093 ignition[836]: INFO : files: op(1e): [finished] processing unit "coreos-metadata-sshkeys@.service" Jul 2 07:55:41.330092 kernel: audit: type=1130 audit(1719906940.877:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.330140 kernel: audit: type=1130 audit(1719906940.923:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.330164 kernel: audit: type=1131 audit(1719906940.923:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.330186 kernel: audit: type=1130 audit(1719906941.031:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.330208 kernel: audit: type=1131 audit(1719906941.031:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:41.330223 kernel: audit: type=1130 audit(1719906941.161:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:40.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:40.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:40.923000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.031000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:40.779990 systemd[1]: Finished ignition-files.service. 
Jul 2 07:55:41.344095 ignition[836]: INFO : files: op(1f): [started] processing unit "prepare-helm.service" Jul 2 07:55:41.344095 ignition[836]: INFO : files: op(1f): op(20): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:55:41.344095 ignition[836]: INFO : files: op(1f): op(20): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:55:41.344095 ignition[836]: INFO : files: op(1f): [finished] processing unit "prepare-helm.service" Jul 2 07:55:41.344095 ignition[836]: INFO : files: op(21): [started] setting preset to enabled for "oem-gce.service" Jul 2 07:55:41.344095 ignition[836]: INFO : files: op(21): [finished] setting preset to enabled for "oem-gce.service" Jul 2 07:55:41.344095 ignition[836]: INFO : files: op(22): [started] setting preset to enabled for "oem-gce-enable-oslogin.service" Jul 2 07:55:41.344095 ignition[836]: INFO : files: op(22): [finished] setting preset to enabled for "oem-gce-enable-oslogin.service" Jul 2 07:55:41.344095 ignition[836]: INFO : files: op(23): [started] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 07:55:41.344095 ignition[836]: INFO : files: op(23): [finished] setting preset to enabled for "coreos-metadata-sshkeys@.service " Jul 2 07:55:41.344095 ignition[836]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service" Jul 2 07:55:41.344095 ignition[836]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 07:55:41.344095 ignition[836]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:55:41.344095 ignition[836]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:55:41.344095 ignition[836]: INFO : files: files passed Jul 2 07:55:41.344095 ignition[836]: INFO : Ignition finished 
successfully Jul 2 07:55:41.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.592000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.618000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.633000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:40.800129 systemd[1]: Starting initrd-setup-root-after-ignition.service... Jul 2 07:55:41.655481 initrd-setup-root-after-ignition[859]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:55:40.836067 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Jul 2 07:55:41.701171 iscsid[700]: iscsid shutting down. Jul 2 07:55:41.707000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:40.837229 systemd[1]: Starting ignition-quench.service... Jul 2 07:55:40.851412 systemd[1]: Finished initrd-setup-root-after-ignition.service. Jul 2 07:55:40.879450 systemd[1]: ignition-quench.service: Deactivated successfully. Jul 2 07:55:40.879583 systemd[1]: Finished ignition-quench.service. 
Jul 2 07:55:41.756000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:40.925399 systemd[1]: Reached target ignition-complete.target. Jul 2 07:55:41.771000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:40.983615 systemd[1]: Starting initrd-parse-etc.service... Jul 2 07:55:41.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.796214 ignition[874]: INFO : Ignition 2.14.0 Jul 2 07:55:41.796214 ignition[874]: INFO : Stage: umount Jul 2 07:55:41.796214 ignition[874]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Jul 2 07:55:41.796214 ignition[874]: DEBUG : parsing config with SHA512: 28536912712fffc63406b6accf8759a9de2528d78fa3e153de6c4a0ac81102f9876238326a650eaef6ce96ba6e26bae8fbbfe85a3f956a15fdad11da447b6af6 Jul 2 07:55:41.796214 ignition[874]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/gcp" Jul 2 07:55:41.796214 ignition[874]: INFO : umount: umount passed Jul 2 07:55:41.796214 ignition[874]: INFO : Ignition finished successfully Jul 2 07:55:41.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:41.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.833000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.029681 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:55:41.897000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.029814 systemd[1]: Finished initrd-parse-etc.service. Jul 2 07:55:41.913000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.033386 systemd[1]: Reached target initrd-fs.target. Jul 2 07:55:41.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.100257 systemd[1]: Reached target initrd.target. Jul 2 07:55:41.122312 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Jul 2 07:55:41.123608 systemd[1]: Starting dracut-pre-pivot.service... Jul 2 07:55:41.144488 systemd[1]: Finished dracut-pre-pivot.service. Jul 2 07:55:41.164823 systemd[1]: Starting initrd-cleanup.service... Jul 2 07:55:41.213138 systemd[1]: Stopped target nss-lookup.target. 
Jul 2 07:55:42.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.233347 systemd[1]: Stopped target remote-cryptsetup.target. Jul 2 07:55:42.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.252389 systemd[1]: Stopped target timers.target. Jul 2 07:55:41.263447 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:55:42.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.263653 systemd[1]: Stopped dracut-pre-pivot.service. Jul 2 07:55:41.281691 systemd[1]: Stopped target initrd.target. Jul 2 07:55:41.317365 systemd[1]: Stopped target basic.target. Jul 2 07:55:41.337333 systemd[1]: Stopped target ignition-complete.target. Jul 2 07:55:41.344424 systemd[1]: Stopped target ignition-diskful.target. Jul 2 07:55:42.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.362398 systemd[1]: Stopped target initrd-root-device.target. Jul 2 07:55:42.150000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:42.150000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:55:41.384393 systemd[1]: Stopped target remote-fs.target. Jul 2 07:55:41.408422 systemd[1]: Stopped target remote-fs-pre.target. Jul 2 07:55:41.425485 systemd[1]: Stopped target sysinit.target. 
Jul 2 07:55:42.196000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.443474 systemd[1]: Stopped target local-fs.target. Jul 2 07:55:42.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.461481 systemd[1]: Stopped target local-fs-pre.target. Jul 2 07:55:42.226000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.481508 systemd[1]: Stopped target swap.target. Jul 2 07:55:41.521290 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:55:42.249000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.521486 systemd[1]: Stopped dracut-pre-mount.service. Jul 2 07:55:41.535538 systemd[1]: Stopped target cryptsetup.target. Jul 2 07:55:41.571285 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:55:42.303000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.571497 systemd[1]: Stopped dracut-initqueue.service. Jul 2 07:55:42.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.594433 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. 
Jul 2 07:55:42.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.594630 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Jul 2 07:55:42.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.620395 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:55:42.376000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.620582 systemd[1]: Stopped ignition-files.service. Jul 2 07:55:42.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:42.393000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:41.636718 systemd[1]: Stopping ignition-mount.service... Jul 2 07:55:41.663491 systemd[1]: Stopping iscsid.service... Jul 2 07:55:41.684156 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:55:41.684461 systemd[1]: Stopped kmod-static-nodes.service. Jul 2 07:55:41.710728 systemd[1]: Stopping sysroot-boot.service... Jul 2 07:55:41.741080 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:55:42.472081 systemd-journald[189]: Received SIGTERM from PID 1 (n/a). Jul 2 07:55:41.741397 systemd[1]: Stopped systemd-udev-trigger.service. 
Jul 2 07:55:41.758408 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:55:41.758595 systemd[1]: Stopped dracut-pre-trigger.service. Jul 2 07:55:41.777785 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:55:41.779012 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:55:41.779137 systemd[1]: Stopped iscsid.service. Jul 2 07:55:41.788875 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:55:41.789015 systemd[1]: Stopped ignition-mount.service. Jul 2 07:55:41.804947 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:55:41.805072 systemd[1]: Stopped sysroot-boot.service. Jul 2 07:55:41.826485 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:55:41.826612 systemd[1]: Finished initrd-cleanup.service. Jul 2 07:55:41.835503 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:55:41.835580 systemd[1]: Stopped ignition-disks.service. Jul 2 07:55:41.878308 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:55:41.878386 systemd[1]: Stopped ignition-kargs.service. Jul 2 07:55:41.899316 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 07:55:41.899390 systemd[1]: Stopped ignition-fetch.service. Jul 2 07:55:41.915227 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:55:41.915320 systemd[1]: Stopped ignition-fetch-offline.service. Jul 2 07:55:41.930288 systemd[1]: Stopped target paths.target. Jul 2 07:55:41.944061 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:55:41.946008 systemd[1]: Stopped systemd-ask-password-console.path. Jul 2 07:55:41.951231 systemd[1]: Stopped target slices.target. Jul 2 07:55:41.969261 systemd[1]: Stopped target sockets.target. Jul 2 07:55:41.991238 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:55:41.991302 systemd[1]: Closed iscsid.socket. 
Jul 2 07:55:41.998333 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:55:41.998408 systemd[1]: Stopped ignition-setup.service. Jul 2 07:55:42.011396 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:55:42.011493 systemd[1]: Stopped initrd-setup-root.service. Jul 2 07:55:42.032376 systemd[1]: Stopping iscsiuio.service... Jul 2 07:55:42.047460 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:55:42.047596 systemd[1]: Stopped iscsiuio.service. Jul 2 07:55:42.061265 systemd[1]: Stopped target network.target. Jul 2 07:55:42.078102 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:55:42.078176 systemd[1]: Closed iscsiuio.socket. Jul 2 07:55:42.092280 systemd[1]: Stopping systemd-networkd.service... Jul 2 07:55:42.095962 systemd-networkd[691]: eth0: DHCPv6 lease lost Jul 2 07:55:42.472000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:55:42.099518 systemd[1]: Stopping systemd-resolved.service... Jul 2 07:55:42.119391 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:55:42.119516 systemd[1]: Stopped systemd-resolved.service. Jul 2 07:55:42.135804 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:55:42.135951 systemd[1]: Stopped systemd-networkd.service. Jul 2 07:55:42.152936 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:55:42.152991 systemd[1]: Closed systemd-networkd.socket. Jul 2 07:55:42.169166 systemd[1]: Stopping network-cleanup.service... Jul 2 07:55:42.182115 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:55:42.182230 systemd[1]: Stopped parse-ip-for-networkd.service. Jul 2 07:55:42.198286 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:55:42.198367 systemd[1]: Stopped systemd-sysctl.service. Jul 2 07:55:42.213276 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:55:42.213341 systemd[1]: Stopped systemd-modules-load.service. 
Jul 2 07:55:42.228416 systemd[1]: Stopping systemd-udevd.service... Jul 2 07:55:42.245573 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:55:42.246246 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:55:42.246402 systemd[1]: Stopped systemd-udevd.service. Jul 2 07:55:42.252643 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:55:42.252728 systemd[1]: Closed systemd-udevd-control.socket. Jul 2 07:55:42.275181 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:55:42.275247 systemd[1]: Closed systemd-udevd-kernel.socket. Jul 2 07:55:42.290142 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:55:42.290275 systemd[1]: Stopped dracut-pre-udev.service. Jul 2 07:55:42.305228 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:55:42.305310 systemd[1]: Stopped dracut-cmdline.service. Jul 2 07:55:42.321201 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:55:42.321282 systemd[1]: Stopped dracut-cmdline-ask.service. Jul 2 07:55:42.337239 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Jul 2 07:55:42.354062 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:55:42.354192 systemd[1]: Stopped systemd-vconsole-setup.service. Jul 2 07:55:42.354857 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:55:42.355002 systemd[1]: Stopped network-cleanup.service. Jul 2 07:55:42.378610 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:55:42.378727 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Jul 2 07:55:42.395501 systemd[1]: Reached target initrd-switch-root.target. Jul 2 07:55:42.411256 systemd[1]: Starting initrd-switch-root.service... Jul 2 07:55:42.433152 systemd[1]: Switching root. 
Jul 2 07:55:42.476231 systemd-journald[189]: Journal stopped Jul 2 07:55:47.170446 kernel: SELinux: Class mctp_socket not defined in policy. Jul 2 07:55:47.170561 kernel: SELinux: Class anon_inode not defined in policy. Jul 2 07:55:47.170585 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:55:47.170618 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:55:47.170640 kernel: SELinux: policy capability open_perms=1 Jul 2 07:55:47.170663 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:55:47.170691 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:55:47.170713 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:55:47.170734 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:55:47.170756 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:55:47.170778 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:55:47.170804 systemd[1]: Successfully loaded SELinux policy in 110.877ms. Jul 2 07:55:47.170853 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.072ms. Jul 2 07:55:47.170879 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:55:47.170915 systemd[1]: Detected virtualization kvm. Jul 2 07:55:47.170939 systemd[1]: Detected architecture x86-64. Jul 2 07:55:47.170962 systemd[1]: Detected first boot. Jul 2 07:55:47.170985 systemd[1]: Initializing machine ID from VM UUID. Jul 2 07:55:47.171009 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Jul 2 07:55:47.171039 systemd[1]: Populated /etc with preset unit settings. 
Jul 2 07:55:47.171072 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:55:47.171102 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:55:47.171129 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:55:47.171158 kernel: kauditd_printk_skb: 51 callbacks suppressed Jul 2 07:55:47.171181 kernel: audit: type=1334 audit(1719906946.299:86): prog-id=12 op=LOAD Jul 2 07:55:47.171203 kernel: audit: type=1334 audit(1719906946.299:87): prog-id=3 op=UNLOAD Jul 2 07:55:47.171226 kernel: audit: type=1334 audit(1719906946.304:88): prog-id=13 op=LOAD Jul 2 07:55:47.171252 kernel: audit: type=1334 audit(1719906946.311:89): prog-id=14 op=LOAD Jul 2 07:55:47.171275 kernel: audit: type=1334 audit(1719906946.311:90): prog-id=4 op=UNLOAD Jul 2 07:55:47.171296 kernel: audit: type=1334 audit(1719906946.311:91): prog-id=5 op=UNLOAD Jul 2 07:55:47.171319 kernel: audit: type=1334 audit(1719906946.318:92): prog-id=15 op=LOAD Jul 2 07:55:47.171341 kernel: audit: type=1334 audit(1719906946.318:93): prog-id=12 op=UNLOAD Jul 2 07:55:47.171364 kernel: audit: type=1334 audit(1719906946.353:94): prog-id=16 op=LOAD Jul 2 07:55:47.171386 kernel: audit: type=1334 audit(1719906946.360:95): prog-id=17 op=LOAD Jul 2 07:55:47.171409 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 07:55:47.171433 systemd[1]: Stopped initrd-switch-root.service. Jul 2 07:55:47.171462 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:55:47.171486 systemd[1]: Created slice system-addon\x2dconfig.slice. 
Jul 2 07:55:47.171510 systemd[1]: Created slice system-addon\x2drun.slice. Jul 2 07:55:47.171535 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice. Jul 2 07:55:47.171563 systemd[1]: Created slice system-getty.slice. Jul 2 07:55:47.171587 systemd[1]: Created slice system-modprobe.slice. Jul 2 07:55:47.171611 systemd[1]: Created slice system-serial\x2dgetty.slice. Jul 2 07:55:47.171639 systemd[1]: Created slice system-system\x2dcloudinit.slice. Jul 2 07:55:47.171665 systemd[1]: Created slice system-systemd\x2dfsck.slice. Jul 2 07:55:47.171689 systemd[1]: Created slice user.slice. Jul 2 07:55:47.171718 systemd[1]: Started systemd-ask-password-console.path. Jul 2 07:55:47.171743 systemd[1]: Started systemd-ask-password-wall.path. Jul 2 07:55:47.171767 systemd[1]: Set up automount boot.automount. Jul 2 07:55:47.171790 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Jul 2 07:55:47.171814 systemd[1]: Stopped target initrd-switch-root.target. Jul 2 07:55:47.171837 systemd[1]: Stopped target initrd-fs.target. Jul 2 07:55:47.171865 systemd[1]: Stopped target initrd-root-fs.target. Jul 2 07:55:47.180534 systemd[1]: Reached target integritysetup.target. Jul 2 07:55:47.180593 systemd[1]: Reached target remote-cryptsetup.target. Jul 2 07:55:47.180620 systemd[1]: Reached target remote-fs.target. Jul 2 07:55:47.180645 systemd[1]: Reached target slices.target. Jul 2 07:55:47.180670 systemd[1]: Reached target swap.target. Jul 2 07:55:47.180693 systemd[1]: Reached target torcx.target. Jul 2 07:55:47.180725 systemd[1]: Reached target veritysetup.target. Jul 2 07:55:47.180749 systemd[1]: Listening on systemd-coredump.socket. Jul 2 07:55:47.180773 systemd[1]: Listening on systemd-initctl.socket. Jul 2 07:55:47.180804 systemd[1]: Listening on systemd-networkd.socket. Jul 2 07:55:47.180826 systemd[1]: Listening on systemd-udevd-control.socket. Jul 2 07:55:47.180850 systemd[1]: Listening on systemd-udevd-kernel.socket. 
Jul 2 07:55:47.180875 systemd[1]: Listening on systemd-userdbd.socket. Jul 2 07:55:47.180921 systemd[1]: Mounting dev-hugepages.mount... Jul 2 07:55:47.180946 systemd[1]: Mounting dev-mqueue.mount... Jul 2 07:55:47.180970 systemd[1]: Mounting media.mount... Jul 2 07:55:47.180994 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:55:47.181019 systemd[1]: Mounting sys-kernel-debug.mount... Jul 2 07:55:47.181047 systemd[1]: Mounting sys-kernel-tracing.mount... Jul 2 07:55:47.181079 systemd[1]: Mounting tmp.mount... Jul 2 07:55:47.181104 systemd[1]: Starting flatcar-tmpfiles.service... Jul 2 07:55:47.181130 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Jul 2 07:55:47.181154 systemd[1]: Starting kmod-static-nodes.service... Jul 2 07:55:47.181178 systemd[1]: Starting modprobe@configfs.service... Jul 2 07:55:47.181202 systemd[1]: Starting modprobe@dm_mod.service... Jul 2 07:55:47.181226 systemd[1]: Starting modprobe@drm.service... Jul 2 07:55:47.181250 systemd[1]: Starting modprobe@efi_pstore.service... Jul 2 07:55:47.181279 systemd[1]: Starting modprobe@fuse.service... Jul 2 07:55:47.181304 systemd[1]: Starting modprobe@loop.service... Jul 2 07:55:47.181328 kernel: fuse: init (API version 7.34) Jul 2 07:55:47.181354 kernel: loop: module loaded Jul 2 07:55:47.181380 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:55:47.181404 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:55:47.181427 systemd[1]: Stopped systemd-fsck-root.service. Jul 2 07:55:47.181450 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:55:47.181475 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:55:47.181502 systemd[1]: Stopped systemd-journald.service. Jul 2 07:55:47.181525 systemd[1]: Starting systemd-journald.service... 
Jul 2 07:55:47.181549 systemd[1]: Starting systemd-modules-load.service... Jul 2 07:55:47.181574 systemd[1]: Starting systemd-network-generator.service... Jul 2 07:55:47.181603 systemd-journald[998]: Journal started Jul 2 07:55:47.181712 systemd-journald[998]: Runtime Journal (/run/log/journal/70f1e54b7edfc1712eba4424830c70b5) is 8.0M, max 148.8M, 140.8M free. Jul 2 07:55:42.790000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:55:42.943000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:55:42.943000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Jul 2 07:55:42.943000 audit: BPF prog-id=10 op=LOAD Jul 2 07:55:42.943000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:55:42.943000 audit: BPF prog-id=11 op=LOAD Jul 2 07:55:42.943000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:55:43.097000 audit[907]: AVC avc: denied { associate } for pid=907 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Jul 2 07:55:43.097000 audit[907]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001058e2 a1=c00002ae40 a2=c000029100 a3=32 items=0 ppid=890 pid=907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:43.097000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:55:43.108000 audit[907]: AVC avc: denied { associate } for pid=907 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Jul 2 07:55:43.108000 audit[907]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001059b9 a2=1ed a3=0 items=2 ppid=890 pid=907 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:43.108000 audit: CWD cwd="/" Jul 2 07:55:43.108000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:43.108000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:43.108000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Jul 2 07:55:46.299000 audit: BPF prog-id=12 op=LOAD Jul 2 07:55:46.299000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:55:46.304000 audit: BPF prog-id=13 op=LOAD Jul 2 07:55:46.311000 audit: BPF prog-id=14 op=LOAD Jul 2 07:55:46.311000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:55:46.311000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:55:46.318000 audit: BPF 
prog-id=15 op=LOAD Jul 2 07:55:46.318000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:55:46.353000 audit: BPF prog-id=16 op=LOAD Jul 2 07:55:46.360000 audit: BPF prog-id=17 op=LOAD Jul 2 07:55:46.360000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:55:46.360000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:55:46.367000 audit: BPF prog-id=18 op=LOAD Jul 2 07:55:46.367000 audit: BPF prog-id=15 op=UNLOAD Jul 2 07:55:46.374000 audit: BPF prog-id=19 op=LOAD Jul 2 07:55:46.374000 audit: BPF prog-id=20 op=LOAD Jul 2 07:55:46.374000 audit: BPF prog-id=16 op=UNLOAD Jul 2 07:55:46.374000 audit: BPF prog-id=17 op=UNLOAD Jul 2 07:55:46.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:46.390000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:55:46.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:46.398000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.099000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.121000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:47.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.135000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.136000 audit: BPF prog-id=21 op=LOAD Jul 2 07:55:47.136000 audit: BPF prog-id=22 op=LOAD Jul 2 07:55:47.136000 audit: BPF prog-id=23 op=LOAD Jul 2 07:55:47.136000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:55:47.136000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:55:47.166000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:55:47.166000 audit[998]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff872af4d0 a2=4000 a3=7fff872af56c items=0 ppid=1 pid=998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:47.166000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:55:43.093205 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:55:46.299048 systemd[1]: Queued start job for default target multi-user.target. 
Jul 2 07:55:43.094386 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:55:46.377150 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 07:55:43.094421 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:55:43.094485 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Jul 2 07:55:43.094505 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=debug msg="skipped missing lower profile" missing profile=oem Jul 2 07:55:43.094561 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Jul 2 07:55:43.094587 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Jul 2 07:55:43.094916 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Jul 2 07:55:43.095004 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Jul 2 07:55:43.095031 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Jul 2 07:55:43.097788 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=debug msg="new archive/reference 
added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Jul 2 07:55:43.097860 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Jul 2 07:55:43.097909 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.5: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.5 Jul 2 07:55:43.097936 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Jul 2 07:55:43.097968 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.5: no such file or directory" path=/var/lib/torcx/store/3510.3.5 Jul 2 07:55:43.098007 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:43Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Jul 2 07:55:45.682336 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:45Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:55:45.682640 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:45Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:55:45.682772 
/usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:45Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:55:45.683067 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:45Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Jul 2 07:55:45.683132 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:45Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Jul 2 07:55:45.683208 /usr/lib/systemd/system-generators/torcx-generator[907]: time="2024-07-02T07:55:45Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Jul 2 07:55:47.198925 systemd[1]: Starting systemd-remount-fs.service... Jul 2 07:55:47.213998 systemd[1]: Starting systemd-udev-trigger.service... Jul 2 07:55:47.227925 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:55:47.233943 systemd[1]: Stopped verity-setup.service. Jul 2 07:55:47.238000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.252904 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:55:47.261935 systemd[1]: Started systemd-journald.service. 
Jul 2 07:55:47.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.271374 systemd[1]: Mounted dev-hugepages.mount. Jul 2 07:55:47.278241 systemd[1]: Mounted dev-mqueue.mount. Jul 2 07:55:47.286267 systemd[1]: Mounted media.mount. Jul 2 07:55:47.293249 systemd[1]: Mounted sys-kernel-debug.mount. Jul 2 07:55:47.302238 systemd[1]: Mounted sys-kernel-tracing.mount. Jul 2 07:55:47.312221 systemd[1]: Mounted tmp.mount. Jul 2 07:55:47.319502 systemd[1]: Finished flatcar-tmpfiles.service. Jul 2 07:55:47.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.328481 systemd[1]: Finished kmod-static-nodes.service. Jul 2 07:55:47.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.337483 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:55:47.337732 systemd[1]: Finished modprobe@configfs.service. Jul 2 07:55:47.344000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.346498 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Jul 2 07:55:47.346715 systemd[1]: Finished modprobe@dm_mod.service. Jul 2 07:55:47.353000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.355543 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:55:47.355756 systemd[1]: Finished modprobe@drm.service. Jul 2 07:55:47.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.364518 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:55:47.364733 systemd[1]: Finished modprobe@efi_pstore.service. Jul 2 07:55:47.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.371000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.373547 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:55:47.373762 systemd[1]: Finished modprobe@fuse.service. 
Jul 2 07:55:47.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.382484 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:55:47.382700 systemd[1]: Finished modprobe@loop.service. Jul 2 07:55:47.389000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.391621 systemd[1]: Finished systemd-modules-load.service. Jul 2 07:55:47.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.400487 systemd[1]: Finished systemd-network-generator.service. Jul 2 07:55:47.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.409505 systemd[1]: Finished systemd-remount-fs.service. 
Jul 2 07:55:47.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.418519 systemd[1]: Finished systemd-udev-trigger.service. Jul 2 07:55:47.425000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.427922 systemd[1]: Reached target network-pre.target. Jul 2 07:55:47.437639 systemd[1]: Mounting sys-fs-fuse-connections.mount... Jul 2 07:55:47.447617 systemd[1]: Mounting sys-kernel-config.mount... Jul 2 07:55:47.455131 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:55:47.458432 systemd[1]: Starting systemd-hwdb-update.service... Jul 2 07:55:47.467879 systemd[1]: Starting systemd-journal-flush.service... Jul 2 07:55:47.476142 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:55:47.479035 systemd[1]: Starting systemd-random-seed.service... Jul 2 07:55:47.480177 systemd-journald[998]: Time spent on flushing to /var/log/journal/70f1e54b7edfc1712eba4424830c70b5 is 78.008ms for 1158 entries. Jul 2 07:55:47.480177 systemd-journald[998]: System Journal (/var/log/journal/70f1e54b7edfc1712eba4424830c70b5) is 8.0M, max 584.8M, 576.8M free. Jul 2 07:55:47.597204 systemd-journald[998]: Received client request to flush runtime journal. Jul 2 07:55:47.549000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:47.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:47.494113 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Jul 2 07:55:47.496107 systemd[1]: Starting systemd-sysctl.service... Jul 2 07:55:47.505163 systemd[1]: Starting systemd-sysusers.service... Jul 2 07:55:47.599970 udevadm[1012]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Jul 2 07:55:47.513977 systemd[1]: Starting systemd-udev-settle.service... Jul 2 07:55:47.524612 systemd[1]: Mounted sys-fs-fuse-connections.mount. Jul 2 07:55:47.533234 systemd[1]: Mounted sys-kernel-config.mount. Jul 2 07:55:47.542437 systemd[1]: Finished systemd-random-seed.service. Jul 2 07:55:47.551531 systemd[1]: Finished systemd-sysctl.service. Jul 2 07:55:47.563800 systemd[1]: Reached target first-boot-complete.target. Jul 2 07:55:47.581921 systemd[1]: Finished systemd-sysusers.service. Jul 2 07:55:47.598583 systemd[1]: Finished systemd-journal-flush.service. Jul 2 07:55:47.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:48.188405 systemd[1]: Finished systemd-hwdb-update.service. 
Jul 2 07:55:48.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:48.196000 audit: BPF prog-id=24 op=LOAD Jul 2 07:55:48.196000 audit: BPF prog-id=25 op=LOAD Jul 2 07:55:48.196000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:55:48.196000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:55:48.199198 systemd[1]: Starting systemd-udevd.service... Jul 2 07:55:48.221827 systemd-udevd[1015]: Using default interface naming scheme 'v252'. Jul 2 07:55:48.273457 systemd[1]: Started systemd-udevd.service. Jul 2 07:55:48.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:48.282000 audit: BPF prog-id=26 op=LOAD Jul 2 07:55:48.285754 systemd[1]: Starting systemd-networkd.service... Jul 2 07:55:48.300000 audit: BPF prog-id=27 op=LOAD Jul 2 07:55:48.300000 audit: BPF prog-id=28 op=LOAD Jul 2 07:55:48.300000 audit: BPF prog-id=29 op=LOAD Jul 2 07:55:48.303426 systemd[1]: Starting systemd-userdbd.service... Jul 2 07:55:48.359011 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Jul 2 07:55:48.367501 systemd[1]: Started systemd-userdbd.service. Jul 2 07:55:48.376000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:55:48.462000 audit[1023]: AVC avc: denied { confidentiality } for pid=1023 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Jul 2 07:55:48.462000 audit[1023]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55b7b0be5d30 a1=3207c a2=7f27ac6b4bc5 a3=5 items=108 ppid=1015 pid=1023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:55:48.462000 audit: CWD cwd="/" Jul 2 07:55:48.462000 audit: PATH item=0 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=1 name=(null) inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=2 name=(null) inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=3 name=(null) inode=13678 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=4 name=(null) inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=5 name=(null) inode=13679 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=6 name=(null) 
inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=7 name=(null) inode=13680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=8 name=(null) inode=13680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=9 name=(null) inode=13681 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=10 name=(null) inode=13680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=11 name=(null) inode=13682 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=12 name=(null) inode=13680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=13 name=(null) inode=13683 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=14 name=(null) inode=13680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=15 name=(null) inode=13684 dev=00:0b mode=0100640 ouid=0 ogid=0 
rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=16 name=(null) inode=13680 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=17 name=(null) inode=13685 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=18 name=(null) inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=19 name=(null) inode=13686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=20 name=(null) inode=13686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=21 name=(null) inode=13687 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=22 name=(null) inode=13686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=23 name=(null) inode=13688 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=24 name=(null) inode=13686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 
nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=25 name=(null) inode=13689 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=26 name=(null) inode=13686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=27 name=(null) inode=13690 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=28 name=(null) inode=13686 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=29 name=(null) inode=13691 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=30 name=(null) inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=31 name=(null) inode=13692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=32 name=(null) inode=13692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=33 name=(null) inode=13693 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 
cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=34 name=(null) inode=13692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=35 name=(null) inode=13694 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=36 name=(null) inode=13692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=37 name=(null) inode=13695 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=38 name=(null) inode=13692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=39 name=(null) inode=13696 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=40 name=(null) inode=13692 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=41 name=(null) inode=13697 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=42 name=(null) inode=13677 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 
audit: PATH item=43 name=(null) inode=13698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=44 name=(null) inode=13698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=45 name=(null) inode=13699 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=46 name=(null) inode=13698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=47 name=(null) inode=13700 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=48 name=(null) inode=13698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=49 name=(null) inode=13701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=50 name=(null) inode=13698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=51 name=(null) inode=13702 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=52 name=(null) inode=13698 
dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=53 name=(null) inode=13703 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=54 name=(null) inode=1042 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=55 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=56 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=57 name=(null) inode=13705 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=58 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=59 name=(null) inode=13706 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=60 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=61 name=(null) inode=13707 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=62 name=(null) inode=13707 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=63 name=(null) inode=13708 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=64 name=(null) inode=13707 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=65 name=(null) inode=13709 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=66 name=(null) inode=13707 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=67 name=(null) inode=13710 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=68 name=(null) inode=13707 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=69 name=(null) inode=13711 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=70 name=(null) inode=13707 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=71 name=(null) inode=13712 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=72 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=73 name=(null) inode=13713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=74 name=(null) inode=13713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=75 name=(null) inode=13714 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=76 name=(null) inode=13713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=77 name=(null) inode=13715 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=78 name=(null) inode=13713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=79 name=(null) inode=13716 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 
Jul 2 07:55:48.462000 audit: PATH item=80 name=(null) inode=13713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=81 name=(null) inode=13717 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=82 name=(null) inode=13713 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=83 name=(null) inode=13718 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=84 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=85 name=(null) inode=13719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=86 name=(null) inode=13719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=87 name=(null) inode=13720 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=88 name=(null) inode=13719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=89 
name=(null) inode=13721 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=90 name=(null) inode=13719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=91 name=(null) inode=13722 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=92 name=(null) inode=13719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=93 name=(null) inode=13723 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=94 name=(null) inode=13719 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=95 name=(null) inode=13724 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=96 name=(null) inode=13704 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=97 name=(null) inode=13725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=98 name=(null) inode=13725 dev=00:0b mode=040750 
ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=99 name=(null) inode=13726 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=100 name=(null) inode=13725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=101 name=(null) inode=13727 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=102 name=(null) inode=13725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=103 name=(null) inode=13728 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=104 name=(null) inode=13725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=105 name=(null) inode=13729 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=106 name=(null) inode=13725 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PATH item=107 name=(null) inode=13730 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 
obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Jul 2 07:55:48.462000 audit: PROCTITLE proctitle="(udev-worker)" Jul 2 07:55:48.521086 kernel: piix4_smbus 0000:00:01.3: SMBus base address uninitialized - upgrade BIOS or use force_addr=0xaddr Jul 2 07:55:48.526837 systemd-networkd[1029]: lo: Link UP Jul 2 07:55:48.526851 systemd-networkd[1029]: lo: Gained carrier Jul 2 07:55:48.527611 systemd-networkd[1029]: Enumeration completed Jul 2 07:55:48.527773 systemd[1]: Started systemd-networkd.service. Jul 2 07:55:48.528167 systemd-networkd[1029]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:55:48.530237 systemd-networkd[1029]: eth0: Link UP Jul 2 07:55:48.530253 systemd-networkd[1029]: eth0: Gained carrier Jul 2 07:55:48.545082 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input2 Jul 2 07:55:48.545183 kernel: EDAC MC: Ver: 3.0.0 Jul 2 07:55:48.543000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:48.550091 systemd-networkd[1029]: eth0: DHCPv4 address 10.128.0.47/32, gateway 10.128.0.1 acquired from 169.254.169.254 Jul 2 07:55:48.564917 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Jul 2 07:55:48.596914 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1030) Jul 2 07:55:48.631570 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Jul 2 07:55:48.639917 kernel: ACPI: button: Power Button [PWRF] Jul 2 07:55:48.650987 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSLPBN:00/input/input4 Jul 2 07:55:48.657915 kernel: ACPI: button: Sleep Button [SLPF] Jul 2 07:55:48.703931 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:55:48.718456 systemd[1]: Finished systemd-udev-settle.service. Jul 2 07:55:48.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:48.728751 systemd[1]: Starting lvm2-activation-early.service... Jul 2 07:55:48.758064 lvm[1053]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:55:48.787268 systemd[1]: Finished lvm2-activation-early.service. Jul 2 07:55:48.794000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:48.796235 systemd[1]: Reached target cryptsetup.target. Jul 2 07:55:48.806592 systemd[1]: Starting lvm2-activation.service... Jul 2 07:55:48.812550 lvm[1054]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:55:48.844346 systemd[1]: Finished lvm2-activation.service. Jul 2 07:55:48.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:48.853249 systemd[1]: Reached target local-fs-pre.target. Jul 2 07:55:48.862184 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:55:48.862245 systemd[1]: Reached target local-fs.target. 
Jul 2 07:55:48.871048 systemd[1]: Reached target machines.target. Jul 2 07:55:48.881606 systemd[1]: Starting ldconfig.service... Jul 2 07:55:48.890044 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Jul 2 07:55:48.890148 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:55:48.891776 systemd[1]: Starting systemd-boot-update.service... Jul 2 07:55:48.901931 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Jul 2 07:55:48.914069 systemd[1]: Starting systemd-machine-id-commit.service... Jul 2 07:55:48.917572 systemd[1]: Starting systemd-sysext.service... Jul 2 07:55:48.918565 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1056 (bootctl) Jul 2 07:55:48.922278 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Jul 2 07:55:48.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:48.944557 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Jul 2 07:55:48.948704 systemd[1]: Unmounting usr-share-oem.mount... Jul 2 07:55:48.959380 systemd[1]: usr-share-oem.mount: Deactivated successfully. Jul 2 07:55:48.959664 systemd[1]: Unmounted usr-share-oem.mount. Jul 2 07:55:48.982939 kernel: loop0: detected capacity change from 0 to 210664 Jul 2 07:55:49.069359 systemd-fsck[1066]: fsck.fat 4.2 (2021-01-31) Jul 2 07:55:49.069359 systemd-fsck[1066]: /dev/sda1: 789 files, 119238/258078 clusters Jul 2 07:55:49.072056 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
Jul 2 07:55:49.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:49.083873 systemd[1]: Mounting boot.mount... Jul 2 07:55:49.130714 systemd[1]: Mounted boot.mount. Jul 2 07:55:49.155230 systemd[1]: Finished systemd-boot-update.service. Jul 2 07:55:49.162000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:49.466515 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:55:49.467244 systemd[1]: Finished systemd-machine-id-commit.service. Jul 2 07:55:49.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:49.511931 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:55:49.523049 ldconfig[1055]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:55:49.531233 systemd[1]: Finished ldconfig.service. Jul 2 07:55:49.540951 kernel: loop1: detected capacity change from 0 to 210664 Jul 2 07:55:49.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ldconfig comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:55:49.567300 (sd-sysext)[1071]: Using extensions 'kubernetes'. Jul 2 07:55:49.567989 (sd-sysext)[1071]: Merged extensions into '/usr'. Jul 2 07:55:49.590396 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 2 07:55:49.592502 systemd[1]: Mounting usr-share-oem.mount...
Jul 2 07:55:49.600351 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 07:55:49.602731 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 07:55:49.611974 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 07:55:49.620987 systemd[1]: Starting modprobe@loop.service...
Jul 2 07:55:49.629177 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 07:55:49.629455 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:55:49.629677 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:55:49.634522 systemd[1]: Mounted usr-share-oem.mount.
Jul 2 07:55:49.642663 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:55:49.642945 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 07:55:49.649000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:49.649000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:49.652068 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:55:49.652302 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 07:55:49.659000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:49.659000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:49.661785 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:55:49.662032 systemd[1]: Finished modprobe@loop.service.
Jul 2 07:55:49.668000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:49.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:49.670984 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 07:55:49.671197 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 07:55:49.672816 systemd[1]: Finished systemd-sysext.service.
Jul 2 07:55:49.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:49.682805 systemd[1]: Starting ensure-sysext.service...
Jul 2 07:55:49.691527 systemd[1]: Starting systemd-tmpfiles-setup.service...
Jul 2 07:55:49.703633 systemd[1]: Reloading.
Jul 2 07:55:49.723694 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Jul 2 07:55:49.732101 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jul 2 07:55:49.747742 systemd-tmpfiles[1078]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jul 2 07:55:49.825611 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-07-02T07:55:49Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]"
Jul 2 07:55:49.825676 /usr/lib/systemd/system-generators/torcx-generator[1097]: time="2024-07-02T07:55:49Z" level=info msg="torcx already run"
Jul 2 07:55:49.859098 systemd-networkd[1029]: eth0: Gained IPv6LL
Jul 2 07:55:49.973701 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Jul 2 07:55:49.974114 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Jul 2 07:55:50.017058 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jul 2 07:55:50.093000 audit: BPF prog-id=30 op=LOAD
Jul 2 07:55:50.094000 audit: BPF prog-id=27 op=UNLOAD
Jul 2 07:55:50.094000 audit: BPF prog-id=31 op=LOAD
Jul 2 07:55:50.094000 audit: BPF prog-id=32 op=LOAD
Jul 2 07:55:50.094000 audit: BPF prog-id=28 op=UNLOAD
Jul 2 07:55:50.094000 audit: BPF prog-id=29 op=UNLOAD
Jul 2 07:55:50.095000 audit: BPF prog-id=33 op=LOAD
Jul 2 07:55:50.095000 audit: BPF prog-id=21 op=UNLOAD
Jul 2 07:55:50.095000 audit: BPF prog-id=34 op=LOAD
Jul 2 07:55:50.095000 audit: BPF prog-id=35 op=LOAD
Jul 2 07:55:50.095000 audit: BPF prog-id=22 op=UNLOAD
Jul 2 07:55:50.095000 audit: BPF prog-id=23 op=UNLOAD
Jul 2 07:55:50.096000 audit: BPF prog-id=36 op=LOAD
Jul 2 07:55:50.096000 audit: BPF prog-id=26 op=UNLOAD
Jul 2 07:55:50.099000 audit: BPF prog-id=37 op=LOAD
Jul 2 07:55:50.099000 audit: BPF prog-id=38 op=LOAD
Jul 2 07:55:50.099000 audit: BPF prog-id=24 op=UNLOAD
Jul 2 07:55:50.099000 audit: BPF prog-id=25 op=UNLOAD
Jul 2 07:55:50.109000 systemd[1]: Finished systemd-tmpfiles-setup.service.
Jul 2 07:55:50.116000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:50.124193 systemd[1]: Starting audit-rules.service...
Jul 2 07:55:50.133873 systemd[1]: Starting clean-ca-certificates.service...
Jul 2 07:55:50.144153 systemd[1]: Starting oem-gce-enable-oslogin.service...
Jul 2 07:55:50.155498 systemd[1]: Starting systemd-journal-catalog-update.service...
Jul 2 07:55:50.164000 audit: BPF prog-id=39 op=LOAD
Jul 2 07:55:50.167546 systemd[1]: Starting systemd-resolved.service...
Jul 2 07:55:50.174000 audit: BPF prog-id=40 op=LOAD
Jul 2 07:55:50.178194 systemd[1]: Starting systemd-timesyncd.service...
Jul 2 07:55:50.187462 systemd[1]: Starting systemd-update-utmp.service...
Jul 2 07:55:50.195000 audit[1167]: SYSTEM_BOOT pid=1167 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:50.198074 systemd[1]: Finished clean-ca-certificates.service.
Jul 2 07:55:50.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jul 2 07:55:50.206735 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Jul 2 07:55:50.206943 systemd[1]: Finished oem-gce-enable-oslogin.service.
Jul 2 07:55:50.209000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Jul 2 07:55:50.209000 audit[1172]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffce26ac580 a2=420 a3=0 items=0 ppid=1142 pid=1172 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Jul 2 07:55:50.209000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Jul 2 07:55:50.211699 augenrules[1172]: No rules
Jul 2 07:55:50.216599 systemd[1]: Finished audit-rules.service.
Jul 2 07:55:50.224511 systemd[1]: Finished systemd-journal-catalog-update.service.
Jul 2 07:55:50.240508 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:55:50.242274 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 07:55:50.246533 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 07:55:50.257280 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 07:55:50.266302 systemd[1]: Starting modprobe@loop.service...
Jul 2 07:55:50.275377 systemd[1]: Starting oem-gce-enable-oslogin.service...
Jul 2 07:55:50.284119 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 07:55:50.284508 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:55:50.287073 enable-oslogin[1180]: /etc/pam.d/sshd already exists. Not enabling OS Login
Jul 2 07:55:50.287852 systemd[1]: Starting systemd-update-done.service...
Jul 2 07:55:50.295043 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 07:55:50.295370 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:55:50.298930 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:55:50.299162 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 07:55:50.308780 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:55:50.308999 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 07:55:50.318741 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:55:50.318976 systemd[1]: Finished modprobe@loop.service.
Jul 2 07:55:50.326491 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Jul 2 07:55:50.326769 systemd[1]: Finished oem-gce-enable-oslogin.service.
Jul 2 07:55:50.337069 systemd[1]: Finished systemd-update-done.service.
Jul 2 07:55:50.346494 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 07:55:50.346811 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 07:55:50.352560 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:55:50.354127 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Jul 2 07:55:50.358222 systemd[1]: Starting modprobe@dm_mod.service...
Jul 2 07:55:50.367192 systemd[1]: Starting modprobe@efi_pstore.service...
Jul 2 07:55:50.369143 systemd-resolved[1158]: Positive Trust Anchors:
Jul 2 07:55:50.369167 systemd-resolved[1158]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jul 2 07:55:50.369231 systemd-resolved[1158]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Jul 2 07:55:50.376229 systemd[1]: Starting modprobe@loop.service...
Jul 2 07:55:50.383616 systemd-resolved[1158]: Defaulting to hostname 'linux'.
Jul 2 07:55:50.385237 systemd[1]: Starting oem-gce-enable-oslogin.service...
Jul 2 07:55:50.391378 enable-oslogin[1185]: /etc/pam.d/sshd already exists. Not enabling OS Login
Jul 2 07:55:50.394103 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Jul 2 07:55:50.394358 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:55:50.394569 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jul 2 07:55:50.394737 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Jul 2 07:55:50.396587 systemd[1]: Started systemd-resolved.service.
Jul 2 07:55:50.405803 systemd[1]: Finished systemd-update-utmp.service.
Jul 2 07:55:50.414752 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jul 2 07:55:50.415018 systemd[1]: Finished modprobe@dm_mod.service.
Jul 2 07:55:50.424573 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jul 2 07:55:50.424724 systemd[1]: Finished modprobe@efi_pstore.service.
Jul 2 07:55:50.433530 systemd[1]: Started systemd-timesyncd.service.
Jul 2 07:55:50.433785 systemd-timesyncd[1164]: Contacted time server 169.254.169.254:123 (169.254.169.254).
Jul 2 07:55:50.433865 systemd-timesyncd[1164]: Initial clock synchronization to Tue 2024-07-02 07:55:50.103445 UTC.
Jul 2 07:55:50.443139 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jul 2 07:55:50.443393 systemd[1]: Finished modprobe@loop.service.
Jul 2 07:55:50.452748 systemd[1]: oem-gce-enable-oslogin.service: Deactivated successfully.
Jul 2 07:55:50.453006 systemd[1]: Finished oem-gce-enable-oslogin.service.
Jul 2 07:55:50.461940 systemd[1]: Reached target network.target.
Jul 2 07:55:50.470246 systemd[1]: Reached target nss-lookup.target.
Jul 2 07:55:50.479242 systemd[1]: Reached target time-set.target.
Jul 2 07:55:50.488198 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jul 2 07:55:50.488450 systemd[1]: Reached target sysinit.target.
Jul 2 07:55:50.497425 systemd[1]: Started motdgen.path.
Jul 2 07:55:50.504333 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Jul 2 07:55:50.514514 systemd[1]: Started logrotate.timer.
Jul 2 07:55:50.522453 systemd[1]: Started mdadm.timer.
Jul 2 07:55:50.529299 systemd[1]: Started systemd-tmpfiles-clean.timer.
Jul 2 07:55:50.538213 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 07:55:50.538412 systemd[1]: Reached target paths.target.
Jul 2 07:55:50.546270 systemd[1]: Reached target timers.target.
Jul 2 07:55:50.553794 systemd[1]: Listening on dbus.socket.
Jul 2 07:55:50.562950 systemd[1]: Starting docker.socket...
Jul 2 07:55:50.574586 systemd[1]: Listening on sshd.socket.
Jul 2 07:55:50.582331 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:55:50.582601 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Jul 2 07:55:50.584933 systemd[1]: Listening on docker.socket.
Jul 2 07:55:50.594734 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jul 2 07:55:50.594944 systemd[1]: Reached target sockets.target.
Jul 2 07:55:50.604222 systemd[1]: Reached target basic.target.
Jul 2 07:55:50.611237 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 07:55:50.611509 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Jul 2 07:55:50.613646 systemd[1]: Starting containerd.service...
Jul 2 07:55:50.623186 systemd[1]: Starting coreos-metadata-sshkeys@core.service...
Jul 2 07:55:50.635126 systemd[1]: Starting dbus.service...
Jul 2 07:55:50.643188 systemd[1]: Starting enable-oem-cloudinit.service...
Jul 2 07:55:50.652310 systemd[1]: Starting extend-filesystems.service...
Jul 2 07:55:50.660751 jq[1192]: false
Jul 2 07:55:50.660083 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Jul 2 07:55:50.662567 systemd[1]: Starting modprobe@drm.service...
Jul 2 07:55:50.672324 systemd[1]: Starting motdgen.service...
Jul 2 07:55:50.681130 systemd[1]: Starting prepare-helm.service...
Jul 2 07:55:50.691403 systemd[1]: Starting ssh-key-proc-cmdline.service...
Jul 2 07:55:50.701363 systemd[1]: Starting sshd-keygen.service...
Jul 2 07:55:50.708716 extend-filesystems[1193]: Found loop1
Jul 2 07:55:50.723799 extend-filesystems[1193]: Found sda
Jul 2 07:55:50.723799 extend-filesystems[1193]: Found sda1
Jul 2 07:55:50.723799 extend-filesystems[1193]: Found sda2
Jul 2 07:55:50.723799 extend-filesystems[1193]: Found sda3
Jul 2 07:55:50.723799 extend-filesystems[1193]: Found usr
Jul 2 07:55:50.723799 extend-filesystems[1193]: Found sda4
Jul 2 07:55:50.723799 extend-filesystems[1193]: Found sda6
Jul 2 07:55:50.723799 extend-filesystems[1193]: Found sda7
Jul 2 07:55:50.723799 extend-filesystems[1193]: Found sda9
Jul 2 07:55:50.723799 extend-filesystems[1193]: Checking size of /dev/sda9
Jul 2 07:55:51.016259 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 2538491 blocks
Jul 2 07:55:51.016327 kernel: EXT4-fs (sda9): resized filesystem to 2538491
Jul 2 07:55:50.711581 systemd[1]: Starting systemd-networkd-wait-online.service...
Jul 2 07:55:50.795580 dbus-daemon[1191]: [system] SELinux support is enabled
Jul 2 07:55:51.017332 extend-filesystems[1193]: Resized partition /dev/sda9
Jul 2 07:55:50.732281 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jul 2 07:55:50.798623 dbus-daemon[1191]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1029 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jul 2 07:55:51.056999 extend-filesystems[1222]: resize2fs 1.46.5 (30-Dec-2021)
Jul 2 07:55:51.056999 extend-filesystems[1222]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Jul 2 07:55:51.056999 extend-filesystems[1222]: old_desc_blocks = 1, new_desc_blocks = 2
Jul 2 07:55:51.056999 extend-filesystems[1222]: The filesystem on /dev/sda9 is now 2538491 (4k) blocks long.
Jul 2 07:55:50.732577 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionSecurity=!tpm2).
Jul 2 07:55:51.098144 update_engine[1215]: I0702 07:55:50.877589 1215 main.cc:92] Flatcar Update Engine starting
Jul 2 07:55:51.098144 update_engine[1215]: I0702 07:55:50.884624 1215 update_check_scheduler.cc:74] Next update check in 4m56s
Jul 2 07:55:50.851817 dbus-daemon[1191]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jul 2 07:55:51.099047 extend-filesystems[1193]: Resized filesystem in /dev/sda9
Jul 2 07:55:50.733477 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Jul 2 07:55:51.110154 jq[1217]: true
Jul 2 07:55:50.735353 systemd[1]: Starting update-engine.service...
Jul 2 07:55:50.744617 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Jul 2 07:55:51.111817 tar[1223]: linux-amd64/helm
Jul 2 07:55:50.760483 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jul 2 07:55:51.114521 jq[1226]: true
Jul 2 07:55:50.761108 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Jul 2 07:55:51.115280 env[1227]: time="2024-07-02T07:55:50.956979822Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Jul 2 07:55:50.762436 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jul 2 07:55:50.762905 systemd[1]: Finished modprobe@drm.service.
Jul 2 07:55:50.780304 systemd[1]: motdgen.service: Deactivated successfully.
Jul 2 07:55:50.780544 systemd[1]: Finished motdgen.service.
Jul 2 07:55:50.799825 systemd[1]: Started dbus.service.
Jul 2 07:55:50.817650 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jul 2 07:55:50.817961 systemd[1]: Finished ssh-key-proc-cmdline.service.
Jul 2 07:55:51.134864 bash[1262]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 07:55:51.135061 mkfs.ext4[1242]: mke2fs 1.46.5 (30-Dec-2021)
Jul 2 07:55:51.135061 mkfs.ext4[1242]: Discarding device blocks: done
Jul 2 07:55:51.135061 mkfs.ext4[1242]: Creating filesystem with 262144 4k blocks and 65536 inodes
Jul 2 07:55:51.135061 mkfs.ext4[1242]: Filesystem UUID: 3160d7a2-886a-49b2-ac67-1834547d8085
Jul 2 07:55:51.135061 mkfs.ext4[1242]: Superblock backups stored on blocks:
Jul 2 07:55:51.135061 mkfs.ext4[1242]: 32768, 98304, 163840, 229376
Jul 2 07:55:51.135061 mkfs.ext4[1242]: Allocating group tables: done
Jul 2 07:55:51.135061 mkfs.ext4[1242]: Writing inode tables: done
Jul 2 07:55:51.135061 mkfs.ext4[1242]: Creating journal (8192 blocks): done
Jul 2 07:55:51.135061 mkfs.ext4[1242]: Writing superblocks and filesystem accounting information: done
Jul 2 07:55:50.835143 systemd[1]: Finished systemd-networkd-wait-online.service.
Jul 2 07:55:50.850180 systemd[1]: Reached target network-online.target.
Jul 2 07:55:50.861553 systemd[1]: Starting kubelet.service...
Jul 2 07:55:50.921804 systemd[1]: Starting oem-gce.service...
Jul 2 07:55:51.139928 umount[1257]: umount: /var/lib/flatcar-oem-gce.img: not mounted.
Jul 2 07:55:50.933053 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jul 2 07:55:51.140395 env[1227]: time="2024-07-02T07:55:51.138643237Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Jul 2 07:55:51.140395 env[1227]: time="2024-07-02T07:55:51.138834728Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:55:50.933135 systemd[1]: Reached target system-config.target.
Jul 2 07:55:50.947275 systemd[1]: Starting systemd-logind.service...
Jul 2 07:55:50.954145 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jul 2 07:55:50.954266 systemd[1]: Reached target user-config.target.
Jul 2 07:55:50.964059 systemd[1]: Finished ensure-sysext.service.
Jul 2 07:55:50.972602 systemd[1]: extend-filesystems.service: Deactivated successfully.
Jul 2 07:55:50.972907 systemd[1]: Finished extend-filesystems.service.
Jul 2 07:55:51.007013 systemd[1]: Started update-engine.service.
Jul 2 07:55:51.027673 systemd[1]: Started locksmithd.service.
Jul 2 07:55:51.039141 systemd[1]: Starting systemd-hostnamed.service...
Jul 2 07:55:51.134146 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Jul 2 07:55:51.158426 env[1227]: time="2024-07-02T07:55:51.151409212Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.161-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:55:51.158426 env[1227]: time="2024-07-02T07:55:51.151464984Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:55:51.161446 kernel: loop2: detected capacity change from 0 to 2097152
Jul 2 07:55:51.161568 coreos-metadata[1190]: Jul 02 07:55:51.160 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/sshKeys: Attempt #1
Jul 2 07:55:51.163082 env[1227]: time="2024-07-02T07:55:51.162107936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:55:51.163082 env[1227]: time="2024-07-02T07:55:51.162169534Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Jul 2 07:55:51.163082 env[1227]: time="2024-07-02T07:55:51.162193320Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Jul 2 07:55:51.163082 env[1227]: time="2024-07-02T07:55:51.162213037Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Jul 2 07:55:51.163082 env[1227]: time="2024-07-02T07:55:51.162434923Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:55:51.163082 env[1227]: time="2024-07-02T07:55:51.162874011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Jul 2 07:55:51.163435 env[1227]: time="2024-07-02T07:55:51.163217491Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Jul 2 07:55:51.163435 env[1227]: time="2024-07-02T07:55:51.163250759Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Jul 2 07:55:51.163435 env[1227]: time="2024-07-02T07:55:51.163353826Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Jul 2 07:55:51.163435 env[1227]: time="2024-07-02T07:55:51.163394310Z" level=info msg="metadata content store policy set" policy=shared
Jul 2 07:55:51.173747 coreos-metadata[1190]: Jul 02 07:55:51.173 INFO Fetch failed with 404: resource not found
Jul 2 07:55:51.174003 coreos-metadata[1190]: Jul 02 07:55:51.173 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/ssh-keys: Attempt #1
Jul 2 07:55:51.175235 env[1227]: time="2024-07-02T07:55:51.174713150Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Jul 2 07:55:51.175235 env[1227]: time="2024-07-02T07:55:51.174767723Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Jul 2 07:55:51.175235 env[1227]: time="2024-07-02T07:55:51.174790927Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Jul 2 07:55:51.175235 env[1227]: time="2024-07-02T07:55:51.174857282Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Jul 2 07:55:51.175235 env[1227]: time="2024-07-02T07:55:51.174952333Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Jul 2 07:55:51.175235 env[1227]: time="2024-07-02T07:55:51.174980315Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Jul 2 07:55:51.175235 env[1227]: time="2024-07-02T07:55:51.175002975Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Jul 2 07:55:51.175235 env[1227]: time="2024-07-02T07:55:51.175027912Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Jul 2 07:55:51.175235 env[1227]: time="2024-07-02T07:55:51.175051161Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Jul 2 07:55:51.175235 env[1227]: time="2024-07-02T07:55:51.175071796Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Jul 2 07:55:51.175235 env[1227]: time="2024-07-02T07:55:51.175094108Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Jul 2 07:55:51.175810 coreos-metadata[1190]: Jul 02 07:55:51.174 INFO Fetch successful
Jul 2 07:55:51.175810 coreos-metadata[1190]: Jul 02 07:55:51.174 INFO Fetching http://169.254.169.254/computeMetadata/v1/instance/attributes/block-project-ssh-keys: Attempt #1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.175192167Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.177376172Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.177495501Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.177999715Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.178042116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.178071280Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.178153397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.178174575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.178252089Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.178273085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.178292402Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.178310958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.178328233Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.179578 env[1227]: time="2024-07-02T07:55:51.178346080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.180233 env[1227]: time="2024-07-02T07:55:51.178368941Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Jul 2 07:55:51.180233 env[1227]: time="2024-07-02T07:55:51.178530112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.180233 env[1227]: time="2024-07-02T07:55:51.178552328Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.180233 env[1227]: time="2024-07-02T07:55:51.178573389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.180233 env[1227]: time="2024-07-02T07:55:51.178592132Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Jul 2 07:55:51.180233 env[1227]: time="2024-07-02T07:55:51.178618072Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Jul 2 07:55:51.180233 env[1227]: time="2024-07-02T07:55:51.178635999Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Jul 2 07:55:51.180233 env[1227]: time="2024-07-02T07:55:51.178661962Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Jul 2 07:55:51.180233 env[1227]: time="2024-07-02T07:55:51.178710423Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Jul 2 07:55:51.180594 env[1227]: time="2024-07-02T07:55:51.179061879Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Jul 2 07:55:51.180594 env[1227]: time="2024-07-02T07:55:51.179145946Z" level=info msg="Connect containerd service"
Jul 2 07:55:51.180594 env[1227]: time="2024-07-02T07:55:51.179191591Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Jul 2 07:55:51.188251 kernel: EXT4-fs (loop2): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Jul 2 07:55:51.181858 systemd[1]: Started containerd.service.
Jul 2 07:55:51.188417 coreos-metadata[1190]: Jul 02 07:55:51.185 INFO Fetch failed with 404: resource not found
Jul 2 07:55:51.188417 coreos-metadata[1190]: Jul 02 07:55:51.185 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/sshKeys: Attempt #1
Jul 2 07:55:51.188536 env[1227]: time="2024-07-02T07:55:51.181268534Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 07:55:51.188536 env[1227]: time="2024-07-02T07:55:51.181620248Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Jul 2 07:55:51.188536 env[1227]: time="2024-07-02T07:55:51.181681571Z" level=info msg=serving... address=/run/containerd/containerd.sock
Jul 2 07:55:51.188536 env[1227]: time="2024-07-02T07:55:51.181920893Z" level=info msg="containerd successfully booted in 0.226730s"
Jul 2 07:55:51.191189 coreos-metadata[1190]: Jul 02 07:55:51.191 INFO Fetch failed with 404: resource not found
Jul 2 07:55:51.191413 coreos-metadata[1190]: Jul 02 07:55:51.191 INFO Fetching http://169.254.169.254/computeMetadata/v1/project/attributes/ssh-keys: Attempt #1
Jul 2 07:55:51.191649 env[1227]: time="2024-07-02T07:55:51.191539941Z" level=info msg="Start subscribing containerd event"
Jul 2 07:55:51.191790 env[1227]: time="2024-07-02T07:55:51.191770416Z" level=info msg="Start recovering state"
Jul 2 07:55:51.192021 env[1227]: time="2024-07-02T07:55:51.191997050Z" level=info msg="Start event monitor"
Jul 2 07:55:51.192160 env[1227]: time="2024-07-02T07:55:51.192116990Z" level=info msg="Start snapshots syncer"
Jul 2 07:55:51.192312 env[1227]: time="2024-07-02T07:55:51.192284593Z" level=info msg="Start cni network conf syncer for default"
Jul 2 07:55:51.192413 env[1227]: time="2024-07-02T07:55:51.192393264Z" level=info msg="Start streaming server"
Jul 2 07:55:51.194639 coreos-metadata[1190]: Jul 02 07:55:51.194 INFO Fetch successful
Jul 2 07:55:51.198313 unknown[1190]: wrote ssh authorized keys file for user: core
Jul 2 07:55:51.254567 update-ssh-keys[1268]: Updated "/home/core/.ssh/authorized_keys"
Jul 2 07:55:51.255123 systemd[1]: Finished coreos-metadata-sshkeys@core.service.
Jul 2 07:55:51.316751 dbus-daemon[1191]: [system] Successfully activated service 'org.freedesktop.hostname1'
Jul 2 07:55:51.316999 systemd[1]: Started systemd-hostnamed.service.
Jul 2 07:55:51.318071 dbus-daemon[1191]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.5' (uid=0 pid=1251 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0")
Jul 2 07:55:51.324930 systemd-logind[1236]: Watching system buttons on /dev/input/event2 (Power Button)
Jul 2 07:55:51.324973 systemd-logind[1236]: Watching system buttons on /dev/input/event3 (Sleep Button)
Jul 2 07:55:51.325001 systemd-logind[1236]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Jul 2 07:55:51.339972 systemd-logind[1236]: New seat seat0.
Jul 2 07:55:51.343689 systemd[1]: Starting polkit.service...
Jul 2 07:55:51.351863 systemd[1]: Started systemd-logind.service.
Jul 2 07:55:51.456520 polkitd[1270]: Started polkitd version 121
Jul 2 07:55:51.513687 polkitd[1270]: Loading rules from directory /etc/polkit-1/rules.d
Jul 2 07:55:51.513777 polkitd[1270]: Loading rules from directory /usr/share/polkit-1/rules.d
Jul 2 07:55:51.523009 polkitd[1270]: Finished loading, compiling and executing 2 rules
Jul 2 07:55:51.523680 dbus-daemon[1191]: [system] Successfully activated service 'org.freedesktop.PolicyKit1'
Jul 2 07:55:51.523913 systemd[1]: Started polkit.service.
Jul 2 07:55:51.525001 polkitd[1270]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jul 2 07:55:51.569445 systemd-hostnamed[1251]: Hostname set to (transient)
Jul 2 07:55:51.572570 systemd-resolved[1158]: System hostname changed to 'ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal'.
Jul 2 07:55:52.371983 tar[1223]: linux-amd64/LICENSE
Jul 2 07:55:52.375255 tar[1223]: linux-amd64/README.md
Jul 2 07:55:52.394772 systemd[1]: Finished prepare-helm.service.
Jul 2 07:55:53.013489 systemd[1]: Started kubelet.service.
Jul 2 07:55:53.541177 locksmithd[1248]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Jul 2 07:55:54.173245 sshd_keygen[1216]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Jul 2 07:55:54.216758 systemd[1]: Finished sshd-keygen.service.
Jul 2 07:55:54.226733 systemd[1]: Starting issuegen.service...
Jul 2 07:55:54.244421 systemd[1]: issuegen.service: Deactivated successfully.
Jul 2 07:55:54.244689 systemd[1]: Finished issuegen.service.
Jul 2 07:55:54.255938 systemd[1]: Starting systemd-user-sessions.service...
Jul 2 07:55:54.271363 systemd[1]: Finished systemd-user-sessions.service.
Jul 2 07:55:54.282746 systemd[1]: Started getty@tty1.service.
Jul 2 07:55:54.293242 systemd[1]: Started serial-getty@ttyS0.service.
Jul 2 07:55:54.301495 systemd[1]: Reached target getty.target.
Jul 2 07:55:54.357383 kubelet[1293]: E0702 07:55:54.357299 1293 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 07:55:54.360343 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 07:55:54.360579 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 07:55:54.361020 systemd[1]: kubelet.service: Consumed 1.441s CPU time.
Jul 2 07:55:56.830049 systemd[1]: var-lib-flatcar\x2doem\x2dgce.mount: Deactivated successfully.
Jul 2 07:55:58.782926 kernel: loop2: detected capacity change from 0 to 2097152
Jul 2 07:55:58.799323 systemd-nspawn[1315]: Spawning container oem-gce on /var/lib/flatcar-oem-gce.img.
Jul 2 07:55:58.799323 systemd-nspawn[1315]: Press ^] three times within 1s to kill container.
Jul 2 07:55:58.815914 kernel: EXT4-fs (loop2): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Jul 2 07:55:58.899033 systemd[1]: Started oem-gce.service.
Jul 2 07:55:58.906640 systemd[1]: Reached target multi-user.target.
Jul 2 07:55:58.918474 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Jul 2 07:55:58.932802 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jul 2 07:55:58.933102 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Jul 2 07:55:58.943284 systemd[1]: Startup finished in 1.036s (kernel) + 7.841s (initrd) + 16.281s (userspace) = 25.159s.
Jul 2 07:55:58.960073 systemd-nspawn[1315]: + '[' -e /etc/default/instance_configs.cfg.template ']'
Jul 2 07:55:58.960073 systemd-nspawn[1315]: + echo -e '[InstanceSetup]\nset_host_keys = false'
Jul 2 07:55:58.960351 systemd-nspawn[1315]: + /usr/bin/google_instance_setup
Jul 2 07:55:59.631725 instance-setup[1321]: INFO Running google_set_multiqueue.
Jul 2 07:55:59.648692 instance-setup[1321]: INFO Set channels for eth0 to 2.
Jul 2 07:55:59.652500 instance-setup[1321]: INFO Setting /proc/irq/31/smp_affinity_list to 0 for device virtio1.
Jul 2 07:55:59.653856 instance-setup[1321]: INFO /proc/irq/31/smp_affinity_list: real affinity 0
Jul 2 07:55:59.654340 instance-setup[1321]: INFO Setting /proc/irq/32/smp_affinity_list to 0 for device virtio1.
Jul 2 07:55:59.655770 instance-setup[1321]: INFO /proc/irq/32/smp_affinity_list: real affinity 0
Jul 2 07:55:59.656165 instance-setup[1321]: INFO Setting /proc/irq/33/smp_affinity_list to 1 for device virtio1.
Jul 2 07:55:59.657486 instance-setup[1321]: INFO /proc/irq/33/smp_affinity_list: real affinity 1
Jul 2 07:55:59.657935 instance-setup[1321]: INFO Setting /proc/irq/34/smp_affinity_list to 1 for device virtio1.
Jul 2 07:55:59.659366 instance-setup[1321]: INFO /proc/irq/34/smp_affinity_list: real affinity 1
Jul 2 07:55:59.670603 instance-setup[1321]: INFO Queue 0 XPS=1 for /sys/class/net/eth0/queues/tx-0/xps_cpus
Jul 2 07:55:59.671055 instance-setup[1321]: INFO Queue 1 XPS=2 for /sys/class/net/eth0/queues/tx-1/xps_cpus
Jul 2 07:55:59.714176 systemd-nspawn[1315]: + /usr/bin/google_metadata_script_runner --script-type startup
Jul 2 07:56:00.051282 startup-script[1352]: INFO Starting startup scripts.
Jul 2 07:56:00.065092 startup-script[1352]: INFO No startup scripts found in metadata.
Jul 2 07:56:00.065257 startup-script[1352]: INFO Finished running startup scripts.
Jul 2 07:56:00.103670 systemd-nspawn[1315]: + trap 'stopping=1 ; kill "${daemon_pids[@]}" || :' SIGTERM
Jul 2 07:56:00.103670 systemd-nspawn[1315]: + daemon_pids=()
Jul 2 07:56:00.104410 systemd-nspawn[1315]: + for d in accounts clock_skew network
Jul 2 07:56:00.104410 systemd-nspawn[1315]: + daemon_pids+=($!)
Jul 2 07:56:00.104410 systemd-nspawn[1315]: + for d in accounts clock_skew network
Jul 2 07:56:00.104410 systemd-nspawn[1315]: + daemon_pids+=($!)
Jul 2 07:56:00.104410 systemd-nspawn[1315]: + for d in accounts clock_skew network
Jul 2 07:56:00.104668 systemd-nspawn[1315]: + daemon_pids+=($!)
Jul 2 07:56:00.104668 systemd-nspawn[1315]: + NOTIFY_SOCKET=/run/systemd/notify
Jul 2 07:56:00.104668 systemd-nspawn[1315]: + /usr/bin/systemd-notify --ready
Jul 2 07:56:00.105292 systemd-nspawn[1315]: + /usr/bin/google_clock_skew_daemon
Jul 2 07:56:00.105439 systemd-nspawn[1315]: + /usr/bin/google_network_daemon
Jul 2 07:56:00.105879 systemd-nspawn[1315]: + /usr/bin/google_accounts_daemon
Jul 2 07:56:00.144277 systemd[1]: Created slice system-sshd.slice.
Jul 2 07:56:00.148821 systemd[1]: Started sshd@0-10.128.0.47:22-147.75.109.163:54450.service.
Jul 2 07:56:00.172142 systemd-nspawn[1315]: + wait -n 36 37 38
Jul 2 07:56:00.483077 sshd[1360]: Accepted publickey for core from 147.75.109.163 port 54450 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:56:00.486367 sshd[1360]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:56:00.505768 systemd[1]: Created slice user-500.slice.
Jul 2 07:56:00.507818 systemd[1]: Starting user-runtime-dir@500.service...
Jul 2 07:56:00.526332 systemd-logind[1236]: New session 1 of user core.
Jul 2 07:56:00.535699 systemd[1]: Finished user-runtime-dir@500.service.
Jul 2 07:56:00.540306 systemd[1]: Starting user@500.service...
Jul 2 07:56:00.563204 (systemd)[1363]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:56:00.770388 systemd[1363]: Queued start job for default target default.target.
Jul 2 07:56:00.771394 systemd[1363]: Reached target paths.target.
Jul 2 07:56:00.771427 systemd[1363]: Reached target sockets.target.
Jul 2 07:56:00.771450 systemd[1363]: Reached target timers.target.
Jul 2 07:56:00.771471 systemd[1363]: Reached target basic.target.
Jul 2 07:56:00.771554 systemd[1363]: Reached target default.target.
Jul 2 07:56:00.771610 systemd[1363]: Startup finished in 190ms.
Jul 2 07:56:00.771651 systemd[1]: Started user@500.service.
Jul 2 07:56:00.773411 systemd[1]: Started session-1.scope.
Jul 2 07:56:00.885422 google-networking[1357]: INFO Starting Google Networking daemon.
Jul 2 07:56:00.999555 systemd[1]: Started sshd@1-10.128.0.47:22-147.75.109.163:54452.service.
Jul 2 07:56:01.081698 groupadd[1381]: group added to /etc/group: name=google-sudoers, GID=1000
Jul 2 07:56:01.086607 groupadd[1381]: group added to /etc/gshadow: name=google-sudoers
Jul 2 07:56:01.091274 groupadd[1381]: new group: name=google-sudoers, GID=1000
Jul 2 07:56:01.117282 google-accounts[1355]: INFO Starting Google Accounts daemon.
Jul 2 07:56:01.144984 google-clock-skew[1356]: INFO Starting Google Clock Skew daemon.
Jul 2 07:56:01.153756 google-accounts[1355]: WARNING OS Login not installed.
Jul 2 07:56:01.155331 google-accounts[1355]: INFO Creating a new user account for 0.
Jul 2 07:56:01.159261 google-clock-skew[1356]: INFO Clock drift token has changed: 0.
Jul 2 07:56:01.163263 systemd-nspawn[1315]: hwclock: Cannot access the Hardware Clock via any known method.
Jul 2 07:56:01.163263 systemd-nspawn[1315]: hwclock: Use the --verbose option to see the details of our search for an access method.
Jul 2 07:56:01.164026 systemd-nspawn[1315]: useradd: invalid user name '0': use --badname to ignore
Jul 2 07:56:01.164107 google-clock-skew[1356]: WARNING Failed to sync system time with hardware clock.
Jul 2 07:56:01.165188 google-accounts[1355]: WARNING Could not create user 0. Command '['useradd', '-m', '-s', '/bin/bash', '-p', '*', '0']' returned non-zero exit status 3..
Jul 2 07:56:01.308920 sshd[1379]: Accepted publickey for core from 147.75.109.163 port 54452 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:56:01.310907 sshd[1379]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:56:01.317265 systemd-logind[1236]: New session 2 of user core.
Jul 2 07:56:01.318067 systemd[1]: Started session-2.scope.
Jul 2 07:56:01.521176 sshd[1379]: pam_unix(sshd:session): session closed for user core
Jul 2 07:56:01.525982 systemd[1]: sshd@1-10.128.0.47:22-147.75.109.163:54452.service: Deactivated successfully.
Jul 2 07:56:01.527099 systemd[1]: session-2.scope: Deactivated successfully.
Jul 2 07:56:01.527962 systemd-logind[1236]: Session 2 logged out. Waiting for processes to exit.
Jul 2 07:56:01.529360 systemd-logind[1236]: Removed session 2.
Jul 2 07:56:01.567276 systemd[1]: Started sshd@2-10.128.0.47:22-147.75.109.163:54466.service.
Jul 2 07:56:01.855942 sshd[1396]: Accepted publickey for core from 147.75.109.163 port 54466 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:56:01.858175 sshd[1396]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:56:01.864301 systemd-logind[1236]: New session 3 of user core.
Jul 2 07:56:01.865119 systemd[1]: Started session-3.scope.
Jul 2 07:56:02.064049 sshd[1396]: pam_unix(sshd:session): session closed for user core
Jul 2 07:56:02.068534 systemd[1]: sshd@2-10.128.0.47:22-147.75.109.163:54466.service: Deactivated successfully.
Jul 2 07:56:02.069631 systemd[1]: session-3.scope: Deactivated successfully.
Jul 2 07:56:02.070469 systemd-logind[1236]: Session 3 logged out. Waiting for processes to exit.
Jul 2 07:56:02.071714 systemd-logind[1236]: Removed session 3.
Jul 2 07:56:02.108999 systemd[1]: Started sshd@3-10.128.0.47:22-147.75.109.163:54474.service.
Jul 2 07:56:02.399133 sshd[1402]: Accepted publickey for core from 147.75.109.163 port 54474 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:56:02.401155 sshd[1402]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:56:02.407754 systemd-logind[1236]: New session 4 of user core.
Jul 2 07:56:02.408562 systemd[1]: Started session-4.scope.
Jul 2 07:56:02.611239 sshd[1402]: pam_unix(sshd:session): session closed for user core
Jul 2 07:56:02.615072 systemd[1]: sshd@3-10.128.0.47:22-147.75.109.163:54474.service: Deactivated successfully.
Jul 2 07:56:02.616122 systemd[1]: session-4.scope: Deactivated successfully.
Jul 2 07:56:02.617017 systemd-logind[1236]: Session 4 logged out. Waiting for processes to exit.
Jul 2 07:56:02.618286 systemd-logind[1236]: Removed session 4.
Jul 2 07:56:02.657642 systemd[1]: Started sshd@4-10.128.0.47:22-147.75.109.163:40798.service.
Jul 2 07:56:02.947682 sshd[1408]: Accepted publickey for core from 147.75.109.163 port 40798 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:56:02.949386 sshd[1408]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:56:02.956301 systemd[1]: Started session-5.scope.
Jul 2 07:56:02.957156 systemd-logind[1236]: New session 5 of user core.
Jul 2 07:56:03.144143 sudo[1411]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Jul 2 07:56:03.144630 sudo[1411]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Jul 2 07:56:03.177993 systemd[1]: Starting docker.service...
Jul 2 07:56:03.231480 env[1421]: time="2024-07-02T07:56:03.230783491Z" level=info msg="Starting up"
Jul 2 07:56:03.232560 env[1421]: time="2024-07-02T07:56:03.232531379Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 07:56:03.232693 env[1421]: time="2024-07-02T07:56:03.232676727Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 07:56:03.232786 env[1421]: time="2024-07-02T07:56:03.232768228Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 <nil>}] <nil>}" module=grpc
Jul 2 07:56:03.232853 env[1421]: time="2024-07-02T07:56:03.232840214Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 07:56:03.235529 env[1421]: time="2024-07-02T07:56:03.235502873Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Jul 2 07:56:03.235652 env[1421]: time="2024-07-02T07:56:03.235635731Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Jul 2 07:56:03.235749 env[1421]: time="2024-07-02T07:56:03.235723155Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 <nil>}] <nil>}" module=grpc
Jul 2 07:56:03.235815 env[1421]: time="2024-07-02T07:56:03.235802211Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Jul 2 07:56:03.274029 env[1421]: time="2024-07-02T07:56:03.273981463Z" level=info msg="Loading containers: start."
Jul 2 07:56:03.446926 kernel: Initializing XFRM netlink socket
Jul 2 07:56:03.491782 env[1421]: time="2024-07-02T07:56:03.491641388Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Jul 2 07:56:03.572253 systemd-networkd[1029]: docker0: Link UP
Jul 2 07:56:03.588448 env[1421]: time="2024-07-02T07:56:03.588380188Z" level=info msg="Loading containers: done."
Jul 2 07:56:03.605606 env[1421]: time="2024-07-02T07:56:03.605522423Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2
Jul 2 07:56:03.605921 env[1421]: time="2024-07-02T07:56:03.605867530Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23
Jul 2 07:56:03.606107 env[1421]: time="2024-07-02T07:56:03.606065140Z" level=info msg="Daemon has completed initialization"
Jul 2 07:56:03.628260 systemd[1]: Started docker.service.
Jul 2 07:56:03.641694 env[1421]: time="2024-07-02T07:56:03.641604016Z" level=info msg="API listen on /run/docker.sock"
Jul 2 07:56:04.612014 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
Jul 2 07:56:04.612406 systemd[1]: Stopped kubelet.service.
Jul 2 07:56:04.612488 systemd[1]: kubelet.service: Consumed 1.441s CPU time.
Jul 2 07:56:04.615239 systemd[1]: Starting kubelet.service...
Jul 2 07:56:04.741095 env[1227]: time="2024-07-02T07:56:04.741039782Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\""
Jul 2 07:56:04.850290 systemd[1]: Started kubelet.service.
Jul 2 07:56:04.924564 kubelet[1555]: E0702 07:56:04.924412 1555 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 07:56:04.928957 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 07:56:04.929197 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 07:56:05.233090 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1155202293.mount: Deactivated successfully.
Jul 2 07:56:07.225815 env[1227]: time="2024-07-02T07:56:07.225738272Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:07.228953 env[1227]: time="2024-07-02T07:56:07.228904069Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:07.231616 env[1227]: time="2024-07-02T07:56:07.231564982Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:07.234239 env[1227]: time="2024-07-02T07:56:07.234188075Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:340ab4a1d66a60630a7a298aa0b2576fcd82e51ecdddb751cf61e5d3846fde2d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:07.235279 env[1227]: time="2024-07-02T07:56:07.235221900Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.2\" returns image reference \"sha256:56ce0fd9fb532bcb552ddbdbe3064189ce823a71693d97ff7a0a7a4ff6bffbbe\""
Jul 2 07:56:07.250338 env[1227]: time="2024-07-02T07:56:07.250277050Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\""
Jul 2 07:56:09.207990 env[1227]: time="2024-07-02T07:56:09.207919748Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:09.210751 env[1227]: time="2024-07-02T07:56:09.210696272Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:09.213412 env[1227]: time="2024-07-02T07:56:09.213369255Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:09.215861 env[1227]: time="2024-07-02T07:56:09.215816570Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:4c412bc1fc585ddeba10d34a02e7507ea787ec2c57256d4c18fd230377ab048e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:09.216956 env[1227]: time="2024-07-02T07:56:09.216901802Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.2\" returns image reference \"sha256:e874818b3caac34f68704eb96bf248d0c8116b1262ab549d45d39dd3dd775974\""
Jul 2 07:56:09.231674 env[1227]: time="2024-07-02T07:56:09.231612634Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\""
Jul 2 07:56:10.465701 env[1227]: time="2024-07-02T07:56:10.465630313Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:10.468732 env[1227]: time="2024-07-02T07:56:10.468678245Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:10.471713 env[1227]: time="2024-07-02T07:56:10.471658351Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:10.474318 env[1227]: time="2024-07-02T07:56:10.474265394Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:0ed75a333704f5d315395c6ec04d7af7405715537069b65d40b43ec1c8e030bc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:10.475388 env[1227]: time="2024-07-02T07:56:10.475330833Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.2\" returns image reference \"sha256:7820c83aa139453522e9028341d0d4f23ca2721ec80c7a47425446d11157b940\""
Jul 2 07:56:10.490266 env[1227]: time="2024-07-02T07:56:10.490218668Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\""
Jul 2 07:56:11.681198 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount684849565.mount: Deactivated successfully.
Jul 2 07:56:12.385262 env[1227]: time="2024-07-02T07:56:12.385193987Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:12.388280 env[1227]: time="2024-07-02T07:56:12.388220424Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:12.390940 env[1227]: time="2024-07-02T07:56:12.390865703Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.30.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:12.393541 env[1227]: time="2024-07-02T07:56:12.393490246Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:8a44c6e094af3dea3de57fa967e201608a358a3bd8b4e3f31ab905bbe4108aec,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:12.394459 env[1227]: time="2024-07-02T07:56:12.394340382Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.2\" returns image reference \"sha256:53c535741fb446f6b34d720fdc5748db368ef96771111f3892682e6eab8f3772\""
Jul 2 07:56:12.409837 env[1227]: time="2024-07-02T07:56:12.409773130Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\""
Jul 2 07:56:12.802575 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2172988552.mount: Deactivated successfully.
Jul 2 07:56:13.975414 env[1227]: time="2024-07-02T07:56:13.975334339Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:13.978689 env[1227]: time="2024-07-02T07:56:13.978634289Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:13.981328 env[1227]: time="2024-07-02T07:56:13.981280248Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:13.984248 env[1227]: time="2024-07-02T07:56:13.984198470Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:13.986610 env[1227]: time="2024-07-02T07:56:13.986545143Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\""
Jul 2 07:56:14.004214 env[1227]: time="2024-07-02T07:56:14.004142485Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\""
Jul 2 07:56:14.441533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1788733165.mount: Deactivated successfully.
Jul 2 07:56:14.449413 env[1227]: time="2024-07-02T07:56:14.449348520Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:14.451756 env[1227]: time="2024-07-02T07:56:14.451698694Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:14.453950 env[1227]: time="2024-07-02T07:56:14.453909356Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:14.456572 env[1227]: time="2024-07-02T07:56:14.456530751Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:14.457206 env[1227]: time="2024-07-02T07:56:14.457147787Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\""
Jul 2 07:56:14.472617 env[1227]: time="2024-07-02T07:56:14.472567466Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\""
Jul 2 07:56:14.840466 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3716581634.mount: Deactivated successfully.
Jul 2 07:56:15.180354 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jul 2 07:56:15.180666 systemd[1]: Stopped kubelet.service.
Jul 2 07:56:15.183171 systemd[1]: Starting kubelet.service...
Jul 2 07:56:15.375726 systemd[1]: Started kubelet.service.
Jul 2 07:56:15.511417 kubelet[1598]: E0702 07:56:15.511280 1598 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jul 2 07:56:15.514286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jul 2 07:56:15.514497 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jul 2 07:56:17.636636 env[1227]: time="2024-07-02T07:56:17.636560700Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:17.639551 env[1227]: time="2024-07-02T07:56:17.639501119Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:17.642119 env[1227]: time="2024-07-02T07:56:17.642073325Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.12-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:17.644758 env[1227]: time="2024-07-02T07:56:17.644687143Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:17.645788 env[1227]: time="2024-07-02T07:56:17.645738623Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\""
Jul 2 07:56:21.279604 systemd[1]: Stopped kubelet.service.
Jul 2 07:56:21.283090 systemd[1]: Starting kubelet.service...
Jul 2 07:56:21.319829 systemd[1]: Reloading. Jul 2 07:56:21.423949 /usr/lib/systemd/system-generators/torcx-generator[1689]: time="2024-07-02T07:56:21Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:56:21.424004 /usr/lib/systemd/system-generators/torcx-generator[1689]: time="2024-07-02T07:56:21Z" level=info msg="torcx already run" Jul 2 07:56:21.582459 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:56:21.582488 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:56:21.607648 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:56:21.747453 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jul 2 07:56:21.762138 systemd[1]: Started kubelet.service. Jul 2 07:56:21.770196 systemd[1]: Stopping kubelet.service... Jul 2 07:56:21.770796 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:56:21.771145 systemd[1]: Stopped kubelet.service. Jul 2 07:56:21.773603 systemd[1]: Starting kubelet.service... Jul 2 07:56:21.966396 systemd[1]: Started kubelet.service. Jul 2 07:56:22.028037 kubelet[1742]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jul 2 07:56:22.028037 kubelet[1742]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:56:22.028037 kubelet[1742]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:56:22.031660 kubelet[1742]: I0702 07:56:22.031577 1742 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:56:22.525760 kubelet[1742]: I0702 07:56:22.525713 1742 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 07:56:22.526014 kubelet[1742]: I0702 07:56:22.525994 1742 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:56:22.526644 kubelet[1742]: I0702 07:56:22.526617 1742 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 07:56:22.556397 kubelet[1742]: E0702 07:56:22.556343 1742 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.128.0.47:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:22.558100 kubelet[1742]: I0702 07:56:22.558064 1742 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:56:22.584276 kubelet[1742]: I0702 07:56:22.584211 1742 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:56:22.584623 kubelet[1742]: I0702 07:56:22.584571 1742 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:56:22.584904 kubelet[1742]: I0702 07:56:22.584610 1742 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:56:22.585102 kubelet[1742]: I0702 07:56:22.584917 1742 topology_manager.go:138] "Creating 
topology manager with none policy" Jul 2 07:56:22.585102 kubelet[1742]: I0702 07:56:22.584937 1742 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:56:22.585232 kubelet[1742]: I0702 07:56:22.585122 1742 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:56:22.586699 kubelet[1742]: I0702 07:56:22.586669 1742 kubelet.go:400] "Attempting to sync node with API server" Jul 2 07:56:22.586699 kubelet[1742]: I0702 07:56:22.586702 1742 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:56:22.586905 kubelet[1742]: I0702 07:56:22.586741 1742 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:56:22.586905 kubelet[1742]: I0702 07:56:22.586768 1742 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:56:22.603373 kubelet[1742]: W0702 07:56:22.600568 1742 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:22.603591 kubelet[1742]: E0702 07:56:22.603384 1742 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:22.603591 kubelet[1742]: W0702 07:56:22.603528 1742 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:22.603591 kubelet[1742]: E0702 07:56:22.603584 1742 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.128.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:22.603801 kubelet[1742]: I0702 07:56:22.603747 1742 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:56:22.606741 kubelet[1742]: I0702 07:56:22.606700 1742 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:56:22.606921 kubelet[1742]: W0702 07:56:22.606798 1742 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 07:56:22.608127 kubelet[1742]: I0702 07:56:22.607967 1742 server.go:1264] "Started kubelet" Jul 2 07:56:22.608696 kubelet[1742]: I0702 07:56:22.608633 1742 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:56:22.610082 kubelet[1742]: I0702 07:56:22.609987 1742 server.go:455] "Adding debug handlers to kubelet server" Jul 2 07:56:22.620584 kubelet[1742]: I0702 07:56:22.620508 1742 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:56:22.621047 kubelet[1742]: I0702 07:56:22.621026 1742 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:56:22.621501 kubelet[1742]: E0702 07:56:22.621355 1742 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.128.0.47:6443/api/v1/namespaces/default/events\": dial tcp 10.128.0.47:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal.17de5649c90f261e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal,UID:ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal,},FirstTimestamp:2024-07-02 07:56:22.607922718 +0000 UTC m=+0.634566052,LastTimestamp:2024-07-02 07:56:22.607922718 +0000 UTC m=+0.634566052,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal,}" Jul 2 07:56:22.628619 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Jul 2 07:56:22.628945 kubelet[1742]: I0702 07:56:22.628899 1742 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:56:22.636386 kubelet[1742]: E0702 07:56:22.636348 1742 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" not found" Jul 2 07:56:22.637047 kubelet[1742]: I0702 07:56:22.636435 1742 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:56:22.637047 kubelet[1742]: I0702 07:56:22.636592 1742 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 07:56:22.639638 kubelet[1742]: W0702 07:56:22.637856 1742 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.128.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:22.639638 kubelet[1742]: E0702 07:56:22.637949 1742 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get 
"https://10.128.0.47:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:22.639638 kubelet[1742]: E0702 07:56:22.638078 1742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.47:6443: connect: connection refused" interval="200ms" Jul 2 07:56:22.639638 kubelet[1742]: I0702 07:56:22.638366 1742 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:56:22.639638 kubelet[1742]: I0702 07:56:22.638481 1742 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:56:22.641392 kubelet[1742]: E0702 07:56:22.641347 1742 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:56:22.641873 kubelet[1742]: I0702 07:56:22.641839 1742 reconciler.go:26] "Reconciler: start to sync state" Jul 2 07:56:22.643089 kubelet[1742]: I0702 07:56:22.643063 1742 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:56:22.677501 kubelet[1742]: I0702 07:56:22.677476 1742 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:56:22.677788 kubelet[1742]: I0702 07:56:22.677767 1742 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:56:22.678120 kubelet[1742]: I0702 07:56:22.678102 1742 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:56:22.679841 kubelet[1742]: I0702 07:56:22.679792 1742 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv4" Jul 2 07:56:22.681386 kubelet[1742]: I0702 07:56:22.681349 1742 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 07:56:22.681386 kubelet[1742]: I0702 07:56:22.681376 1742 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:56:22.681564 kubelet[1742]: I0702 07:56:22.681402 1742 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 07:56:22.681564 kubelet[1742]: E0702 07:56:22.681499 1742 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:56:22.681827 kubelet[1742]: I0702 07:56:22.681808 1742 policy_none.go:49] "None policy: Start" Jul 2 07:56:22.687696 kubelet[1742]: W0702 07:56:22.687631 1742 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:22.687813 kubelet[1742]: E0702 07:56:22.687712 1742 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:22.688495 kubelet[1742]: I0702 07:56:22.688474 1742 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:56:22.688602 kubelet[1742]: I0702 07:56:22.688508 1742 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:56:22.696105 systemd[1]: Created slice kubepods.slice. Jul 2 07:56:22.702794 systemd[1]: Created slice kubepods-burstable.slice. Jul 2 07:56:22.706966 systemd[1]: Created slice kubepods-besteffort.slice. 
Jul 2 07:56:22.717115 kubelet[1742]: I0702 07:56:22.717085 1742 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:56:22.717751 kubelet[1742]: I0702 07:56:22.717694 1742 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 07:56:22.717954 kubelet[1742]: I0702 07:56:22.717935 1742 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:56:22.720993 kubelet[1742]: E0702 07:56:22.720201 1742 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" not found" Jul 2 07:56:22.744436 kubelet[1742]: I0702 07:56:22.744377 1742 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.744864 kubelet[1742]: E0702 07:56:22.744827 1742 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.47:6443/api/v1/nodes\": dial tcp 10.128.0.47:6443: connect: connection refused" node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.782526 kubelet[1742]: I0702 07:56:22.782140 1742 topology_manager.go:215] "Topology Admit Handler" podUID="b67e07c667a65fb7f141ca1b7fc71f8f" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.788407 kubelet[1742]: I0702 07:56:22.788364 1742 topology_manager.go:215] "Topology Admit Handler" podUID="063fcc115552124588780c861080782a" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.793182 kubelet[1742]: I0702 07:56:22.793143 1742 topology_manager.go:215] "Topology Admit Handler" podUID="06cf132f01b678ae867651e96fc35319" podNamespace="kube-system" 
podName="kube-scheduler-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.800136 systemd[1]: Created slice kubepods-burstable-podb67e07c667a65fb7f141ca1b7fc71f8f.slice. Jul 2 07:56:22.814695 systemd[1]: Created slice kubepods-burstable-pod063fcc115552124588780c861080782a.slice. Jul 2 07:56:22.821284 systemd[1]: Created slice kubepods-burstable-pod06cf132f01b678ae867651e96fc35319.slice. Jul 2 07:56:22.839383 kubelet[1742]: E0702 07:56:22.839316 1742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.47:6443: connect: connection refused" interval="400ms" Jul 2 07:56:22.843820 kubelet[1742]: I0702 07:56:22.843743 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/063fcc115552124588780c861080782a-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"063fcc115552124588780c861080782a\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.844039 kubelet[1742]: I0702 07:56:22.843972 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/063fcc115552124588780c861080782a-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"063fcc115552124588780c861080782a\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.844039 kubelet[1742]: I0702 07:56:22.844018 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/06cf132f01b678ae867651e96fc35319-kubeconfig\") pod \"kube-scheduler-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"06cf132f01b678ae867651e96fc35319\") " pod="kube-system/kube-scheduler-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.844175 kubelet[1742]: I0702 07:56:22.844048 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b67e07c667a65fb7f141ca1b7fc71f8f-ca-certs\") pod \"kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"b67e07c667a65fb7f141ca1b7fc71f8f\") " pod="kube-system/kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.844175 kubelet[1742]: I0702 07:56:22.844081 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/063fcc115552124588780c861080782a-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"063fcc115552124588780c861080782a\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.844175 kubelet[1742]: I0702 07:56:22.844113 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/063fcc115552124588780c861080782a-ca-certs\") pod \"kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"063fcc115552124588780c861080782a\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.844175 kubelet[1742]: I0702 07:56:22.844144 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/063fcc115552124588780c861080782a-flexvolume-dir\") pod \"kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"063fcc115552124588780c861080782a\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.844395 kubelet[1742]: I0702 07:56:22.844180 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b67e07c667a65fb7f141ca1b7fc71f8f-k8s-certs\") pod \"kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"b67e07c667a65fb7f141ca1b7fc71f8f\") " pod="kube-system/kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.844395 kubelet[1742]: I0702 07:56:22.844212 1742 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b67e07c667a65fb7f141ca1b7fc71f8f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"b67e07c667a65fb7f141ca1b7fc71f8f\") " pod="kube-system/kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.950581 kubelet[1742]: I0702 07:56:22.950544 1742 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:22.951116 kubelet[1742]: E0702 07:56:22.951077 1742 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.47:6443/api/v1/nodes\": dial tcp 10.128.0.47:6443: connect: connection refused" node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:23.111951 env[1227]: time="2024-07-02T07:56:23.111786234Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal,Uid:b67e07c667a65fb7f141ca1b7fc71f8f,Namespace:kube-system,Attempt:0,}" Jul 2 07:56:23.119144 env[1227]: time="2024-07-02T07:56:23.118755610Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal,Uid:063fcc115552124588780c861080782a,Namespace:kube-system,Attempt:0,}" Jul 2 07:56:23.124981 env[1227]: time="2024-07-02T07:56:23.124929727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal,Uid:06cf132f01b678ae867651e96fc35319,Namespace:kube-system,Attempt:0,}" Jul 2 07:56:23.241004 kubelet[1742]: E0702 07:56:23.240925 1742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.47:6443: connect: connection refused" interval="800ms" Jul 2 07:56:23.357389 kubelet[1742]: I0702 07:56:23.357350 1742 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:23.357824 kubelet[1742]: E0702 07:56:23.357773 1742 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.128.0.47:6443/api/v1/nodes\": dial tcp 10.128.0.47:6443: connect: connection refused" node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:23.478443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4213955814.mount: Deactivated successfully. 
Jul 2 07:56:23.489925 env[1227]: time="2024-07-02T07:56:23.489857901Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:23.491280 env[1227]: time="2024-07-02T07:56:23.491227705Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:23.495122 env[1227]: time="2024-07-02T07:56:23.495079209Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:23.496688 env[1227]: time="2024-07-02T07:56:23.496648201Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:23.502623 env[1227]: time="2024-07-02T07:56:23.502376553Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:23.503836 env[1227]: time="2024-07-02T07:56:23.503767609Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:23.504854 env[1227]: time="2024-07-02T07:56:23.504816420Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:23.507064 env[1227]: time="2024-07-02T07:56:23.507010331Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 
07:56:23.507966 env[1227]: time="2024-07-02T07:56:23.507923935Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:23.508853 env[1227]: time="2024-07-02T07:56:23.508820030Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:23.513358 env[1227]: time="2024-07-02T07:56:23.513297058Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:23.517793 env[1227]: time="2024-07-02T07:56:23.517743701Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Jul 2 07:56:23.559998 env[1227]: time="2024-07-02T07:56:23.559667733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:56:23.559998 env[1227]: time="2024-07-02T07:56:23.559734152Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:56:23.559998 env[1227]: time="2024-07-02T07:56:23.559772087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:56:23.560495 env[1227]: time="2024-07-02T07:56:23.560425275Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8e62eccb591a65cbe120b20eb5e49b93a4c6606975b52a649fef1d55baded9a3 pid=1780 runtime=io.containerd.runc.v2 Jul 2 07:56:23.592642 env[1227]: time="2024-07-02T07:56:23.592533378Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:56:23.592642 env[1227]: time="2024-07-02T07:56:23.592581040Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:56:23.593024 env[1227]: time="2024-07-02T07:56:23.592607460Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:56:23.594028 env[1227]: time="2024-07-02T07:56:23.593927162Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b2ecdd4ec7c2a040f13342554ac73dc7b8dc004a5da50d18a823d264de13c08b pid=1807 runtime=io.containerd.runc.v2 Jul 2 07:56:23.598178 env[1227]: time="2024-07-02T07:56:23.598058055Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:56:23.598509 env[1227]: time="2024-07-02T07:56:23.598427389Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:56:23.598716 env[1227]: time="2024-07-02T07:56:23.598668107Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:56:23.599469 env[1227]: time="2024-07-02T07:56:23.599396371Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2782c6de0a47ce4b25d4d85a140910dfdf4de867b959897ab260041a5478b729 pid=1813 runtime=io.containerd.runc.v2 Jul 2 07:56:23.605990 systemd[1]: Started cri-containerd-8e62eccb591a65cbe120b20eb5e49b93a4c6606975b52a649fef1d55baded9a3.scope. Jul 2 07:56:23.639098 systemd[1]: Started cri-containerd-b2ecdd4ec7c2a040f13342554ac73dc7b8dc004a5da50d18a823d264de13c08b.scope. Jul 2 07:56:23.680728 systemd[1]: Started cri-containerd-2782c6de0a47ce4b25d4d85a140910dfdf4de867b959897ab260041a5478b729.scope. Jul 2 07:56:23.751734 env[1227]: time="2024-07-02T07:56:23.750475256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal,Uid:063fcc115552124588780c861080782a,Namespace:kube-system,Attempt:0,} returns sandbox id \"8e62eccb591a65cbe120b20eb5e49b93a4c6606975b52a649fef1d55baded9a3\"" Jul 2 07:56:23.754406 kubelet[1742]: E0702 07:56:23.753765 1742 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flat" Jul 2 07:56:23.756959 env[1227]: time="2024-07-02T07:56:23.756852196Z" level=info msg="CreateContainer within sandbox \"8e62eccb591a65cbe120b20eb5e49b93a4c6606975b52a649fef1d55baded9a3\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:56:23.772194 env[1227]: time="2024-07-02T07:56:23.772130978Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal,Uid:b67e07c667a65fb7f141ca1b7fc71f8f,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"b2ecdd4ec7c2a040f13342554ac73dc7b8dc004a5da50d18a823d264de13c08b\"" Jul 2 07:56:23.778526 kubelet[1742]: E0702 07:56:23.778001 1742 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-21291" Jul 2 07:56:23.780030 env[1227]: time="2024-07-02T07:56:23.779956728Z" level=info msg="CreateContainer within sandbox \"b2ecdd4ec7c2a040f13342554ac73dc7b8dc004a5da50d18a823d264de13c08b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:56:23.785068 env[1227]: time="2024-07-02T07:56:23.785017544Z" level=info msg="CreateContainer within sandbox \"8e62eccb591a65cbe120b20eb5e49b93a4c6606975b52a649fef1d55baded9a3\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c8bb6f59807ba4854d02740687372c917a39e2ecc47aba44b3b8e06b395a24f9\"" Jul 2 07:56:23.796358 env[1227]: time="2024-07-02T07:56:23.796304041Z" level=info msg="StartContainer for \"c8bb6f59807ba4854d02740687372c917a39e2ecc47aba44b3b8e06b395a24f9\"" Jul 2 07:56:23.804403 env[1227]: time="2024-07-02T07:56:23.804321297Z" level=info msg="CreateContainer within sandbox \"b2ecdd4ec7c2a040f13342554ac73dc7b8dc004a5da50d18a823d264de13c08b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a78ca89389b7d29f8deae314fea1f2bf4cebe13116d0ac43c3e59ba4f6b9d3b8\"" Jul 2 07:56:23.805564 env[1227]: time="2024-07-02T07:56:23.805526167Z" level=info msg="StartContainer for \"a78ca89389b7d29f8deae314fea1f2bf4cebe13116d0ac43c3e59ba4f6b9d3b8\"" Jul 2 07:56:23.829331 env[1227]: time="2024-07-02T07:56:23.828089955Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal,Uid:06cf132f01b678ae867651e96fc35319,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"2782c6de0a47ce4b25d4d85a140910dfdf4de867b959897ab260041a5478b729\"" Jul 2 07:56:23.832180 kubelet[1742]: E0702 07:56:23.831660 1742 kubelet_pods.go:513] "Hostname for pod was too long, truncated it" podName="kube-scheduler-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" hostnameMaxLen=63 truncatedHostname="kube-scheduler-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-21291" Jul 2 07:56:23.833975 env[1227]: time="2024-07-02T07:56:23.833918951Z" level=info msg="CreateContainer within sandbox \"2782c6de0a47ce4b25d4d85a140910dfdf4de867b959897ab260041a5478b729\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:56:23.836041 kubelet[1742]: W0702 07:56:23.835868 1742 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.128.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:23.836041 kubelet[1742]: E0702 07:56:23.836012 1742 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.128.0.47:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:23.853922 systemd[1]: Started cri-containerd-c8bb6f59807ba4854d02740687372c917a39e2ecc47aba44b3b8e06b395a24f9.scope. Jul 2 07:56:23.860024 systemd[1]: Started cri-containerd-a78ca89389b7d29f8deae314fea1f2bf4cebe13116d0ac43c3e59ba4f6b9d3b8.scope. 
Jul 2 07:56:23.873758 env[1227]: time="2024-07-02T07:56:23.872078226Z" level=info msg="CreateContainer within sandbox \"2782c6de0a47ce4b25d4d85a140910dfdf4de867b959897ab260041a5478b729\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"38138472811a07df268008500a5756414345b217a887997a6b52459678f07062\"" Jul 2 07:56:23.874313 env[1227]: time="2024-07-02T07:56:23.874269888Z" level=info msg="StartContainer for \"38138472811a07df268008500a5756414345b217a887997a6b52459678f07062\"" Jul 2 07:56:23.915514 systemd[1]: Started cri-containerd-38138472811a07df268008500a5756414345b217a887997a6b52459678f07062.scope. Jul 2 07:56:23.996613 env[1227]: time="2024-07-02T07:56:23.996543231Z" level=info msg="StartContainer for \"a78ca89389b7d29f8deae314fea1f2bf4cebe13116d0ac43c3e59ba4f6b9d3b8\" returns successfully" Jul 2 07:56:23.997463 env[1227]: time="2024-07-02T07:56:23.997415619Z" level=info msg="StartContainer for \"c8bb6f59807ba4854d02740687372c917a39e2ecc47aba44b3b8e06b395a24f9\" returns successfully" Jul 2 07:56:24.042003 kubelet[1742]: E0702 07:56:24.041838 1742 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.128.0.47:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal?timeout=10s\": dial tcp 10.128.0.47:6443: connect: connection refused" interval="1.6s" Jul 2 07:56:24.053713 kubelet[1742]: W0702 07:56:24.053629 1742 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.128.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:24.053713 kubelet[1742]: E0702 07:56:24.053722 1742 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
"https://10.128.0.47:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal&limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:24.087927 kubelet[1742]: W0702 07:56:24.087018 1742 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.128.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:24.087927 kubelet[1742]: E0702 07:56:24.087079 1742 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.128.0.47:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.128.0.47:6443: connect: connection refused Jul 2 07:56:24.103116 env[1227]: time="2024-07-02T07:56:24.103052208Z" level=info msg="StartContainer for \"38138472811a07df268008500a5756414345b217a887997a6b52459678f07062\" returns successfully" Jul 2 07:56:24.164241 kubelet[1742]: I0702 07:56:24.164200 1742 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:27.593762 kubelet[1742]: E0702 07:56:27.593698 1742 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" not found" node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:27.601852 kubelet[1742]: I0702 07:56:27.601801 1742 apiserver.go:52] "Watching apiserver" Jul 2 07:56:27.637126 kubelet[1742]: I0702 07:56:27.637085 1742 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 07:56:27.656871 kubelet[1742]: I0702 07:56:27.656815 1742 kubelet_node_status.go:76] "Successfully registered node" node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 
2 07:56:29.439136 systemd[1]: Reloading. Jul 2 07:56:29.510643 kubelet[1742]: W0702 07:56:29.510607 1742 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 2 07:56:29.571507 /usr/lib/systemd/system-generators/torcx-generator[2033]: time="2024-07-02T07:56:29Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.5 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.5 /var/lib/torcx/store]" Jul 2 07:56:29.572137 /usr/lib/systemd/system-generators/torcx-generator[2033]: time="2024-07-02T07:56:29Z" level=info msg="torcx already run" Jul 2 07:56:29.685072 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Jul 2 07:56:29.685100 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Jul 2 07:56:29.712580 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:56:29.880747 systemd[1]: Stopping kubelet.service... Jul 2 07:56:29.903071 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:56:29.903364 systemd[1]: Stopped kubelet.service. Jul 2 07:56:29.903473 systemd[1]: kubelet.service: Consumed 1.109s CPU time. Jul 2 07:56:29.906604 systemd[1]: Starting kubelet.service... Jul 2 07:56:30.175803 systemd[1]: Started kubelet.service. Jul 2 07:56:30.265112 kubelet[2081]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:56:30.265567 kubelet[2081]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:56:30.265644 kubelet[2081]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:56:30.265813 kubelet[2081]: I0702 07:56:30.265774 2081 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:56:30.272578 kubelet[2081]: I0702 07:56:30.272505 2081 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" Jul 2 07:56:30.272578 kubelet[2081]: I0702 07:56:30.272541 2081 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:56:30.272971 kubelet[2081]: I0702 07:56:30.272938 2081 server.go:927] "Client rotation is on, will bootstrap in background" Jul 2 07:56:30.277014 kubelet[2081]: I0702 07:56:30.276982 2081 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jul 2 07:56:30.279648 kubelet[2081]: I0702 07:56:30.279623 2081 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:56:30.290778 kubelet[2081]: I0702 07:56:30.290743 2081 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jul 2 07:56:30.291696 kubelet[2081]: I0702 07:56:30.291641 2081 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:56:30.292310 kubelet[2081]: I0702 07:56:30.291920 2081 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:56:30.292607 kubelet[2081]: I0702 07:56:30.292587 2081 topology_manager.go:138] "Creating 
topology manager with none policy" Jul 2 07:56:30.292737 kubelet[2081]: I0702 07:56:30.292721 2081 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:56:30.292935 kubelet[2081]: I0702 07:56:30.292913 2081 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:56:30.294005 kubelet[2081]: I0702 07:56:30.293214 2081 kubelet.go:400] "Attempting to sync node with API server" Jul 2 07:56:30.294171 kubelet[2081]: I0702 07:56:30.294154 2081 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:56:30.294324 kubelet[2081]: I0702 07:56:30.294311 2081 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:56:30.297963 kubelet[2081]: I0702 07:56:30.297938 2081 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:56:30.305261 kubelet[2081]: I0702 07:56:30.304957 2081 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Jul 2 07:56:30.305261 kubelet[2081]: I0702 07:56:30.305265 2081 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:56:30.309920 kubelet[2081]: I0702 07:56:30.306009 2081 server.go:1264] "Started kubelet" Jul 2 07:56:30.309920 kubelet[2081]: I0702 07:56:30.308856 2081 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:56:30.317411 kubelet[2081]: I0702 07:56:30.317339 2081 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:56:30.318968 kubelet[2081]: I0702 07:56:30.318936 2081 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:56:30.321714 kubelet[2081]: I0702 07:56:30.321686 2081 server.go:455] "Adding debug handlers to kubelet server" Jul 2 07:56:30.322041 kubelet[2081]: I0702 07:56:30.322011 2081 reconciler.go:26] "Reconciler: start to sync state" Jul 2 07:56:30.323220 kubelet[2081]: I0702 07:56:30.321806 2081 desired_state_of_world_populator.go:149] "Desired state populator starts to run" Jul 2 
07:56:30.323462 kubelet[2081]: I0702 07:56:30.323402 2081 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:56:30.323822 kubelet[2081]: I0702 07:56:30.323800 2081 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:56:30.326501 kubelet[2081]: I0702 07:56:30.325490 2081 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:56:30.327465 kubelet[2081]: I0702 07:56:30.327434 2081 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 07:56:30.327465 kubelet[2081]: I0702 07:56:30.327475 2081 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:56:30.327751 kubelet[2081]: I0702 07:56:30.327501 2081 kubelet.go:2337] "Starting kubelet main sync loop" Jul 2 07:56:30.327751 kubelet[2081]: E0702 07:56:30.327568 2081 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:56:30.350178 kubelet[2081]: I0702 07:56:30.350144 2081 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:56:30.350554 kubelet[2081]: I0702 07:56:30.350523 2081 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:56:30.370046 kubelet[2081]: E0702 07:56:30.370008 2081 kubelet.go:1467] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:56:30.373154 kubelet[2081]: I0702 07:56:30.372166 2081 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:56:30.428084 kubelet[2081]: I0702 07:56:30.425996 2081 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:56:30.428364 kubelet[2081]: I0702 07:56:30.428337 2081 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:56:30.428557 kubelet[2081]: I0702 07:56:30.428540 2081 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:56:30.428754 kubelet[2081]: I0702 07:56:30.428728 2081 kubelet_node_status.go:73] "Attempting to register node" node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.430936 kubelet[2081]: E0702 07:56:30.429337 2081 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:56:30.432735 kubelet[2081]: I0702 07:56:30.431706 2081 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:56:30.432735 kubelet[2081]: I0702 07:56:30.431732 2081 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:56:30.432735 kubelet[2081]: I0702 07:56:30.431772 2081 policy_none.go:49] "None policy: Start" Jul 2 07:56:30.433016 kubelet[2081]: I0702 07:56:30.432776 2081 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:56:30.433016 kubelet[2081]: I0702 07:56:30.432803 2081 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:56:30.433132 kubelet[2081]: I0702 07:56:30.433022 2081 state_mem.go:75] "Updated machine memory state" Jul 2 07:56:30.445479 kubelet[2081]: I0702 07:56:30.440262 2081 kubelet_node_status.go:112] "Node was previously registered" node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.445479 kubelet[2081]: I0702 07:56:30.440361 2081 kubelet_node_status.go:76] "Successfully registered node" 
node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.445479 kubelet[2081]: I0702 07:56:30.443599 2081 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:56:30.445479 kubelet[2081]: I0702 07:56:30.444000 2081 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jul 2 07:56:30.449378 kubelet[2081]: I0702 07:56:30.448116 2081 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:56:30.465358 sudo[2111]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Jul 2 07:56:30.466416 sudo[2111]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Jul 2 07:56:30.629553 kubelet[2081]: I0702 07:56:30.629486 2081 topology_manager.go:215] "Topology Admit Handler" podUID="06cf132f01b678ae867651e96fc35319" podNamespace="kube-system" podName="kube-scheduler-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.629760 kubelet[2081]: I0702 07:56:30.629624 2081 topology_manager.go:215] "Topology Admit Handler" podUID="b67e07c667a65fb7f141ca1b7fc71f8f" podNamespace="kube-system" podName="kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.629760 kubelet[2081]: I0702 07:56:30.629720 2081 topology_manager.go:215] "Topology Admit Handler" podUID="063fcc115552124588780c861080782a" podNamespace="kube-system" podName="kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.641259 kubelet[2081]: W0702 07:56:30.641203 2081 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 2 07:56:30.641969 kubelet[2081]: W0702 07:56:30.641942 2081 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in 
surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 2 07:56:30.642113 kubelet[2081]: E0702 07:56:30.642039 2081 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" already exists" pod="kube-system/kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.642566 kubelet[2081]: W0702 07:56:30.642533 2081 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 2 07:56:30.725642 kubelet[2081]: I0702 07:56:30.725483 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/063fcc115552124588780c861080782a-kubeconfig\") pod \"kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"063fcc115552124588780c861080782a\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.725642 kubelet[2081]: I0702 07:56:30.725541 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b67e07c667a65fb7f141ca1b7fc71f8f-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"b67e07c667a65fb7f141ca1b7fc71f8f\") " pod="kube-system/kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.725642 kubelet[2081]: I0702 07:56:30.725576 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/063fcc115552124588780c861080782a-flexvolume-dir\") pod 
\"kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"063fcc115552124588780c861080782a\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.725642 kubelet[2081]: I0702 07:56:30.725606 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b67e07c667a65fb7f141ca1b7fc71f8f-k8s-certs\") pod \"kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"b67e07c667a65fb7f141ca1b7fc71f8f\") " pod="kube-system/kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.726013 kubelet[2081]: I0702 07:56:30.725640 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/063fcc115552124588780c861080782a-ca-certs\") pod \"kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"063fcc115552124588780c861080782a\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.726013 kubelet[2081]: I0702 07:56:30.725669 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/063fcc115552124588780c861080782a-k8s-certs\") pod \"kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"063fcc115552124588780c861080782a\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.726013 kubelet[2081]: I0702 07:56:30.725697 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/063fcc115552124588780c861080782a-usr-share-ca-certificates\") pod 
\"kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"063fcc115552124588780c861080782a\") " pod="kube-system/kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.726013 kubelet[2081]: I0702 07:56:30.725726 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/06cf132f01b678ae867651e96fc35319-kubeconfig\") pod \"kube-scheduler-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"06cf132f01b678ae867651e96fc35319\") " pod="kube-system/kube-scheduler-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:30.726220 kubelet[2081]: I0702 07:56:30.725753 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b67e07c667a65fb7f141ca1b7fc71f8f-ca-certs\") pod \"kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" (UID: \"b67e07c667a65fb7f141ca1b7fc71f8f\") " pod="kube-system/kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:31.181001 sudo[2111]: pam_unix(sudo:session): session closed for user root Jul 2 07:56:31.299633 kubelet[2081]: I0702 07:56:31.299565 2081 apiserver.go:52] "Watching apiserver" Jul 2 07:56:31.323978 kubelet[2081]: I0702 07:56:31.323921 2081 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" Jul 2 07:56:31.400033 kubelet[2081]: W0702 07:56:31.400002 2081 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must be no more than 63 characters must not contain dots] Jul 2 07:56:31.403467 kubelet[2081]: E0702 07:56:31.403415 2081 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" 
already exists" pod="kube-system/kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" Jul 2 07:56:31.431227 kubelet[2081]: I0702 07:56:31.431049 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" podStartSLOduration=2.431021124 podStartE2EDuration="2.431021124s" podCreationTimestamp="2024-07-02 07:56:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:56:31.42882009 +0000 UTC m=+1.244194728" watchObservedRunningTime="2024-07-02 07:56:31.431021124 +0000 UTC m=+1.246395753" Jul 2 07:56:31.454730 kubelet[2081]: I0702 07:56:31.454643 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" podStartSLOduration=1.454615894 podStartE2EDuration="1.454615894s" podCreationTimestamp="2024-07-02 07:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:56:31.444286986 +0000 UTC m=+1.259661624" watchObservedRunningTime="2024-07-02 07:56:31.454615894 +0000 UTC m=+1.269990526" Jul 2 07:56:31.455188 kubelet[2081]: I0702 07:56:31.455142 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" podStartSLOduration=1.455127738 podStartE2EDuration="1.455127738s" podCreationTimestamp="2024-07-02 07:56:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:56:31.453677226 +0000 UTC m=+1.269051855" watchObservedRunningTime="2024-07-02 07:56:31.455127738 +0000 UTC m=+1.270502378" Jul 2 07:56:33.316320 sudo[1411]: pam_unix(sudo:session): 
session closed for user root Jul 2 07:56:33.360331 sshd[1408]: pam_unix(sshd:session): session closed for user core Jul 2 07:56:33.365319 systemd-logind[1236]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:56:33.365611 systemd[1]: sshd@4-10.128.0.47:22-147.75.109.163:40798.service: Deactivated successfully. Jul 2 07:56:33.366724 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:56:33.366868 systemd[1]: session-5.scope: Consumed 6.643s CPU time. Jul 2 07:56:33.368390 systemd-logind[1236]: Removed session 5. Jul 2 07:56:35.787199 update_engine[1215]: I0702 07:56:35.787101 1215 update_attempter.cc:509] Updating boot flags... Jul 2 07:56:45.601710 kubelet[2081]: I0702 07:56:45.601245 2081 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:56:45.603214 env[1227]: time="2024-07-02T07:56:45.603158617Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 07:56:45.604526 kubelet[2081]: I0702 07:56:45.604180 2081 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:56:46.511736 kubelet[2081]: I0702 07:56:46.511648 2081 topology_manager.go:215] "Topology Admit Handler" podUID="eb5c079a-d75b-405f-8d7b-04edc5c0a8ab" podNamespace="kube-system" podName="kube-proxy-7zv65" Jul 2 07:56:46.521115 systemd[1]: Created slice kubepods-besteffort-podeb5c079a_d75b_405f_8d7b_04edc5c0a8ab.slice. 
Jul 2 07:56:46.525766 kubelet[2081]: W0702 07:56:46.525732 2081 reflector.go:547] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal' and this object
Jul 2 07:56:46.526100 kubelet[2081]: E0702 07:56:46.526068 2081 reflector.go:150] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal' and this object
Jul 2 07:56:46.526325 kubelet[2081]: W0702 07:56:46.526306 2081 reflector.go:547] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal' and this object
Jul 2 07:56:46.526461 kubelet[2081]: E0702 07:56:46.526447 2081 reflector.go:150] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal' and this object
Jul 2 07:56:46.529099 kubelet[2081]: I0702 07:56:46.529069 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/eb5c079a-d75b-405f-8d7b-04edc5c0a8ab-kube-proxy\") pod \"kube-proxy-7zv65\" (UID: \"eb5c079a-d75b-405f-8d7b-04edc5c0a8ab\") " pod="kube-system/kube-proxy-7zv65"
Jul 2 07:56:46.529336 kubelet[2081]: I0702 07:56:46.529301 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdcnp\" (UniqueName: \"kubernetes.io/projected/eb5c079a-d75b-405f-8d7b-04edc5c0a8ab-kube-api-access-fdcnp\") pod \"kube-proxy-7zv65\" (UID: \"eb5c079a-d75b-405f-8d7b-04edc5c0a8ab\") " pod="kube-system/kube-proxy-7zv65"
Jul 2 07:56:46.529486 kubelet[2081]: I0702 07:56:46.529465 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eb5c079a-d75b-405f-8d7b-04edc5c0a8ab-xtables-lock\") pod \"kube-proxy-7zv65\" (UID: \"eb5c079a-d75b-405f-8d7b-04edc5c0a8ab\") " pod="kube-system/kube-proxy-7zv65"
Jul 2 07:56:46.529634 kubelet[2081]: I0702 07:56:46.529615 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eb5c079a-d75b-405f-8d7b-04edc5c0a8ab-lib-modules\") pod \"kube-proxy-7zv65\" (UID: \"eb5c079a-d75b-405f-8d7b-04edc5c0a8ab\") " pod="kube-system/kube-proxy-7zv65"
Jul 2 07:56:46.544555 kubelet[2081]: I0702 07:56:46.544504 2081 topology_manager.go:215] "Topology Admit Handler" podUID="44a80c27-3fc3-4d84-920e-71d443f5afc0" podNamespace="kube-system" podName="cilium-wkt52"
Jul 2 07:56:46.556940 systemd[1]: Created slice kubepods-burstable-pod44a80c27_3fc3_4d84_920e_71d443f5afc0.slice.
Jul 2 07:56:46.630786 kubelet[2081]: I0702 07:56:46.630730 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-cilium-run\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.631436 kubelet[2081]: I0702 07:56:46.631397 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-etc-cni-netd\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.631605 kubelet[2081]: I0702 07:56:46.631579 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44a80c27-3fc3-4d84-920e-71d443f5afc0-cilium-config-path\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.631764 kubelet[2081]: I0702 07:56:46.631744 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44a80c27-3fc3-4d84-920e-71d443f5afc0-hubble-tls\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.631955 kubelet[2081]: I0702 07:56:46.631936 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-hostproc\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.632125 kubelet[2081]: I0702 07:56:46.632105 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-cilium-cgroup\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.632274 kubelet[2081]: I0702 07:56:46.632259 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-lib-modules\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.632399 kubelet[2081]: I0702 07:56:46.632381 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-bpf-maps\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.632502 kubelet[2081]: I0702 07:56:46.632490 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44a80c27-3fc3-4d84-920e-71d443f5afc0-clustermesh-secrets\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.632602 kubelet[2081]: I0702 07:56:46.632590 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-cni-path\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.632713 kubelet[2081]: I0702 07:56:46.632701 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-host-proc-sys-net\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.632874 kubelet[2081]: I0702 07:56:46.632855 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v2mdh\" (UniqueName: \"kubernetes.io/projected/44a80c27-3fc3-4d84-920e-71d443f5afc0-kube-api-access-v2mdh\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.633028 kubelet[2081]: I0702 07:56:46.633001 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-xtables-lock\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.633113 kubelet[2081]: I0702 07:56:46.633042 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-host-proc-sys-kernel\") pod \"cilium-wkt52\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " pod="kube-system/cilium-wkt52"
Jul 2 07:56:46.711291 kubelet[2081]: I0702 07:56:46.711231 2081 topology_manager.go:215] "Topology Admit Handler" podUID="00977a18-a311-4fc8-b6e5-93e3844870c6" podNamespace="kube-system" podName="cilium-operator-599987898-xbrcp"
Jul 2 07:56:46.719524 systemd[1]: Created slice kubepods-besteffort-pod00977a18_a311_4fc8_b6e5_93e3844870c6.slice.
Jul 2 07:56:46.735572 kubelet[2081]: I0702 07:56:46.735507 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00977a18-a311-4fc8-b6e5-93e3844870c6-cilium-config-path\") pod \"cilium-operator-599987898-xbrcp\" (UID: \"00977a18-a311-4fc8-b6e5-93e3844870c6\") " pod="kube-system/cilium-operator-599987898-xbrcp"
Jul 2 07:56:46.735911 kubelet[2081]: I0702 07:56:46.735854 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lp6cz\" (UniqueName: \"kubernetes.io/projected/00977a18-a311-4fc8-b6e5-93e3844870c6-kube-api-access-lp6cz\") pod \"cilium-operator-599987898-xbrcp\" (UID: \"00977a18-a311-4fc8-b6e5-93e3844870c6\") " pod="kube-system/cilium-operator-599987898-xbrcp"
Jul 2 07:56:47.634084 kubelet[2081]: E0702 07:56:47.634012 2081 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition
Jul 2 07:56:47.634693 kubelet[2081]: E0702 07:56:47.634169 2081 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb5c079a-d75b-405f-8d7b-04edc5c0a8ab-kube-proxy podName:eb5c079a-d75b-405f-8d7b-04edc5c0a8ab nodeName:}" failed. No retries permitted until 2024-07-02 07:56:48.134135177 +0000 UTC m=+17.949509812 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/eb5c079a-d75b-405f-8d7b-04edc5c0a8ab-kube-proxy") pod "kube-proxy-7zv65" (UID: "eb5c079a-d75b-405f-8d7b-04edc5c0a8ab") : failed to sync configmap cache: timed out waiting for the condition
Jul 2 07:56:47.639408 kubelet[2081]: E0702 07:56:47.639334 2081 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jul 2 07:56:47.639408 kubelet[2081]: E0702 07:56:47.639398 2081 projected.go:200] Error preparing data for projected volume kube-api-access-fdcnp for pod kube-system/kube-proxy-7zv65: failed to sync configmap cache: timed out waiting for the condition
Jul 2 07:56:47.639719 kubelet[2081]: E0702 07:56:47.639506 2081 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eb5c079a-d75b-405f-8d7b-04edc5c0a8ab-kube-api-access-fdcnp podName:eb5c079a-d75b-405f-8d7b-04edc5c0a8ab nodeName:}" failed. No retries permitted until 2024-07-02 07:56:48.139480304 +0000 UTC m=+17.954854943 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-fdcnp" (UniqueName: "kubernetes.io/projected/eb5c079a-d75b-405f-8d7b-04edc5c0a8ab-kube-api-access-fdcnp") pod "kube-proxy-7zv65" (UID: "eb5c079a-d75b-405f-8d7b-04edc5c0a8ab") : failed to sync configmap cache: timed out waiting for the condition
Jul 2 07:56:47.776519 kubelet[2081]: E0702 07:56:47.776464 2081 projected.go:294] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition
Jul 2 07:56:47.776797 kubelet[2081]: E0702 07:56:47.776758 2081 projected.go:200] Error preparing data for projected volume kube-api-access-v2mdh for pod kube-system/cilium-wkt52: failed to sync configmap cache: timed out waiting for the condition
Jul 2 07:56:47.776947 kubelet[2081]: E0702 07:56:47.776868 2081 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/44a80c27-3fc3-4d84-920e-71d443f5afc0-kube-api-access-v2mdh podName:44a80c27-3fc3-4d84-920e-71d443f5afc0 nodeName:}" failed. No retries permitted until 2024-07-02 07:56:48.276839698 +0000 UTC m=+18.092214334 (durationBeforeRetry 500ms).
Error: MountVolume.SetUp failed for volume "kube-api-access-v2mdh" (UniqueName: "kubernetes.io/projected/44a80c27-3fc3-4d84-920e-71d443f5afc0-kube-api-access-v2mdh") pod "cilium-wkt52" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0") : failed to sync configmap cache: timed out waiting for the condition
Jul 2 07:56:47.924856 env[1227]: time="2024-07-02T07:56:47.924685174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xbrcp,Uid:00977a18-a311-4fc8-b6e5-93e3844870c6,Namespace:kube-system,Attempt:0,}"
Jul 2 07:56:47.960755 env[1227]: time="2024-07-02T07:56:47.960643811Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:56:47.960755 env[1227]: time="2024-07-02T07:56:47.960705274Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:56:47.961147 env[1227]: time="2024-07-02T07:56:47.960733744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:56:47.961961 env[1227]: time="2024-07-02T07:56:47.961205605Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772 pid=2174 runtime=io.containerd.runc.v2
Jul 2 07:56:47.991448 systemd[1]: run-containerd-runc-k8s.io-3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772-runc.jTuqML.mount: Deactivated successfully.
Jul 2 07:56:48.000143 systemd[1]: Started cri-containerd-3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772.scope.
Jul 2 07:56:48.065496 env[1227]: time="2024-07-02T07:56:48.065016892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-599987898-xbrcp,Uid:00977a18-a311-4fc8-b6e5-93e3844870c6,Namespace:kube-system,Attempt:0,} returns sandbox id \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\""
Jul 2 07:56:48.069144 env[1227]: time="2024-07-02T07:56:48.067725951Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Jul 2 07:56:48.338378 env[1227]: time="2024-07-02T07:56:48.337772510Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7zv65,Uid:eb5c079a-d75b-405f-8d7b-04edc5c0a8ab,Namespace:kube-system,Attempt:0,}"
Jul 2 07:56:48.362315 env[1227]: time="2024-07-02T07:56:48.362191436Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:56:48.362315 env[1227]: time="2024-07-02T07:56:48.362264561Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:56:48.362315 env[1227]: time="2024-07-02T07:56:48.362284068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:56:48.363136 env[1227]: time="2024-07-02T07:56:48.362969145Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f592345f97442810a63153d7a7759ee9f1e8cefaeac52264650dad9a11cb6f8c pid=2219 runtime=io.containerd.runc.v2
Jul 2 07:56:48.374214 env[1227]: time="2024-07-02T07:56:48.374159155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wkt52,Uid:44a80c27-3fc3-4d84-920e-71d443f5afc0,Namespace:kube-system,Attempt:0,}"
Jul 2 07:56:48.385846 systemd[1]: Started cri-containerd-f592345f97442810a63153d7a7759ee9f1e8cefaeac52264650dad9a11cb6f8c.scope.
Jul 2 07:56:48.408910 env[1227]: time="2024-07-02T07:56:48.408700822Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:56:48.408910 env[1227]: time="2024-07-02T07:56:48.408758669Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:56:48.409248 env[1227]: time="2024-07-02T07:56:48.408776006Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:56:48.409248 env[1227]: time="2024-07-02T07:56:48.409112988Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0 pid=2251 runtime=io.containerd.runc.v2
Jul 2 07:56:48.438115 systemd[1]: Started cri-containerd-7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0.scope.
Jul 2 07:56:48.470691 env[1227]: time="2024-07-02T07:56:48.470054638Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7zv65,Uid:eb5c079a-d75b-405f-8d7b-04edc5c0a8ab,Namespace:kube-system,Attempt:0,} returns sandbox id \"f592345f97442810a63153d7a7759ee9f1e8cefaeac52264650dad9a11cb6f8c\""
Jul 2 07:56:48.477152 env[1227]: time="2024-07-02T07:56:48.477087208Z" level=info msg="CreateContainer within sandbox \"f592345f97442810a63153d7a7759ee9f1e8cefaeac52264650dad9a11cb6f8c\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Jul 2 07:56:48.499627 env[1227]: time="2024-07-02T07:56:48.499490511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-wkt52,Uid:44a80c27-3fc3-4d84-920e-71d443f5afc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\""
Jul 2 07:56:48.506761 env[1227]: time="2024-07-02T07:56:48.506696874Z" level=info msg="CreateContainer within sandbox \"f592345f97442810a63153d7a7759ee9f1e8cefaeac52264650dad9a11cb6f8c\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"708646b962f81cb157a6b93712d218f0a9e05d82f26e5cc8b5807429675293a1\""
Jul 2 07:56:48.508581 env[1227]: time="2024-07-02T07:56:48.507455304Z" level=info msg="StartContainer for \"708646b962f81cb157a6b93712d218f0a9e05d82f26e5cc8b5807429675293a1\""
Jul 2 07:56:48.532358 systemd[1]: Started cri-containerd-708646b962f81cb157a6b93712d218f0a9e05d82f26e5cc8b5807429675293a1.scope.
Jul 2 07:56:48.587063 env[1227]: time="2024-07-02T07:56:48.586995387Z" level=info msg="StartContainer for \"708646b962f81cb157a6b93712d218f0a9e05d82f26e5cc8b5807429675293a1\" returns successfully"
Jul 2 07:56:49.003783 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1970115674.mount: Deactivated successfully.
Jul 2 07:56:49.449818 kubelet[2081]: I0702 07:56:49.449748 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7zv65" podStartSLOduration=3.449706559 podStartE2EDuration="3.449706559s" podCreationTimestamp="2024-07-02 07:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:56:49.449545715 +0000 UTC m=+19.264920355" watchObservedRunningTime="2024-07-02 07:56:49.449706559 +0000 UTC m=+19.265081197"
Jul 2 07:56:49.924284 env[1227]: time="2024-07-02T07:56:49.924205701Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:49.927196 env[1227]: time="2024-07-02T07:56:49.927143617Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:49.929969 env[1227]: time="2024-07-02T07:56:49.929926136Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:49.930944 env[1227]: time="2024-07-02T07:56:49.930873818Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Jul 2 07:56:49.934943 env[1227]: time="2024-07-02T07:56:49.934070152Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Jul 2 07:56:49.937348 env[1227]: time="2024-07-02T07:56:49.937287255Z" level=info msg="CreateContainer within sandbox \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Jul 2 07:56:49.958304 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3964953753.mount: Deactivated successfully.
Jul 2 07:56:49.973488 env[1227]: time="2024-07-02T07:56:49.973420968Z" level=info msg="CreateContainer within sandbox \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\""
Jul 2 07:56:49.975745 env[1227]: time="2024-07-02T07:56:49.974543930Z" level=info msg="StartContainer for \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\""
Jul 2 07:56:50.009014 systemd[1]: Started cri-containerd-cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50.scope.
Jul 2 07:56:50.058194 env[1227]: time="2024-07-02T07:56:50.058129084Z" level=info msg="StartContainer for \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\" returns successfully"
Jul 2 07:56:56.096826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3263284529.mount: Deactivated successfully.
Jul 2 07:56:59.586057 env[1227]: time="2024-07-02T07:56:59.585983261Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:59.589257 env[1227]: time="2024-07-02T07:56:59.589205703Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:59.591694 env[1227]: time="2024-07-02T07:56:59.591647433Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Jul 2 07:56:59.592547 env[1227]: time="2024-07-02T07:56:59.592498509Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Jul 2 07:56:59.600142 env[1227]: time="2024-07-02T07:56:59.600086331Z" level=info msg="CreateContainer within sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Jul 2 07:56:59.624280 env[1227]: time="2024-07-02T07:56:59.624174221Z" level=info msg="CreateContainer within sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9\""
Jul 2 07:56:59.626423 env[1227]: time="2024-07-02T07:56:59.626378175Z" level=info msg="StartContainer for \"5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9\""
Jul 2 07:56:59.663615 systemd[1]: run-containerd-runc-k8s.io-5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9-runc.tazsDd.mount: Deactivated successfully.
Jul 2 07:56:59.668750 systemd[1]: Started cri-containerd-5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9.scope.
Jul 2 07:56:59.712829 env[1227]: time="2024-07-02T07:56:59.712764783Z" level=info msg="StartContainer for \"5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9\" returns successfully"
Jul 2 07:56:59.726132 systemd[1]: cri-containerd-5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9.scope: Deactivated successfully.
Jul 2 07:57:00.581988 kubelet[2081]: I0702 07:57:00.581914 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-operator-599987898-xbrcp" podStartSLOduration=12.716126126 podStartE2EDuration="14.581853879s" podCreationTimestamp="2024-07-02 07:56:46 +0000 UTC" firstStartedPulling="2024-07-02 07:56:48.067029169 +0000 UTC m=+17.882403780" lastFinishedPulling="2024-07-02 07:56:49.932756897 +0000 UTC m=+19.748131533" observedRunningTime="2024-07-02 07:56:50.501145439 +0000 UTC m=+20.316520070" watchObservedRunningTime="2024-07-02 07:57:00.581853879 +0000 UTC m=+30.397228511"
Jul 2 07:57:00.616250 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9-rootfs.mount: Deactivated successfully.
Jul 2 07:57:01.805820 env[1227]: time="2024-07-02T07:57:01.805504282Z" level=info msg="shim disconnected" id=5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9
Jul 2 07:57:01.805820 env[1227]: time="2024-07-02T07:57:01.805573118Z" level=warning msg="cleaning up after shim disconnected" id=5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9 namespace=k8s.io
Jul 2 07:57:01.805820 env[1227]: time="2024-07-02T07:57:01.805590278Z" level=info msg="cleaning up dead shim"
Jul 2 07:57:01.817283 env[1227]: time="2024-07-02T07:57:01.817213866Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:57:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2545 runtime=io.containerd.runc.v2\n"
Jul 2 07:57:02.567658 env[1227]: time="2024-07-02T07:57:02.567599835Z" level=info msg="CreateContainer within sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Jul 2 07:57:02.588582 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3322640760.mount: Deactivated successfully.
Jul 2 07:57:02.599795 env[1227]: time="2024-07-02T07:57:02.599725264Z" level=info msg="CreateContainer within sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361\""
Jul 2 07:57:02.600847 env[1227]: time="2024-07-02T07:57:02.600798308Z" level=info msg="StartContainer for \"878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361\""
Jul 2 07:57:02.640375 systemd[1]: Started cri-containerd-878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361.scope.
Jul 2 07:57:02.691407 env[1227]: time="2024-07-02T07:57:02.691342656Z" level=info msg="StartContainer for \"878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361\" returns successfully"
Jul 2 07:57:02.703266 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jul 2 07:57:02.704640 systemd[1]: Stopped systemd-sysctl.service.
Jul 2 07:57:02.705220 systemd[1]: Stopping systemd-sysctl.service...
Jul 2 07:57:02.709025 systemd[1]: Starting systemd-sysctl.service...
Jul 2 07:57:02.710383 systemd[1]: cri-containerd-878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361.scope: Deactivated successfully.
Jul 2 07:57:02.724967 systemd[1]: Finished systemd-sysctl.service.
Jul 2 07:57:02.749917 env[1227]: time="2024-07-02T07:57:02.749823588Z" level=info msg="shim disconnected" id=878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361
Jul 2 07:57:02.750259 env[1227]: time="2024-07-02T07:57:02.750037705Z" level=warning msg="cleaning up after shim disconnected" id=878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361 namespace=k8s.io
Jul 2 07:57:02.750259 env[1227]: time="2024-07-02T07:57:02.750062360Z" level=info msg="cleaning up dead shim"
Jul 2 07:57:02.764399 env[1227]: time="2024-07-02T07:57:02.764340982Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:57:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2607 runtime=io.containerd.runc.v2\n"
Jul 2 07:57:03.574509 env[1227]: time="2024-07-02T07:57:03.574445792Z" level=info msg="CreateContainer within sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 07:57:03.584612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361-rootfs.mount: Deactivated successfully.
Jul 2 07:57:03.611977 env[1227]: time="2024-07-02T07:57:03.611904528Z" level=info msg="CreateContainer within sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f\""
Jul 2 07:57:03.613920 env[1227]: time="2024-07-02T07:57:03.612903658Z" level=info msg="StartContainer for \"69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f\""
Jul 2 07:57:03.653288 systemd[1]: Started cri-containerd-69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f.scope.
Jul 2 07:57:03.710005 env[1227]: time="2024-07-02T07:57:03.709873064Z" level=info msg="StartContainer for \"69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f\" returns successfully"
Jul 2 07:57:03.713271 systemd[1]: cri-containerd-69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f.scope: Deactivated successfully.
Jul 2 07:57:03.747359 env[1227]: time="2024-07-02T07:57:03.747287988Z" level=info msg="shim disconnected" id=69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f
Jul 2 07:57:03.747359 env[1227]: time="2024-07-02T07:57:03.747349798Z" level=warning msg="cleaning up after shim disconnected" id=69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f namespace=k8s.io
Jul 2 07:57:03.747359 env[1227]: time="2024-07-02T07:57:03.747364379Z" level=info msg="cleaning up dead shim"
Jul 2 07:57:03.759991 env[1227]: time="2024-07-02T07:57:03.759926894Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:57:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2668 runtime=io.containerd.runc.v2\n"
Jul 2 07:57:04.578337 env[1227]: time="2024-07-02T07:57:04.577988851Z" level=info msg="CreateContainer within sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 07:57:04.584597 systemd[1]: run-containerd-runc-k8s.io-69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f-runc.n2GNlf.mount: Deactivated successfully.
Jul 2 07:57:04.584771 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f-rootfs.mount: Deactivated successfully.
Jul 2 07:57:04.604435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3924339371.mount: Deactivated successfully.
Jul 2 07:57:04.616994 env[1227]: time="2024-07-02T07:57:04.616925233Z" level=info msg="CreateContainer within sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289\""
Jul 2 07:57:04.618122 env[1227]: time="2024-07-02T07:57:04.618073068Z" level=info msg="StartContainer for \"dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289\""
Jul 2 07:57:04.680026 systemd[1]: Started cri-containerd-dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289.scope.
Jul 2 07:57:04.769541 systemd[1]: cri-containerd-dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289.scope: Deactivated successfully.
Jul 2 07:57:04.776171 env[1227]: time="2024-07-02T07:57:04.776078355Z" level=info msg="StartContainer for \"dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289\" returns successfully"
Jul 2 07:57:04.812838 env[1227]: time="2024-07-02T07:57:04.812765549Z" level=info msg="shim disconnected" id=dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289
Jul 2 07:57:04.813393 env[1227]: time="2024-07-02T07:57:04.813359625Z" level=warning msg="cleaning up after shim disconnected" id=dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289 namespace=k8s.io
Jul 2 07:57:04.813542 env[1227]: time="2024-07-02T07:57:04.813519977Z" level=info msg="cleaning up dead shim"
Jul 2 07:57:04.826190 env[1227]: time="2024-07-02T07:57:04.826126964Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:57:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2724 runtime=io.containerd.runc.v2\n"
Jul 2 07:57:05.584178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289-rootfs.mount: Deactivated successfully.
Jul 2 07:57:05.595046 env[1227]: time="2024-07-02T07:57:05.594965580Z" level=info msg="CreateContainer within sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 07:57:05.621775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount249473120.mount: Deactivated successfully.
Jul 2 07:57:05.634874 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3983418330.mount: Deactivated successfully.
Jul 2 07:57:05.636486 env[1227]: time="2024-07-02T07:57:05.636443850Z" level=info msg="CreateContainer within sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\"" Jul 2 07:57:05.639091 env[1227]: time="2024-07-02T07:57:05.637566781Z" level=info msg="StartContainer for \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\"" Jul 2 07:57:05.663493 systemd[1]: Started cri-containerd-33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85.scope. Jul 2 07:57:05.718249 env[1227]: time="2024-07-02T07:57:05.718170975Z" level=info msg="StartContainer for \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\" returns successfully" Jul 2 07:57:05.882629 kubelet[2081]: I0702 07:57:05.882207 2081 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 07:57:05.911972 kubelet[2081]: I0702 07:57:05.911918 2081 topology_manager.go:215] "Topology Admit Handler" podUID="9baaa41c-2ca2-482c-8f46-ff878a748b54" podNamespace="kube-system" podName="coredns-7db6d8ff4d-7kxch" Jul 2 07:57:05.921108 systemd[1]: Created slice kubepods-burstable-pod9baaa41c_2ca2_482c_8f46_ff878a748b54.slice. Jul 2 07:57:05.935416 kubelet[2081]: I0702 07:57:05.935370 2081 topology_manager.go:215] "Topology Admit Handler" podUID="7975776e-79b6-4a0e-9f03-52ab481dc130" podNamespace="kube-system" podName="coredns-7db6d8ff4d-5lz68" Jul 2 07:57:05.944095 systemd[1]: Created slice kubepods-burstable-pod7975776e_79b6_4a0e_9f03_52ab481dc130.slice. 
Jul 2 07:57:06.080235 kubelet[2081]: I0702 07:57:06.080184 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7975776e-79b6-4a0e-9f03-52ab481dc130-config-volume\") pod \"coredns-7db6d8ff4d-5lz68\" (UID: \"7975776e-79b6-4a0e-9f03-52ab481dc130\") " pod="kube-system/coredns-7db6d8ff4d-5lz68"
Jul 2 07:57:06.080235 kubelet[2081]: I0702 07:57:06.080240 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlkzk\" (UniqueName: \"kubernetes.io/projected/7975776e-79b6-4a0e-9f03-52ab481dc130-kube-api-access-wlkzk\") pod \"coredns-7db6d8ff4d-5lz68\" (UID: \"7975776e-79b6-4a0e-9f03-52ab481dc130\") " pod="kube-system/coredns-7db6d8ff4d-5lz68"
Jul 2 07:57:06.080632 kubelet[2081]: I0702 07:57:06.080276 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tm56p\" (UniqueName: \"kubernetes.io/projected/9baaa41c-2ca2-482c-8f46-ff878a748b54-kube-api-access-tm56p\") pod \"coredns-7db6d8ff4d-7kxch\" (UID: \"9baaa41c-2ca2-482c-8f46-ff878a748b54\") " pod="kube-system/coredns-7db6d8ff4d-7kxch"
Jul 2 07:57:06.080632 kubelet[2081]: I0702 07:57:06.080302 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9baaa41c-2ca2-482c-8f46-ff878a748b54-config-volume\") pod \"coredns-7db6d8ff4d-7kxch\" (UID: \"9baaa41c-2ca2-482c-8f46-ff878a748b54\") " pod="kube-system/coredns-7db6d8ff4d-7kxch"
Jul 2 07:57:06.227549 env[1227]: time="2024-07-02T07:57:06.226643965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7kxch,Uid:9baaa41c-2ca2-482c-8f46-ff878a748b54,Namespace:kube-system,Attempt:0,}"
Jul 2 07:57:06.271746 env[1227]: time="2024-07-02T07:57:06.271689781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5lz68,Uid:7975776e-79b6-4a0e-9f03-52ab481dc130,Namespace:kube-system,Attempt:0,}"
Jul 2 07:57:08.077322 systemd-networkd[1029]: cilium_host: Link UP
Jul 2 07:57:08.085802 systemd-networkd[1029]: cilium_net: Link UP
Jul 2 07:57:08.086086 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Jul 2 07:57:08.085820 systemd-networkd[1029]: cilium_net: Gained carrier
Jul 2 07:57:08.086806 systemd-networkd[1029]: cilium_host: Gained carrier
Jul 2 07:57:08.219252 systemd-networkd[1029]: cilium_net: Gained IPv6LL
Jul 2 07:57:08.243118 systemd-networkd[1029]: cilium_vxlan: Link UP
Jul 2 07:57:08.243129 systemd-networkd[1029]: cilium_vxlan: Gained carrier
Jul 2 07:57:08.540932 kernel: NET: Registered PF_ALG protocol family
Jul 2 07:57:09.091067 systemd-networkd[1029]: cilium_host: Gained IPv6LL
Jul 2 07:57:09.437344 systemd-networkd[1029]: lxc_health: Link UP
Jul 2 07:57:09.456585 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 07:57:09.456969 systemd-networkd[1029]: lxc_health: Gained carrier
Jul 2 07:57:09.475082 systemd-networkd[1029]: cilium_vxlan: Gained IPv6LL
Jul 2 07:57:09.802295 systemd-networkd[1029]: lxc8b7335ac849a: Link UP
Jul 2 07:57:09.811923 kernel: eth0: renamed from tmpd29ab
Jul 2 07:57:09.827944 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc8b7335ac849a: link becomes ready
Jul 2 07:57:09.833356 systemd-networkd[1029]: lxc8b7335ac849a: Gained carrier
Jul 2 07:57:09.859827 systemd-networkd[1029]: lxc01406528c479: Link UP
Jul 2 07:57:09.872969 kernel: eth0: renamed from tmpd0c47
Jul 2 07:57:09.884164 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc01406528c479: link becomes ready
Jul 2 07:57:09.887615 systemd-networkd[1029]: lxc01406528c479: Gained carrier
Jul 2 07:57:10.408905 kubelet[2081]: I0702 07:57:10.408806 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-wkt52" podStartSLOduration=13.315672982 podStartE2EDuration="24.408779128s" podCreationTimestamp="2024-07-02 07:56:46 +0000 UTC" firstStartedPulling="2024-07-02 07:56:48.501234359 +0000 UTC m=+18.316608972" lastFinishedPulling="2024-07-02 07:56:59.594340485 +0000 UTC m=+29.409715118" observedRunningTime="2024-07-02 07:57:06.618448042 +0000 UTC m=+36.433822691" watchObservedRunningTime="2024-07-02 07:57:10.408779128 +0000 UTC m=+40.224153767"
Jul 2 07:57:10.947122 systemd-networkd[1029]: lxc8b7335ac849a: Gained IPv6LL
Jul 2 07:57:11.075185 systemd-networkd[1029]: lxc_health: Gained IPv6LL
Jul 2 07:57:11.331592 systemd-networkd[1029]: lxc01406528c479: Gained IPv6LL
Jul 2 07:57:15.180599 env[1227]: time="2024-07-02T07:57:15.180111078Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:57:15.180599 env[1227]: time="2024-07-02T07:57:15.180168791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:57:15.180599 env[1227]: time="2024-07-02T07:57:15.180188918Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:57:15.180599 env[1227]: time="2024-07-02T07:57:15.180369748Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d0c473c0be404b370b3857cb3826758da8ce6c3165fa8e3a8dfab1498ac99756 pid=3261 runtime=io.containerd.runc.v2
Jul 2 07:57:15.207252 env[1227]: time="2024-07-02T07:57:15.207110815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jul 2 07:57:15.207602 env[1227]: time="2024-07-02T07:57:15.207517346Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jul 2 07:57:15.207840 env[1227]: time="2024-07-02T07:57:15.207777101Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jul 2 07:57:15.208393 env[1227]: time="2024-07-02T07:57:15.208337466Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d29ab24968294262b355de08e92ba1885f5751e1e53252a39a8aeeeaa0b15b5c pid=3283 runtime=io.containerd.runc.v2
Jul 2 07:57:15.245934 systemd[1]: Started cri-containerd-d0c473c0be404b370b3857cb3826758da8ce6c3165fa8e3a8dfab1498ac99756.scope.
Jul 2 07:57:15.291292 systemd[1]: Started cri-containerd-d29ab24968294262b355de08e92ba1885f5751e1e53252a39a8aeeeaa0b15b5c.scope.
Jul 2 07:57:15.401701 env[1227]: time="2024-07-02T07:57:15.401571295Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-7kxch,Uid:9baaa41c-2ca2-482c-8f46-ff878a748b54,Namespace:kube-system,Attempt:0,} returns sandbox id \"d29ab24968294262b355de08e92ba1885f5751e1e53252a39a8aeeeaa0b15b5c\""
Jul 2 07:57:15.417945 env[1227]: time="2024-07-02T07:57:15.417440487Z" level=info msg="CreateContainer within sandbox \"d29ab24968294262b355de08e92ba1885f5751e1e53252a39a8aeeeaa0b15b5c\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 07:57:15.425901 env[1227]: time="2024-07-02T07:57:15.425835470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-5lz68,Uid:7975776e-79b6-4a0e-9f03-52ab481dc130,Namespace:kube-system,Attempt:0,} returns sandbox id \"d0c473c0be404b370b3857cb3826758da8ce6c3165fa8e3a8dfab1498ac99756\""
Jul 2 07:57:15.434591 env[1227]: time="2024-07-02T07:57:15.432987963Z" level=info msg="CreateContainer within sandbox \"d0c473c0be404b370b3857cb3826758da8ce6c3165fa8e3a8dfab1498ac99756\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jul 2 07:57:15.462837 env[1227]: time="2024-07-02T07:57:15.462753301Z" level=info msg="CreateContainer within sandbox \"d29ab24968294262b355de08e92ba1885f5751e1e53252a39a8aeeeaa0b15b5c\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9647c1c90cc59690a24388cf61fc075f598ead97b71d925e4a850bd5e7852a13\""
Jul 2 07:57:15.463979 env[1227]: time="2024-07-02T07:57:15.463912632Z" level=info msg="StartContainer for \"9647c1c90cc59690a24388cf61fc075f598ead97b71d925e4a850bd5e7852a13\""
Jul 2 07:57:15.475242 env[1227]: time="2024-07-02T07:57:15.474636232Z" level=info msg="CreateContainer within sandbox \"d0c473c0be404b370b3857cb3826758da8ce6c3165fa8e3a8dfab1498ac99756\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3925e5b9b6dcb7c67362fbef141d93b543f47a84a2da1b2b0e3cca32f2fe75ef\""
Jul 2 07:57:15.475590 env[1227]: time="2024-07-02T07:57:15.475539677Z" level=info msg="StartContainer for \"3925e5b9b6dcb7c67362fbef141d93b543f47a84a2da1b2b0e3cca32f2fe75ef\""
Jul 2 07:57:15.527942 systemd[1]: Started cri-containerd-9647c1c90cc59690a24388cf61fc075f598ead97b71d925e4a850bd5e7852a13.scope.
Jul 2 07:57:15.551802 systemd[1]: Started cri-containerd-3925e5b9b6dcb7c67362fbef141d93b543f47a84a2da1b2b0e3cca32f2fe75ef.scope.
Jul 2 07:57:15.655083 env[1227]: time="2024-07-02T07:57:15.655016431Z" level=info msg="StartContainer for \"9647c1c90cc59690a24388cf61fc075f598ead97b71d925e4a850bd5e7852a13\" returns successfully"
Jul 2 07:57:15.666027 env[1227]: time="2024-07-02T07:57:15.665962769Z" level=info msg="StartContainer for \"3925e5b9b6dcb7c67362fbef141d93b543f47a84a2da1b2b0e3cca32f2fe75ef\" returns successfully"
Jul 2 07:57:16.192988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1272825397.mount: Deactivated successfully.
Jul 2 07:57:16.666228 kubelet[2081]: I0702 07:57:16.666156 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-7kxch" podStartSLOduration=30.666129112 podStartE2EDuration="30.666129112s" podCreationTimestamp="2024-07-02 07:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:57:16.662540287 +0000 UTC m=+46.477914926" watchObservedRunningTime="2024-07-02 07:57:16.666129112 +0000 UTC m=+46.481503751"
Jul 2 07:57:16.666842 kubelet[2081]: I0702 07:57:16.666328 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-5lz68" podStartSLOduration=30.666316941 podStartE2EDuration="30.666316941s" podCreationTimestamp="2024-07-02 07:56:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:57:16.644224873 +0000 UTC m=+46.459599511" watchObservedRunningTime="2024-07-02 07:57:16.666316941 +0000 UTC m=+46.481691582"
Jul 2 07:57:23.945626 systemd[1]: Started sshd@5-10.128.0.47:22-147.75.109.163:49744.service.
Jul 2 07:57:24.238855 sshd[3424]: Accepted publickey for core from 147.75.109.163 port 49744 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:57:24.241307 sshd[3424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:24.251152 systemd[1]: Started session-6.scope.
Jul 2 07:57:24.252443 systemd-logind[1236]: New session 6 of user core.
Jul 2 07:57:24.547481 sshd[3424]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:24.552664 systemd[1]: sshd@5-10.128.0.47:22-147.75.109.163:49744.service: Deactivated successfully.
Jul 2 07:57:24.553858 systemd[1]: session-6.scope: Deactivated successfully.
Jul 2 07:57:24.555822 systemd-logind[1236]: Session 6 logged out. Waiting for processes to exit.
Jul 2 07:57:24.557621 systemd-logind[1236]: Removed session 6.
Jul 2 07:57:29.595400 systemd[1]: Started sshd@6-10.128.0.47:22-147.75.109.163:49754.service.
Jul 2 07:57:29.893601 sshd[3441]: Accepted publickey for core from 147.75.109.163 port 49754 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:57:29.895709 sshd[3441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:29.902796 systemd[1]: Started session-7.scope.
Jul 2 07:57:29.903642 systemd-logind[1236]: New session 7 of user core.
Jul 2 07:57:30.189795 sshd[3441]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:30.195559 systemd[1]: sshd@6-10.128.0.47:22-147.75.109.163:49754.service: Deactivated successfully.
Jul 2 07:57:30.196696 systemd[1]: session-7.scope: Deactivated successfully.
Jul 2 07:57:30.197006 systemd-logind[1236]: Session 7 logged out. Waiting for processes to exit.
Jul 2 07:57:30.198634 systemd-logind[1236]: Removed session 7.
Jul 2 07:57:35.237095 systemd[1]: Started sshd@7-10.128.0.47:22-147.75.109.163:40900.service.
Jul 2 07:57:35.530315 sshd[3455]: Accepted publickey for core from 147.75.109.163 port 40900 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:57:35.532226 sshd[3455]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:35.539556 systemd-logind[1236]: New session 8 of user core.
Jul 2 07:57:35.540276 systemd[1]: Started session-8.scope.
Jul 2 07:57:35.814382 sshd[3455]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:35.819872 systemd[1]: sshd@7-10.128.0.47:22-147.75.109.163:40900.service: Deactivated successfully.
Jul 2 07:57:35.821001 systemd-logind[1236]: Session 8 logged out. Waiting for processes to exit.
Jul 2 07:57:35.821799 systemd[1]: session-8.scope: Deactivated successfully.
Jul 2 07:57:35.823062 systemd-logind[1236]: Removed session 8.
Jul 2 07:57:40.860945 systemd[1]: Started sshd@8-10.128.0.47:22-147.75.109.163:40916.service.
Jul 2 07:57:41.157650 sshd[3470]: Accepted publickey for core from 147.75.109.163 port 40916 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:57:41.160105 sshd[3470]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:41.167736 systemd[1]: Started session-9.scope.
Jul 2 07:57:41.168745 systemd-logind[1236]: New session 9 of user core.
Jul 2 07:57:41.451830 sshd[3470]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:41.456894 systemd-logind[1236]: Session 9 logged out. Waiting for processes to exit.
Jul 2 07:57:41.457377 systemd[1]: sshd@8-10.128.0.47:22-147.75.109.163:40916.service: Deactivated successfully.
Jul 2 07:57:41.458487 systemd[1]: session-9.scope: Deactivated successfully.
Jul 2 07:57:41.460025 systemd-logind[1236]: Removed session 9.
Jul 2 07:57:41.498665 systemd[1]: Started sshd@9-10.128.0.47:22-147.75.109.163:40922.service.
Jul 2 07:57:41.792205 sshd[3483]: Accepted publickey for core from 147.75.109.163 port 40922 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:57:41.794493 sshd[3483]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:41.802242 systemd[1]: Started session-10.scope.
Jul 2 07:57:41.802864 systemd-logind[1236]: New session 10 of user core.
Jul 2 07:57:42.142066 sshd[3483]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:42.148625 systemd[1]: sshd@9-10.128.0.47:22-147.75.109.163:40922.service: Deactivated successfully.
Jul 2 07:57:42.149854 systemd[1]: session-10.scope: Deactivated successfully.
Jul 2 07:57:42.150744 systemd-logind[1236]: Session 10 logged out. Waiting for processes to exit.
Jul 2 07:57:42.152112 systemd-logind[1236]: Removed session 10.
Jul 2 07:57:42.190646 systemd[1]: Started sshd@10-10.128.0.47:22-147.75.109.163:40932.service.
Jul 2 07:57:42.488356 sshd[3492]: Accepted publickey for core from 147.75.109.163 port 40932 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:57:42.490194 sshd[3492]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:42.497342 systemd[1]: Started session-11.scope.
Jul 2 07:57:42.498273 systemd-logind[1236]: New session 11 of user core.
Jul 2 07:57:42.776330 sshd[3492]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:42.781474 systemd[1]: sshd@10-10.128.0.47:22-147.75.109.163:40932.service: Deactivated successfully.
Jul 2 07:57:42.782543 systemd[1]: session-11.scope: Deactivated successfully.
Jul 2 07:57:42.783914 systemd-logind[1236]: Session 11 logged out. Waiting for processes to exit.
Jul 2 07:57:42.785400 systemd-logind[1236]: Removed session 11.
Jul 2 07:57:47.825507 systemd[1]: Started sshd@11-10.128.0.47:22-147.75.109.163:34144.service.
Jul 2 07:57:48.124706 sshd[3504]: Accepted publickey for core from 147.75.109.163 port 34144 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:57:48.125502 sshd[3504]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:48.134306 systemd[1]: Started session-12.scope.
Jul 2 07:57:48.134948 systemd-logind[1236]: New session 12 of user core.
Jul 2 07:57:48.417416 sshd[3504]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:48.422680 systemd[1]: sshd@11-10.128.0.47:22-147.75.109.163:34144.service: Deactivated successfully.
Jul 2 07:57:48.423704 systemd[1]: session-12.scope: Deactivated successfully.
Jul 2 07:57:48.424318 systemd-logind[1236]: Session 12 logged out. Waiting for processes to exit.
Jul 2 07:57:48.425666 systemd-logind[1236]: Removed session 12.
Jul 2 07:57:53.462913 systemd[1]: Started sshd@12-10.128.0.47:22-147.75.109.163:39470.service.
Jul 2 07:57:53.752748 sshd[3518]: Accepted publickey for core from 147.75.109.163 port 39470 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:57:53.754929 sshd[3518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:53.762032 systemd[1]: Started session-13.scope.
Jul 2 07:57:53.762695 systemd-logind[1236]: New session 13 of user core.
Jul 2 07:57:54.035830 sshd[3518]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:54.041029 systemd[1]: sshd@12-10.128.0.47:22-147.75.109.163:39470.service: Deactivated successfully.
Jul 2 07:57:54.042223 systemd[1]: session-13.scope: Deactivated successfully.
Jul 2 07:57:54.043404 systemd-logind[1236]: Session 13 logged out. Waiting for processes to exit.
Jul 2 07:57:54.044628 systemd-logind[1236]: Removed session 13.
Jul 2 07:57:54.082603 systemd[1]: Started sshd@13-10.128.0.47:22-147.75.109.163:39474.service.
Jul 2 07:57:54.375033 sshd[3530]: Accepted publickey for core from 147.75.109.163 port 39474 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:57:54.377093 sshd[3530]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:54.383962 systemd-logind[1236]: New session 14 of user core.
Jul 2 07:57:54.384132 systemd[1]: Started session-14.scope.
Jul 2 07:57:54.736732 sshd[3530]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:54.741642 systemd[1]: sshd@13-10.128.0.47:22-147.75.109.163:39474.service: Deactivated successfully.
Jul 2 07:57:54.742779 systemd[1]: session-14.scope: Deactivated successfully.
Jul 2 07:57:54.743735 systemd-logind[1236]: Session 14 logged out. Waiting for processes to exit.
Jul 2 07:57:54.746463 systemd-logind[1236]: Removed session 14.
Jul 2 07:57:54.784722 systemd[1]: Started sshd@14-10.128.0.47:22-147.75.109.163:39486.service.
Jul 2 07:57:55.083624 sshd[3540]: Accepted publickey for core from 147.75.109.163 port 39486 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:57:55.085311 sshd[3540]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:55.092404 systemd[1]: Started session-15.scope.
Jul 2 07:57:55.093590 systemd-logind[1236]: New session 15 of user core.
Jul 2 07:57:56.914943 sshd[3540]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:56.920594 systemd[1]: sshd@14-10.128.0.47:22-147.75.109.163:39486.service: Deactivated successfully.
Jul 2 07:57:56.921801 systemd[1]: session-15.scope: Deactivated successfully.
Jul 2 07:57:56.922526 systemd-logind[1236]: Session 15 logged out. Waiting for processes to exit.
Jul 2 07:57:56.925651 systemd-logind[1236]: Removed session 15.
Jul 2 07:57:56.960509 systemd[1]: Started sshd@15-10.128.0.47:22-147.75.109.163:39498.service.
Jul 2 07:57:57.254833 sshd[3557]: Accepted publickey for core from 147.75.109.163 port 39498 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:57:57.257108 sshd[3557]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:57.264497 systemd-logind[1236]: New session 16 of user core.
Jul 2 07:57:57.265576 systemd[1]: Started session-16.scope.
Jul 2 07:57:57.675922 sshd[3557]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:57.682300 systemd[1]: sshd@15-10.128.0.47:22-147.75.109.163:39498.service: Deactivated successfully.
Jul 2 07:57:57.683530 systemd[1]: session-16.scope: Deactivated successfully.
Jul 2 07:57:57.683810 systemd-logind[1236]: Session 16 logged out. Waiting for processes to exit.
Jul 2 07:57:57.685400 systemd-logind[1236]: Removed session 16.
Jul 2 07:57:57.721978 systemd[1]: Started sshd@16-10.128.0.47:22-147.75.109.163:39510.service.
Jul 2 07:57:58.014515 sshd[3567]: Accepted publickey for core from 147.75.109.163 port 39510 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:57:58.016084 sshd[3567]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:57:58.023623 systemd[1]: Started session-17.scope.
Jul 2 07:57:58.024607 systemd-logind[1236]: New session 17 of user core.
Jul 2 07:57:58.302430 sshd[3567]: pam_unix(sshd:session): session closed for user core
Jul 2 07:57:58.307202 systemd-logind[1236]: Session 17 logged out. Waiting for processes to exit.
Jul 2 07:57:58.307664 systemd[1]: sshd@16-10.128.0.47:22-147.75.109.163:39510.service: Deactivated successfully.
Jul 2 07:57:58.308826 systemd[1]: session-17.scope: Deactivated successfully.
Jul 2 07:57:58.310394 systemd-logind[1236]: Removed session 17.
Jul 2 07:58:03.350599 systemd[1]: Started sshd@17-10.128.0.47:22-147.75.109.163:47186.service.
Jul 2 07:58:03.647254 sshd[3579]: Accepted publickey for core from 147.75.109.163 port 47186 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:58:03.649283 sshd[3579]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:03.656682 systemd[1]: Started session-18.scope.
Jul 2 07:58:03.657652 systemd-logind[1236]: New session 18 of user core.
Jul 2 07:58:03.933934 sshd[3579]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:03.938832 systemd[1]: sshd@17-10.128.0.47:22-147.75.109.163:47186.service: Deactivated successfully.
Jul 2 07:58:03.940014 systemd[1]: session-18.scope: Deactivated successfully.
Jul 2 07:58:03.941516 systemd-logind[1236]: Session 18 logged out. Waiting for processes to exit.
Jul 2 07:58:03.942956 systemd-logind[1236]: Removed session 18.
Jul 2 07:58:08.980863 systemd[1]: Started sshd@18-10.128.0.47:22-147.75.109.163:47196.service.
Jul 2 07:58:09.275173 sshd[3594]: Accepted publickey for core from 147.75.109.163 port 47196 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:58:09.277013 sshd[3594]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:09.284528 systemd[1]: Started session-19.scope.
Jul 2 07:58:09.285459 systemd-logind[1236]: New session 19 of user core.
Jul 2 07:58:09.558644 sshd[3594]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:09.563733 systemd[1]: sshd@18-10.128.0.47:22-147.75.109.163:47196.service: Deactivated successfully.
Jul 2 07:58:09.565037 systemd[1]: session-19.scope: Deactivated successfully.
Jul 2 07:58:09.566044 systemd-logind[1236]: Session 19 logged out. Waiting for processes to exit.
Jul 2 07:58:09.567585 systemd-logind[1236]: Removed session 19.
Jul 2 07:58:14.605749 systemd[1]: Started sshd@19-10.128.0.47:22-147.75.109.163:54514.service.
Jul 2 07:58:14.897838 sshd[3606]: Accepted publickey for core from 147.75.109.163 port 54514 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:58:14.899827 sshd[3606]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:14.907555 systemd[1]: Started session-20.scope.
Jul 2 07:58:14.908681 systemd-logind[1236]: New session 20 of user core.
Jul 2 07:58:15.184277 sshd[3606]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:15.189329 systemd-logind[1236]: Session 20 logged out. Waiting for processes to exit.
Jul 2 07:58:15.189771 systemd[1]: sshd@19-10.128.0.47:22-147.75.109.163:54514.service: Deactivated successfully.
Jul 2 07:58:15.191011 systemd[1]: session-20.scope: Deactivated successfully.
Jul 2 07:58:15.192715 systemd-logind[1236]: Removed session 20.
Jul 2 07:58:20.231075 systemd[1]: Started sshd@20-10.128.0.47:22-147.75.109.163:54522.service.
Jul 2 07:58:20.529515 sshd[3620]: Accepted publickey for core from 147.75.109.163 port 54522 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:58:20.531985 sshd[3620]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:20.541335 systemd[1]: Started session-21.scope.
Jul 2 07:58:20.541977 systemd-logind[1236]: New session 21 of user core.
Jul 2 07:58:20.811606 sshd[3620]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:20.816871 systemd[1]: sshd@20-10.128.0.47:22-147.75.109.163:54522.service: Deactivated successfully.
Jul 2 07:58:20.818035 systemd[1]: session-21.scope: Deactivated successfully.
Jul 2 07:58:20.819072 systemd-logind[1236]: Session 21 logged out. Waiting for processes to exit.
Jul 2 07:58:20.820439 systemd-logind[1236]: Removed session 21.
Jul 2 07:58:20.857534 systemd[1]: Started sshd@21-10.128.0.47:22-147.75.109.163:54524.service.
Jul 2 07:58:21.146674 sshd[3631]: Accepted publickey for core from 147.75.109.163 port 54524 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s
Jul 2 07:58:21.148688 sshd[3631]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Jul 2 07:58:21.155610 systemd[1]: Started session-22.scope.
Jul 2 07:58:21.156603 systemd-logind[1236]: New session 22 of user core.
Jul 2 07:58:22.986843 env[1227]: time="2024-07-02T07:58:22.986781089Z" level=info msg="StopContainer for \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\" with timeout 30 (s)"
Jul 2 07:58:22.988143 env[1227]: time="2024-07-02T07:58:22.988092499Z" level=info msg="Stop container \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\" with signal terminated"
Jul 2 07:58:23.018254 systemd[1]: cri-containerd-cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50.scope: Deactivated successfully.
Jul 2 07:58:23.040069 env[1227]: time="2024-07-02T07:58:23.039969334Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jul 2 07:58:23.051378 env[1227]: time="2024-07-02T07:58:23.051320826Z" level=info msg="StopContainer for \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\" with timeout 2 (s)"
Jul 2 07:58:23.052078 env[1227]: time="2024-07-02T07:58:23.052037744Z" level=info msg="Stop container \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\" with signal terminated"
Jul 2 07:58:23.067654 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50-rootfs.mount: Deactivated successfully.
Jul 2 07:58:23.077594 systemd-networkd[1029]: lxc_health: Link DOWN
Jul 2 07:58:23.077619 systemd-networkd[1029]: lxc_health: Lost carrier
Jul 2 07:58:23.099735 systemd[1]: cri-containerd-33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85.scope: Deactivated successfully.
Jul 2 07:58:23.100136 systemd[1]: cri-containerd-33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85.scope: Consumed 9.816s CPU time.
Jul 2 07:58:23.107629 env[1227]: time="2024-07-02T07:58:23.107562321Z" level=info msg="shim disconnected" id=cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50
Jul 2 07:58:23.108305 env[1227]: time="2024-07-02T07:58:23.108238636Z" level=warning msg="cleaning up after shim disconnected" id=cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50 namespace=k8s.io
Jul 2 07:58:23.108477 env[1227]: time="2024-07-02T07:58:23.108451509Z" level=info msg="cleaning up dead shim"
Jul 2 07:58:23.143633 env[1227]: time="2024-07-02T07:58:23.143552852Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3689 runtime=io.containerd.runc.v2\n"
Jul 2 07:58:23.150030 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85-rootfs.mount: Deactivated successfully.
Jul 2 07:58:23.152917 env[1227]: time="2024-07-02T07:58:23.152803567Z" level=info msg="StopContainer for \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\" returns successfully"
Jul 2 07:58:23.154530 env[1227]: time="2024-07-02T07:58:23.154485172Z" level=info msg="StopPodSandbox for \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\""
Jul 2 07:58:23.161201 env[1227]: time="2024-07-02T07:58:23.154869613Z" level=info msg="Container to stop \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:58:23.158641 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772-shm.mount: Deactivated successfully.
Jul 2 07:58:23.164470 env[1227]: time="2024-07-02T07:58:23.164414498Z" level=info msg="shim disconnected" id=33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85
Jul 2 07:58:23.164790 env[1227]: time="2024-07-02T07:58:23.164759035Z" level=warning msg="cleaning up after shim disconnected" id=33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85 namespace=k8s.io
Jul 2 07:58:23.164980 env[1227]: time="2024-07-02T07:58:23.164955213Z" level=info msg="cleaning up dead shim"
Jul 2 07:58:23.177040 systemd[1]: cri-containerd-3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772.scope: Deactivated successfully.
Jul 2 07:58:23.189002 env[1227]: time="2024-07-02T07:58:23.188839833Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3711 runtime=io.containerd.runc.v2\n"
Jul 2 07:58:23.192331 env[1227]: time="2024-07-02T07:58:23.192277889Z" level=info msg="StopContainer for \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\" returns successfully"
Jul 2 07:58:23.193415 env[1227]: time="2024-07-02T07:58:23.193368587Z" level=info msg="StopPodSandbox for \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\""
Jul 2 07:58:23.193742 env[1227]: time="2024-07-02T07:58:23.193704286Z" level=info msg="Container to stop \"5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:58:23.194246 env[1227]: time="2024-07-02T07:58:23.194180869Z" level=info msg="Container to stop \"69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:58:23.194417 env[1227]: time="2024-07-02T07:58:23.194389222Z" level=info msg="Container to stop \"dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:58:23.194534 env[1227]: time="2024-07-02T07:58:23.194509140Z" level=info msg="Container to stop \"878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:58:23.194653 env[1227]: time="2024-07-02T07:58:23.194625457Z" level=info msg="Container to stop \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Jul 2 07:58:23.198350 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0-shm.mount: Deactivated successfully.
Jul 2 07:58:23.211042 systemd[1]: cri-containerd-7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0.scope: Deactivated successfully.
Jul 2 07:58:23.234166 env[1227]: time="2024-07-02T07:58:23.234094574Z" level=info msg="shim disconnected" id=3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772
Jul 2 07:58:23.234810 env[1227]: time="2024-07-02T07:58:23.234768939Z" level=warning msg="cleaning up after shim disconnected" id=3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772 namespace=k8s.io
Jul 2 07:58:23.235215 env[1227]: time="2024-07-02T07:58:23.235186735Z" level=info msg="cleaning up dead shim"
Jul 2 07:58:23.253761 env[1227]: time="2024-07-02T07:58:23.251865626Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3752 runtime=io.containerd.runc.v2\n"
Jul 2 07:58:23.254540 env[1227]: time="2024-07-02T07:58:23.254488655Z" level=info msg="TearDown network for sandbox \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\" successfully"
Jul 2 07:58:23.254782 env[1227]: time="2024-07-02T07:58:23.254741795Z" level=info msg="StopPodSandbox for \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\" returns successfully"
Jul 2 07:58:23.283428 env[1227]: time="2024-07-02T07:58:23.283364300Z" level=info msg="shim disconnected" id=7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0
Jul 2 07:58:23.283986 env[1227]: time="2024-07-02T07:58:23.283947489Z" level=warning msg="cleaning up after shim disconnected" id=7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0 namespace=k8s.io
Jul 2 07:58:23.283986 env[1227]: time="2024-07-02T07:58:23.283982176Z" level=info msg="cleaning up dead shim"
Jul 2 07:58:23.298007 env[1227]: time="2024-07-02T07:58:23.297934175Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3772 runtime=io.containerd.runc.v2\n"
Jul 2 07:58:23.298481 env[1227]: time="2024-07-02T07:58:23.298437251Z" level=info msg="TearDown network for sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" successfully"
Jul 2 07:58:23.298717 env[1227]: time="2024-07-02T07:58:23.298518122Z" level=info msg="StopPodSandbox for \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" returns successfully"
Jul 2 07:58:23.371579 kubelet[2081]: I0702 07:58:23.371519 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00977a18-a311-4fc8-b6e5-93e3844870c6-cilium-config-path\") pod \"00977a18-a311-4fc8-b6e5-93e3844870c6\" (UID: \"00977a18-a311-4fc8-b6e5-93e3844870c6\") "
Jul 2 07:58:23.372490 kubelet[2081]: I0702 07:58:23.372457 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44a80c27-3fc3-4d84-920e-71d443f5afc0-hubble-tls\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") "
Jul 2 07:58:23.372643 kubelet[2081]: I0702 07:58:23.372622 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: 
\"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-cni-path\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.372775 kubelet[2081]: I0702 07:58:23.372756 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44a80c27-3fc3-4d84-920e-71d443f5afc0-clustermesh-secrets\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.372932 kubelet[2081]: I0702 07:58:23.372909 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-xtables-lock\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.373080 kubelet[2081]: I0702 07:58:23.373059 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-etc-cni-netd\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.373203 kubelet[2081]: I0702 07:58:23.373184 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-lib-modules\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.373317 kubelet[2081]: I0702 07:58:23.373299 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-bpf-maps\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.373446 kubelet[2081]: I0702 07:58:23.373425 2081 reconciler_common.go:161] 
"operationExecutor.UnmountVolume started for volume \"kube-api-access-lp6cz\" (UniqueName: \"kubernetes.io/projected/00977a18-a311-4fc8-b6e5-93e3844870c6-kube-api-access-lp6cz\") pod \"00977a18-a311-4fc8-b6e5-93e3844870c6\" (UID: \"00977a18-a311-4fc8-b6e5-93e3844870c6\") " Jul 2 07:58:23.373571 kubelet[2081]: I0702 07:58:23.373551 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-hostproc\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.373716 kubelet[2081]: I0702 07:58:23.373695 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44a80c27-3fc3-4d84-920e-71d443f5afc0-cilium-config-path\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.376414 kubelet[2081]: I0702 07:58:23.376381 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-host-proc-sys-kernel\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.376621 kubelet[2081]: I0702 07:58:23.376597 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-cilium-run\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.376814 kubelet[2081]: I0702 07:58:23.376324 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/00977a18-a311-4fc8-b6e5-93e3844870c6-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "00977a18-a311-4fc8-b6e5-93e3844870c6" (UID: 
"00977a18-a311-4fc8-b6e5-93e3844870c6"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:58:23.377014 kubelet[2081]: I0702 07:58:23.376778 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.377192 kubelet[2081]: I0702 07:58:23.377165 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.377326 kubelet[2081]: I0702 07:58:23.377306 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.377605 kubelet[2081]: I0702 07:58:23.377568 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44a80c27-3fc3-4d84-920e-71d443f5afc0-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:58:23.377705 kubelet[2081]: I0702 07:58:23.377652 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.377705 kubelet[2081]: I0702 07:58:23.377690 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.378513 kubelet[2081]: I0702 07:58:23.378478 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-cni-path" (OuterVolumeSpecName: "cni-path") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.379401 kubelet[2081]: I0702 07:58:23.379159 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-hostproc" (OuterVolumeSpecName: "hostproc") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.379623 kubelet[2081]: I0702 07:58:23.379581 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.384175 kubelet[2081]: I0702 07:58:23.384125 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a80c27-3fc3-4d84-920e-71d443f5afc0-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:23.385185 kubelet[2081]: I0702 07:58:23.385141 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00977a18-a311-4fc8-b6e5-93e3844870c6-kube-api-access-lp6cz" (OuterVolumeSpecName: "kube-api-access-lp6cz") pod "00977a18-a311-4fc8-b6e5-93e3844870c6" (UID: "00977a18-a311-4fc8-b6e5-93e3844870c6"). InnerVolumeSpecName "kube-api-access-lp6cz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:23.387285 kubelet[2081]: I0702 07:58:23.387241 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44a80c27-3fc3-4d84-920e-71d443f5afc0-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "clustermesh-secrets". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:58:23.477902 kubelet[2081]: I0702 07:58:23.477763 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-cilium-cgroup\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.478205 kubelet[2081]: I0702 07:58:23.478175 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-host-proc-sys-net\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.478328 kubelet[2081]: I0702 07:58:23.478224 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v2mdh\" (UniqueName: \"kubernetes.io/projected/44a80c27-3fc3-4d84-920e-71d443f5afc0-kube-api-access-v2mdh\") pod \"44a80c27-3fc3-4d84-920e-71d443f5afc0\" (UID: \"44a80c27-3fc3-4d84-920e-71d443f5afc0\") " Jul 2 07:58:23.478328 kubelet[2081]: I0702 07:58:23.478280 2081 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/00977a18-a311-4fc8-b6e5-93e3844870c6-cilium-config-path\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.478328 kubelet[2081]: I0702 07:58:23.478301 2081 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/44a80c27-3fc3-4d84-920e-71d443f5afc0-hubble-tls\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.478328 kubelet[2081]: I0702 07:58:23.478317 2081 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-cni-path\") on node 
\"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.478564 kubelet[2081]: I0702 07:58:23.478334 2081 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-etc-cni-netd\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.478564 kubelet[2081]: I0702 07:58:23.478350 2081 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-lib-modules\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.478564 kubelet[2081]: I0702 07:58:23.478373 2081 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/44a80c27-3fc3-4d84-920e-71d443f5afc0-clustermesh-secrets\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.478564 kubelet[2081]: I0702 07:58:23.478390 2081 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-xtables-lock\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.478564 kubelet[2081]: I0702 07:58:23.478406 2081 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-bpf-maps\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.478564 kubelet[2081]: I0702 07:58:23.478422 2081 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-hostproc\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.478564 kubelet[2081]: 
I0702 07:58:23.478439 2081 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-lp6cz\" (UniqueName: \"kubernetes.io/projected/00977a18-a311-4fc8-b6e5-93e3844870c6-kube-api-access-lp6cz\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.478952 kubelet[2081]: I0702 07:58:23.478495 2081 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-host-proc-sys-kernel\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.478952 kubelet[2081]: I0702 07:58:23.478521 2081 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-cilium-run\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.478952 kubelet[2081]: I0702 07:58:23.478540 2081 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/44a80c27-3fc3-4d84-920e-71d443f5afc0-cilium-config-path\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.479186 kubelet[2081]: I0702 07:58:23.477844 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "cilium-cgroup". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.479389 kubelet[2081]: I0702 07:58:23.479351 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:23.483013 kubelet[2081]: I0702 07:58:23.482966 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a80c27-3fc3-4d84-920e-71d443f5afc0-kube-api-access-v2mdh" (OuterVolumeSpecName: "kube-api-access-v2mdh") pod "44a80c27-3fc3-4d84-920e-71d443f5afc0" (UID: "44a80c27-3fc3-4d84-920e-71d443f5afc0"). InnerVolumeSpecName "kube-api-access-v2mdh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:23.579762 kubelet[2081]: I0702 07:58:23.579606 2081 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-cilium-cgroup\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.579762 kubelet[2081]: I0702 07:58:23.579654 2081 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/44a80c27-3fc3-4d84-920e-71d443f5afc0-host-proc-sys-net\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.579762 kubelet[2081]: I0702 07:58:23.579675 2081 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-v2mdh\" (UniqueName: \"kubernetes.io/projected/44a80c27-3fc3-4d84-920e-71d443f5afc0-kube-api-access-v2mdh\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:23.797926 kubelet[2081]: I0702 07:58:23.797574 
2081 scope.go:117] "RemoveContainer" containerID="cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50" Jul 2 07:58:23.803825 env[1227]: time="2024-07-02T07:58:23.803736038Z" level=info msg="RemoveContainer for \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\"" Jul 2 07:58:23.806075 systemd[1]: Removed slice kubepods-besteffort-pod00977a18_a311_4fc8_b6e5_93e3844870c6.slice. Jul 2 07:58:23.813705 env[1227]: time="2024-07-02T07:58:23.813617660Z" level=info msg="RemoveContainer for \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\" returns successfully" Jul 2 07:58:23.815941 kubelet[2081]: I0702 07:58:23.815877 2081 scope.go:117] "RemoveContainer" containerID="cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50" Jul 2 07:58:23.816770 systemd[1]: Removed slice kubepods-burstable-pod44a80c27_3fc3_4d84_920e_71d443f5afc0.slice. Jul 2 07:58:23.816980 systemd[1]: kubepods-burstable-pod44a80c27_3fc3_4d84_920e_71d443f5afc0.slice: Consumed 9.973s CPU time. 
Jul 2 07:58:23.819434 env[1227]: time="2024-07-02T07:58:23.819308992Z" level=error msg="ContainerStatus for \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\": not found" Jul 2 07:58:23.820515 kubelet[2081]: E0702 07:58:23.819819 2081 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\": not found" containerID="cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50" Jul 2 07:58:23.820515 kubelet[2081]: I0702 07:58:23.819866 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50"} err="failed to get container status \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\": rpc error: code = NotFound desc = an error occurred when try to find container \"cb3470373089cd7bab7afffb4ac1aca7b8992c05443bd37e12b4f592d60a4f50\": not found" Jul 2 07:58:23.820515 kubelet[2081]: I0702 07:58:23.820004 2081 scope.go:117] "RemoveContainer" containerID="33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85" Jul 2 07:58:23.825911 env[1227]: time="2024-07-02T07:58:23.825845704Z" level=info msg="RemoveContainer for \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\"" Jul 2 07:58:23.831808 env[1227]: time="2024-07-02T07:58:23.831084470Z" level=info msg="RemoveContainer for \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\" returns successfully" Jul 2 07:58:23.833224 kubelet[2081]: I0702 07:58:23.833183 2081 scope.go:117] "RemoveContainer" containerID="dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289" Jul 2 07:58:23.837778 env[1227]: 
time="2024-07-02T07:58:23.837732711Z" level=info msg="RemoveContainer for \"dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289\"" Jul 2 07:58:23.844060 env[1227]: time="2024-07-02T07:58:23.843981591Z" level=info msg="RemoveContainer for \"dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289\" returns successfully" Jul 2 07:58:23.844403 kubelet[2081]: I0702 07:58:23.844372 2081 scope.go:117] "RemoveContainer" containerID="69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f" Jul 2 07:58:23.846049 env[1227]: time="2024-07-02T07:58:23.845969772Z" level=info msg="RemoveContainer for \"69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f\"" Jul 2 07:58:23.850713 env[1227]: time="2024-07-02T07:58:23.850661483Z" level=info msg="RemoveContainer for \"69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f\" returns successfully" Jul 2 07:58:23.851055 kubelet[2081]: I0702 07:58:23.851023 2081 scope.go:117] "RemoveContainer" containerID="878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361" Jul 2 07:58:23.852754 env[1227]: time="2024-07-02T07:58:23.852697218Z" level=info msg="RemoveContainer for \"878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361\"" Jul 2 07:58:23.857595 env[1227]: time="2024-07-02T07:58:23.857538764Z" level=info msg="RemoveContainer for \"878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361\" returns successfully" Jul 2 07:58:23.857842 kubelet[2081]: I0702 07:58:23.857812 2081 scope.go:117] "RemoveContainer" containerID="5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9" Jul 2 07:58:23.859584 env[1227]: time="2024-07-02T07:58:23.859527371Z" level=info msg="RemoveContainer for \"5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9\"" Jul 2 07:58:23.863831 env[1227]: time="2024-07-02T07:58:23.863773733Z" level=info msg="RemoveContainer for \"5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9\" returns successfully" Jul 2 
07:58:23.864212 kubelet[2081]: I0702 07:58:23.864082 2081 scope.go:117] "RemoveContainer" containerID="33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85" Jul 2 07:58:23.864467 env[1227]: time="2024-07-02T07:58:23.864385569Z" level=error msg="ContainerStatus for \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\": not found" Jul 2 07:58:23.864619 kubelet[2081]: E0702 07:58:23.864582 2081 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\": not found" containerID="33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85" Jul 2 07:58:23.864726 kubelet[2081]: I0702 07:58:23.864619 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85"} err="failed to get container status \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\": rpc error: code = NotFound desc = an error occurred when try to find container \"33d652723da8e33e033c05f76bc1beefbcd4691a8e37dccbb9bab8e6e1498f85\": not found" Jul 2 07:58:23.864726 kubelet[2081]: I0702 07:58:23.864653 2081 scope.go:117] "RemoveContainer" containerID="dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289" Jul 2 07:58:23.865073 env[1227]: time="2024-07-02T07:58:23.864973089Z" level=error msg="ContainerStatus for \"dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289\": not found" Jul 2 07:58:23.865509 kubelet[2081]: E0702 07:58:23.865297 2081 
remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289\": not found" containerID="dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289" Jul 2 07:58:23.865509 kubelet[2081]: I0702 07:58:23.865334 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289"} err="failed to get container status \"dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289\": rpc error: code = NotFound desc = an error occurred when try to find container \"dcb26c963a61ca005250574035196a54baf90171d0561c3c693533ae40ced289\": not found" Jul 2 07:58:23.865509 kubelet[2081]: I0702 07:58:23.865363 2081 scope.go:117] "RemoveContainer" containerID="69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f" Jul 2 07:58:23.865764 env[1227]: time="2024-07-02T07:58:23.865677875Z" level=error msg="ContainerStatus for \"69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f\": not found" Jul 2 07:58:23.865934 kubelet[2081]: E0702 07:58:23.865876 2081 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f\": not found" containerID="69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f" Jul 2 07:58:23.866017 kubelet[2081]: I0702 07:58:23.865935 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f"} err="failed to get container status 
\"69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f\": rpc error: code = NotFound desc = an error occurred when try to find container \"69890699f36c52c8340f57d7ca5a4b8453423f40b02d1517090d0177e29f8a3f\": not found" Jul 2 07:58:23.866017 kubelet[2081]: I0702 07:58:23.865961 2081 scope.go:117] "RemoveContainer" containerID="878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361" Jul 2 07:58:23.866325 env[1227]: time="2024-07-02T07:58:23.866244028Z" level=error msg="ContainerStatus for \"878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361\": not found" Jul 2 07:58:23.866466 kubelet[2081]: E0702 07:58:23.866436 2081 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361\": not found" containerID="878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361" Jul 2 07:58:23.866557 kubelet[2081]: I0702 07:58:23.866473 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361"} err="failed to get container status \"878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361\": rpc error: code = NotFound desc = an error occurred when try to find container \"878e396cb51deaf917d38fabc52661191e65ff5bcd64e7fcbfba4962ccafb361\": not found" Jul 2 07:58:23.866557 kubelet[2081]: I0702 07:58:23.866501 2081 scope.go:117] "RemoveContainer" containerID="5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9" Jul 2 07:58:23.866844 env[1227]: time="2024-07-02T07:58:23.866759458Z" level=error msg="ContainerStatus for \"5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9\" failed" 
error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9\": not found" Jul 2 07:58:23.867003 kubelet[2081]: E0702 07:58:23.866970 2081 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9\": not found" containerID="5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9" Jul 2 07:58:23.867086 kubelet[2081]: I0702 07:58:23.866998 2081 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9"} err="failed to get container status \"5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b3e0770979d01a6167093a8648bf62059ea66461f46471b506ad39e5672a5e9\": not found" Jul 2 07:58:24.007421 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0-rootfs.mount: Deactivated successfully. Jul 2 07:58:24.007576 systemd[1]: var-lib-kubelet-pods-44a80c27\x2d3fc3\x2d4d84\x2d920e\x2d71d443f5afc0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv2mdh.mount: Deactivated successfully. Jul 2 07:58:24.007682 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772-rootfs.mount: Deactivated successfully. Jul 2 07:58:24.007781 systemd[1]: var-lib-kubelet-pods-00977a18\x2da311\x2d4fc8\x2db6e5\x2d93e3844870c6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlp6cz.mount: Deactivated successfully. 
Jul 2 07:58:24.007945 systemd[1]: var-lib-kubelet-pods-44a80c27\x2d3fc3\x2d4d84\x2d920e\x2d71d443f5afc0-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:58:24.008068 systemd[1]: var-lib-kubelet-pods-44a80c27\x2d3fc3\x2d4d84\x2d920e\x2d71d443f5afc0-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Jul 2 07:58:24.331856 kubelet[2081]: I0702 07:58:24.331804 2081 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00977a18-a311-4fc8-b6e5-93e3844870c6" path="/var/lib/kubelet/pods/00977a18-a311-4fc8-b6e5-93e3844870c6/volumes" Jul 2 07:58:24.332640 kubelet[2081]: I0702 07:58:24.332601 2081 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44a80c27-3fc3-4d84-920e-71d443f5afc0" path="/var/lib/kubelet/pods/44a80c27-3fc3-4d84-920e-71d443f5afc0/volumes" Jul 2 07:58:24.972745 sshd[3631]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:24.977665 systemd[1]: sshd@21-10.128.0.47:22-147.75.109.163:54524.service: Deactivated successfully. Jul 2 07:58:24.978728 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 07:58:24.978959 systemd-logind[1236]: Session 22 logged out. Waiting for processes to exit. Jul 2 07:58:24.979746 systemd[1]: session-22.scope: Consumed 1.073s CPU time. Jul 2 07:58:24.980674 systemd-logind[1236]: Removed session 22. Jul 2 07:58:25.020294 systemd[1]: Started sshd@22-10.128.0.47:22-147.75.109.163:37008.service. Jul 2 07:58:25.313917 sshd[3791]: Accepted publickey for core from 147.75.109.163 port 37008 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:58:25.316323 sshd[3791]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:25.323653 systemd[1]: Started session-23.scope. Jul 2 07:58:25.324586 systemd-logind[1236]: New session 23 of user core. 
Jul 2 07:58:25.476470 kubelet[2081]: E0702 07:58:25.476383 2081 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jul 2 07:58:26.230338 kubelet[2081]: I0702 07:58:26.230277 2081 topology_manager.go:215] "Topology Admit Handler" podUID="781c57d8-de2b-4ed9-8693-178e0ba41408" podNamespace="kube-system" podName="cilium-nmqm5" Jul 2 07:58:26.230576 kubelet[2081]: E0702 07:58:26.230363 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44a80c27-3fc3-4d84-920e-71d443f5afc0" containerName="apply-sysctl-overwrites" Jul 2 07:58:26.230576 kubelet[2081]: E0702 07:58:26.230380 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44a80c27-3fc3-4d84-920e-71d443f5afc0" containerName="mount-bpf-fs" Jul 2 07:58:26.230576 kubelet[2081]: E0702 07:58:26.230390 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44a80c27-3fc3-4d84-920e-71d443f5afc0" containerName="clean-cilium-state" Jul 2 07:58:26.230576 kubelet[2081]: E0702 07:58:26.230401 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="00977a18-a311-4fc8-b6e5-93e3844870c6" containerName="cilium-operator" Jul 2 07:58:26.230576 kubelet[2081]: E0702 07:58:26.230410 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44a80c27-3fc3-4d84-920e-71d443f5afc0" containerName="mount-cgroup" Jul 2 07:58:26.230576 kubelet[2081]: E0702 07:58:26.230419 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="44a80c27-3fc3-4d84-920e-71d443f5afc0" containerName="cilium-agent" Jul 2 07:58:26.230576 kubelet[2081]: I0702 07:58:26.230453 2081 memory_manager.go:354] "RemoveStaleState removing state" podUID="00977a18-a311-4fc8-b6e5-93e3844870c6" containerName="cilium-operator" Jul 2 07:58:26.230576 kubelet[2081]: I0702 07:58:26.230463 2081 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="44a80c27-3fc3-4d84-920e-71d443f5afc0" containerName="cilium-agent" Jul 2 07:58:26.239573 systemd[1]: Created slice kubepods-burstable-pod781c57d8_de2b_4ed9_8693_178e0ba41408.slice. Jul 2 07:58:26.253990 sshd[3791]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:26.259085 systemd[1]: sshd@22-10.128.0.47:22-147.75.109.163:37008.service: Deactivated successfully. Jul 2 07:58:26.260268 systemd[1]: session-23.scope: Deactivated successfully. Jul 2 07:58:26.260939 systemd-logind[1236]: Session 23 logged out. Waiting for processes to exit. Jul 2 07:58:26.262243 systemd-logind[1236]: Removed session 23. Jul 2 07:58:26.300864 systemd[1]: Started sshd@23-10.128.0.47:22-147.75.109.163:37010.service. Jul 2 07:58:26.306417 kubelet[2081]: I0702 07:58:26.306380 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-hostproc\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.306687 kubelet[2081]: I0702 07:58:26.306652 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cc2dv\" (UniqueName: \"kubernetes.io/projected/781c57d8-de2b-4ed9-8693-178e0ba41408-kube-api-access-cc2dv\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.306833 kubelet[2081]: I0702 07:58:26.306813 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-cni-path\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.307001 kubelet[2081]: I0702 07:58:26.306976 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-etc-cni-netd\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.307160 kubelet[2081]: I0702 07:58:26.307139 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-config-path\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.307434 kubelet[2081]: I0702 07:58:26.307410 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-host-proc-sys-net\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.307602 kubelet[2081]: I0702 07:58:26.307581 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-host-proc-sys-kernel\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.307744 kubelet[2081]: I0702 07:58:26.307724 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/781c57d8-de2b-4ed9-8693-178e0ba41408-hubble-tls\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.307897 kubelet[2081]: I0702 07:58:26.307862 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-run\") pod 
\"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.308056 kubelet[2081]: I0702 07:58:26.308031 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-bpf-maps\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.308233 kubelet[2081]: I0702 07:58:26.308212 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-xtables-lock\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.308383 kubelet[2081]: I0702 07:58:26.308365 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-lib-modules\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.308538 kubelet[2081]: I0702 07:58:26.308513 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/781c57d8-de2b-4ed9-8693-178e0ba41408-clustermesh-secrets\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.308697 kubelet[2081]: I0702 07:58:26.308677 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-cgroup\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.308849 kubelet[2081]: I0702 
07:58:26.308829 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-ipsec-secrets\") pod \"cilium-nmqm5\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " pod="kube-system/cilium-nmqm5" Jul 2 07:58:26.329193 kubelet[2081]: E0702 07:58:26.329140 2081 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-5lz68" podUID="7975776e-79b6-4a0e-9f03-52ab481dc130" Jul 2 07:58:26.546081 env[1227]: time="2024-07-02T07:58:26.545870915Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nmqm5,Uid:781c57d8-de2b-4ed9-8693-178e0ba41408,Namespace:kube-system,Attempt:0,}" Jul 2 07:58:26.569353 env[1227]: time="2024-07-02T07:58:26.569266537Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:58:26.569630 env[1227]: time="2024-07-02T07:58:26.569589117Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:58:26.569807 env[1227]: time="2024-07-02T07:58:26.569769594Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:58:26.570183 env[1227]: time="2024-07-02T07:58:26.570134117Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71 pid=3815 runtime=io.containerd.runc.v2 Jul 2 07:58:26.591835 systemd[1]: Started cri-containerd-410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71.scope. 
Jul 2 07:58:26.613902 sshd[3801]: Accepted publickey for core from 147.75.109.163 port 37010 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:58:26.613692 sshd[3801]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:26.621479 systemd[1]: Started session-24.scope. Jul 2 07:58:26.622357 systemd-logind[1236]: New session 24 of user core. Jul 2 07:58:26.646217 env[1227]: time="2024-07-02T07:58:26.646110505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-nmqm5,Uid:781c57d8-de2b-4ed9-8693-178e0ba41408,Namespace:kube-system,Attempt:0,} returns sandbox id \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\"" Jul 2 07:58:26.653776 env[1227]: time="2024-07-02T07:58:26.653725843Z" level=info msg="CreateContainer within sandbox \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:58:26.670176 env[1227]: time="2024-07-02T07:58:26.670108059Z" level=info msg="CreateContainer within sandbox \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831\"" Jul 2 07:58:26.672617 env[1227]: time="2024-07-02T07:58:26.671900442Z" level=info msg="StartContainer for \"1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831\"" Jul 2 07:58:26.702064 systemd[1]: Started cri-containerd-1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831.scope. Jul 2 07:58:26.715791 systemd[1]: cri-containerd-1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831.scope: Deactivated successfully. 
Jul 2 07:58:26.740232 env[1227]: time="2024-07-02T07:58:26.740157800Z" level=info msg="shim disconnected" id=1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831 Jul 2 07:58:26.740532 env[1227]: time="2024-07-02T07:58:26.740233978Z" level=warning msg="cleaning up after shim disconnected" id=1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831 namespace=k8s.io Jul 2 07:58:26.740532 env[1227]: time="2024-07-02T07:58:26.740295947Z" level=info msg="cleaning up dead shim" Jul 2 07:58:26.753117 env[1227]: time="2024-07-02T07:58:26.753048560Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3876 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T07:58:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 07:58:26.753559 env[1227]: time="2024-07-02T07:58:26.753414685Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Jul 2 07:58:26.757059 env[1227]: time="2024-07-02T07:58:26.756991925Z" level=error msg="Failed to pipe stderr of container \"1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831\"" error="reading from a closed fifo" Jul 2 07:58:26.757256 env[1227]: time="2024-07-02T07:58:26.757034643Z" level=error msg="Failed to pipe stdout of container \"1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831\"" error="reading from a closed fifo" Jul 2 07:58:26.759925 env[1227]: time="2024-07-02T07:58:26.759823419Z" level=error msg="StartContainer for \"1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Jul 2 07:58:26.760245 kubelet[2081]: E0702 07:58:26.760175 2081 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831" Jul 2 07:58:26.760744 kubelet[2081]: E0702 07:58:26.760395 2081 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 07:58:26.760744 kubelet[2081]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 07:58:26.760744 kubelet[2081]: rm /hostbin/cilium-mount Jul 2 07:58:26.762186 kubelet[2081]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cc2dv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-nmqm5_kube-system(781c57d8-de2b-4ed9-8693-178e0ba41408): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 07:58:26.762186 kubelet[2081]: E0702 07:58:26.760443 2081 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nmqm5" podUID="781c57d8-de2b-4ed9-8693-178e0ba41408" Jul 2 07:58:26.835210 env[1227]: time="2024-07-02T07:58:26.835049504Z" level=info msg="CreateContainer within sandbox \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Jul 2 07:58:26.875208 env[1227]: time="2024-07-02T07:58:26.875137519Z" level=info msg="CreateContainer within sandbox \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037\"" Jul 2 07:58:26.877290 env[1227]: time="2024-07-02T07:58:26.877242135Z" level=info msg="StartContainer for \"e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037\"" Jul 2 07:58:26.902860 systemd[1]: Started cri-containerd-e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037.scope. Jul 2 07:58:26.937841 systemd[1]: cri-containerd-e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037.scope: Deactivated successfully. 
Jul 2 07:58:26.955434 env[1227]: time="2024-07-02T07:58:26.955362194Z" level=info msg="shim disconnected" id=e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037 Jul 2 07:58:26.955936 env[1227]: time="2024-07-02T07:58:26.955872577Z" level=warning msg="cleaning up after shim disconnected" id=e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037 namespace=k8s.io Jul 2 07:58:26.956125 env[1227]: time="2024-07-02T07:58:26.956101229Z" level=info msg="cleaning up dead shim" Jul 2 07:58:26.976027 env[1227]: time="2024-07-02T07:58:26.975916235Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3922 runtime=io.containerd.runc.v2\ntime=\"2024-07-02T07:58:26Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Jul 2 07:58:26.976745 env[1227]: time="2024-07-02T07:58:26.976655776Z" level=error msg="copy shim log" error="read /proc/self/fd/30: file already closed" Jul 2 07:58:26.980387 env[1227]: time="2024-07-02T07:58:26.977999062Z" level=error msg="Failed to pipe stdout of container \"e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037\"" error="reading from a closed fifo" Jul 2 07:58:26.980670 env[1227]: time="2024-07-02T07:58:26.979970344Z" level=error msg="Failed to pipe stderr of container \"e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037\"" error="reading from a closed fifo" Jul 2 07:58:26.982866 env[1227]: time="2024-07-02T07:58:26.982800250Z" level=error msg="StartContainer for \"e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write 
/proc/self/attr/keycreate: invalid argument: unknown" Jul 2 07:58:26.984004 kubelet[2081]: E0702 07:58:26.983386 2081 remote_runtime.go:343] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037" Jul 2 07:58:26.984004 kubelet[2081]: E0702 07:58:26.983564 2081 kuberuntime_manager.go:1256] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Jul 2 07:58:26.984004 kubelet[2081]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Jul 2 07:58:26.984004 kubelet[2081]: rm /hostbin/cilium-mount Jul 2 07:58:26.984004 kubelet[2081]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-cc2dv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:&AppArmorProfile{Type:Unconfined,LocalhostProfile:nil,},},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cilium-nmqm5_kube-system(781c57d8-de2b-4ed9-8693-178e0ba41408): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Jul 2 07:58:26.984004 kubelet[2081]: E0702 07:58:26.983605 2081 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-nmqm5" podUID="781c57d8-de2b-4ed9-8693-178e0ba41408" Jul 2 07:58:26.995511 sshd[3801]: pam_unix(sshd:session): session closed for user core Jul 2 07:58:27.001361 systemd[1]: sshd@23-10.128.0.47:22-147.75.109.163:37010.service: Deactivated successfully. Jul 2 07:58:27.002514 systemd[1]: session-24.scope: Deactivated successfully. Jul 2 07:58:27.003425 systemd-logind[1236]: Session 24 logged out. Waiting for processes to exit. Jul 2 07:58:27.004931 systemd-logind[1236]: Removed session 24. Jul 2 07:58:27.044057 systemd[1]: Started sshd@24-10.128.0.47:22-147.75.109.163:37014.service. Jul 2 07:58:27.341325 sshd[3936]: Accepted publickey for core from 147.75.109.163 port 37014 ssh2: RSA SHA256:GSxC+U3gD/L2tgNRotlYHTLXvYsmaWMokGyA5lBCl2s Jul 2 07:58:27.343667 sshd[3936]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:58:27.351432 systemd[1]: Started session-25.scope. Jul 2 07:58:27.352936 systemd-logind[1236]: New session 25 of user core. 
Jul 2 07:58:27.824412 kubelet[2081]: I0702 07:58:27.824378 2081 scope.go:117] "RemoveContainer" containerID="1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831" Jul 2 07:58:27.830861 env[1227]: time="2024-07-02T07:58:27.825375159Z" level=info msg="StopPodSandbox for \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\"" Jul 2 07:58:27.830861 env[1227]: time="2024-07-02T07:58:27.825451179Z" level=info msg="Container to stop \"1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:27.830861 env[1227]: time="2024-07-02T07:58:27.825473253Z" level=info msg="Container to stop \"e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Jul 2 07:58:27.828914 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71-shm.mount: Deactivated successfully. Jul 2 07:58:27.839579 systemd[1]: cri-containerd-410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71.scope: Deactivated successfully. Jul 2 07:58:27.843706 env[1227]: time="2024-07-02T07:58:27.843656569Z" level=info msg="RemoveContainer for \"1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831\"" Jul 2 07:58:27.849849 env[1227]: time="2024-07-02T07:58:27.849792090Z" level=info msg="RemoveContainer for \"1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831\" returns successfully" Jul 2 07:58:27.881505 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71-rootfs.mount: Deactivated successfully. 
Jul 2 07:58:27.888572 env[1227]: time="2024-07-02T07:58:27.888507336Z" level=info msg="shim disconnected" id=410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71 Jul 2 07:58:27.889443 env[1227]: time="2024-07-02T07:58:27.889399977Z" level=warning msg="cleaning up after shim disconnected" id=410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71 namespace=k8s.io Jul 2 07:58:27.889608 env[1227]: time="2024-07-02T07:58:27.889584364Z" level=info msg="cleaning up dead shim" Jul 2 07:58:27.901867 env[1227]: time="2024-07-02T07:58:27.901799268Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3963 runtime=io.containerd.runc.v2\n" Jul 2 07:58:27.902332 env[1227]: time="2024-07-02T07:58:27.902274708Z" level=info msg="TearDown network for sandbox \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\" successfully" Jul 2 07:58:27.902332 env[1227]: time="2024-07-02T07:58:27.902313835Z" level=info msg="StopPodSandbox for \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\" returns successfully" Jul 2 07:58:28.021694 kubelet[2081]: I0702 07:58:28.021643 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/781c57d8-de2b-4ed9-8693-178e0ba41408-hubble-tls\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022053 kubelet[2081]: I0702 07:58:28.022024 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-host-proc-sys-net\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022270 kubelet[2081]: I0702 07:58:28.022242 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" 
(UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-host-proc-sys-kernel\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022359 kubelet[2081]: I0702 07:58:28.022290 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-ipsec-secrets\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022359 kubelet[2081]: I0702 07:58:28.022322 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-config-path\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022359 kubelet[2081]: I0702 07:58:28.022350 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-bpf-maps\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022557 kubelet[2081]: I0702 07:58:28.022377 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-xtables-lock\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022557 kubelet[2081]: I0702 07:58:28.022402 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-hostproc\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022557 kubelet[2081]: I0702 07:58:28.022426 2081 
reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-cni-path\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022557 kubelet[2081]: I0702 07:58:28.022449 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-etc-cni-netd\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022557 kubelet[2081]: I0702 07:58:28.022476 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-cgroup\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022557 kubelet[2081]: I0702 07:58:28.022504 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cc2dv\" (UniqueName: \"kubernetes.io/projected/781c57d8-de2b-4ed9-8693-178e0ba41408-kube-api-access-cc2dv\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022557 kubelet[2081]: I0702 07:58:28.022533 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-lib-modules\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022994 kubelet[2081]: I0702 07:58:28.022563 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/781c57d8-de2b-4ed9-8693-178e0ba41408-clustermesh-secrets\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: 
\"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022994 kubelet[2081]: I0702 07:58:28.022591 2081 reconciler_common.go:161] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-run\") pod \"781c57d8-de2b-4ed9-8693-178e0ba41408\" (UID: \"781c57d8-de2b-4ed9-8693-178e0ba41408\") " Jul 2 07:58:28.022994 kubelet[2081]: I0702 07:58:28.022187 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:28.022994 kubelet[2081]: I0702 07:58:28.022656 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:28.022994 kubelet[2081]: I0702 07:58:28.022701 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:28.023778 kubelet[2081]: I0702 07:58:28.023746 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:28.024024 kubelet[2081]: I0702 07:58:28.023917 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:28.024024 kubelet[2081]: I0702 07:58:28.023940 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:28.024024 kubelet[2081]: I0702 07:58:28.023967 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-hostproc" (OuterVolumeSpecName: "hostproc") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:28.024249 kubelet[2081]: I0702 07:58:28.023983 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-cni-path" (OuterVolumeSpecName: "cni-path") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:28.024411 kubelet[2081]: I0702 07:58:28.024382 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:28.030413 kubelet[2081]: I0702 07:58:28.030359 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jul 2 07:58:28.030622 kubelet[2081]: I0702 07:58:28.030591 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jul 2 07:58:28.034080 systemd[1]: var-lib-kubelet-pods-781c57d8\x2dde2b\x2d4ed9\x2d8693\x2d178e0ba41408-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Jul 2 07:58:28.040612 systemd[1]: var-lib-kubelet-pods-781c57d8\x2dde2b\x2d4ed9\x2d8693\x2d178e0ba41408-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Jul 2 07:58:28.043286 kubelet[2081]: I0702 07:58:28.043240 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781c57d8-de2b-4ed9-8693-178e0ba41408-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:28.043605 kubelet[2081]: I0702 07:58:28.043504 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781c57d8-de2b-4ed9-8693-178e0ba41408-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:58:28.043765 kubelet[2081]: I0702 07:58:28.043547 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jul 2 07:58:28.045725 kubelet[2081]: I0702 07:58:28.045690 2081 operation_generator.go:887] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/781c57d8-de2b-4ed9-8693-178e0ba41408-kube-api-access-cc2dv" (OuterVolumeSpecName: "kube-api-access-cc2dv") pod "781c57d8-de2b-4ed9-8693-178e0ba41408" (UID: "781c57d8-de2b-4ed9-8693-178e0ba41408"). InnerVolumeSpecName "kube-api-access-cc2dv". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jul 2 07:58:28.123182 kubelet[2081]: I0702 07:58:28.123138 2081 reconciler_common.go:289] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-hostproc\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.123477 kubelet[2081]: I0702 07:58:28.123456 2081 reconciler_common.go:289] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-bpf-maps\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.123611 kubelet[2081]: I0702 07:58:28.123594 2081 reconciler_common.go:289] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-xtables-lock\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.123762 kubelet[2081]: I0702 07:58:28.123714 2081 reconciler_common.go:289] "Volume detached for volume \"kube-api-access-cc2dv\" (UniqueName: \"kubernetes.io/projected/781c57d8-de2b-4ed9-8693-178e0ba41408-kube-api-access-cc2dv\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.123935 kubelet[2081]: I0702 07:58:28.123875 2081 reconciler_common.go:289] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-cni-path\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.124768 kubelet[2081]: I0702 07:58:28.124732 2081 reconciler_common.go:289] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-etc-cni-netd\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.125065 kubelet[2081]: I0702 07:58:28.125043 
2081 reconciler_common.go:289] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-cgroup\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.125220 kubelet[2081]: I0702 07:58:28.125199 2081 reconciler_common.go:289] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-lib-modules\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.125344 kubelet[2081]: I0702 07:58:28.125325 2081 reconciler_common.go:289] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/781c57d8-de2b-4ed9-8693-178e0ba41408-clustermesh-secrets\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.125448 kubelet[2081]: I0702 07:58:28.125431 2081 reconciler_common.go:289] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-run\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.125554 kubelet[2081]: I0702 07:58:28.125536 2081 reconciler_common.go:289] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/781c57d8-de2b-4ed9-8693-178e0ba41408-hubble-tls\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.125687 kubelet[2081]: I0702 07:58:28.125659 2081 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-host-proc-sys-net\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.125687 kubelet[2081]: I0702 07:58:28.125681 2081 reconciler_common.go:289] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: 
\"kubernetes.io/host-path/781c57d8-de2b-4ed9-8693-178e0ba41408-host-proc-sys-kernel\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.125836 kubelet[2081]: I0702 07:58:28.125701 2081 reconciler_common.go:289] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-config-path\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.125836 kubelet[2081]: I0702 07:58:28.125716 2081 reconciler_common.go:289] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/781c57d8-de2b-4ed9-8693-178e0ba41408-cilium-ipsec-secrets\") on node \"ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal\" DevicePath \"\"" Jul 2 07:58:28.328905 kubelet[2081]: E0702 07:58:28.328808 2081 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-5lz68" podUID="7975776e-79b6-4a0e-9f03-52ab481dc130" Jul 2 07:58:28.338506 systemd[1]: Removed slice kubepods-burstable-pod781c57d8_de2b_4ed9_8693_178e0ba41408.slice. Jul 2 07:58:28.420637 systemd[1]: var-lib-kubelet-pods-781c57d8\x2dde2b\x2d4ed9\x2d8693\x2d178e0ba41408-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcc2dv.mount: Deactivated successfully. Jul 2 07:58:28.420801 systemd[1]: var-lib-kubelet-pods-781c57d8\x2dde2b\x2d4ed9\x2d8693\x2d178e0ba41408-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. 
Jul 2 07:58:28.828347 kubelet[2081]: I0702 07:58:28.828137 2081 scope.go:117] "RemoveContainer" containerID="e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037" Jul 2 07:58:28.831658 env[1227]: time="2024-07-02T07:58:28.831221208Z" level=info msg="RemoveContainer for \"e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037\"" Jul 2 07:58:28.838534 env[1227]: time="2024-07-02T07:58:28.838481511Z" level=info msg="RemoveContainer for \"e34ba4653978162c33571d068a6d98e9fee1501b947ff88a3313e8077b330037\" returns successfully" Jul 2 07:58:28.920797 kubelet[2081]: I0702 07:58:28.920750 2081 topology_manager.go:215] "Topology Admit Handler" podUID="046c5b09-70ee-4f34-8abe-f1b1f715dccd" podNamespace="kube-system" podName="cilium-lhn88" Jul 2 07:58:28.921139 kubelet[2081]: E0702 07:58:28.921117 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="781c57d8-de2b-4ed9-8693-178e0ba41408" containerName="mount-cgroup" Jul 2 07:58:28.921278 kubelet[2081]: E0702 07:58:28.921260 2081 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="781c57d8-de2b-4ed9-8693-178e0ba41408" containerName="mount-cgroup" Jul 2 07:58:28.921408 kubelet[2081]: I0702 07:58:28.921391 2081 memory_manager.go:354] "RemoveStaleState removing state" podUID="781c57d8-de2b-4ed9-8693-178e0ba41408" containerName="mount-cgroup" Jul 2 07:58:28.921517 kubelet[2081]: I0702 07:58:28.921501 2081 memory_manager.go:354] "RemoveStaleState removing state" podUID="781c57d8-de2b-4ed9-8693-178e0ba41408" containerName="mount-cgroup" Jul 2 07:58:28.929563 systemd[1]: Created slice kubepods-burstable-pod046c5b09_70ee_4f34_8abe_f1b1f715dccd.slice. 
Jul 2 07:58:29.035550 kubelet[2081]: I0702 07:58:29.035481 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/046c5b09-70ee-4f34-8abe-f1b1f715dccd-etc-cni-netd\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.035550 kubelet[2081]: I0702 07:58:29.035543 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/046c5b09-70ee-4f34-8abe-f1b1f715dccd-clustermesh-secrets\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.035837 kubelet[2081]: I0702 07:58:29.035576 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/046c5b09-70ee-4f34-8abe-f1b1f715dccd-cilium-config-path\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.035837 kubelet[2081]: I0702 07:58:29.035600 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/046c5b09-70ee-4f34-8abe-f1b1f715dccd-bpf-maps\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.035837 kubelet[2081]: I0702 07:58:29.035625 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/046c5b09-70ee-4f34-8abe-f1b1f715dccd-cni-path\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.035837 kubelet[2081]: I0702 07:58:29.035647 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/046c5b09-70ee-4f34-8abe-f1b1f715dccd-hubble-tls\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.035837 kubelet[2081]: I0702 07:58:29.035671 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/046c5b09-70ee-4f34-8abe-f1b1f715dccd-host-proc-sys-net\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.035837 kubelet[2081]: I0702 07:58:29.035694 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/046c5b09-70ee-4f34-8abe-f1b1f715dccd-hostproc\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.035837 kubelet[2081]: I0702 07:58:29.035720 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcbd2\" (UniqueName: \"kubernetes.io/projected/046c5b09-70ee-4f34-8abe-f1b1f715dccd-kube-api-access-pcbd2\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.035837 kubelet[2081]: I0702 07:58:29.035745 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/046c5b09-70ee-4f34-8abe-f1b1f715dccd-cilium-run\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.035837 kubelet[2081]: I0702 07:58:29.035768 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/046c5b09-70ee-4f34-8abe-f1b1f715dccd-lib-modules\") pod 
\"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.035837 kubelet[2081]: I0702 07:58:29.035793 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/046c5b09-70ee-4f34-8abe-f1b1f715dccd-cilium-cgroup\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.036464 kubelet[2081]: I0702 07:58:29.035843 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/046c5b09-70ee-4f34-8abe-f1b1f715dccd-cilium-ipsec-secrets\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.036464 kubelet[2081]: I0702 07:58:29.035870 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/046c5b09-70ee-4f34-8abe-f1b1f715dccd-xtables-lock\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.036464 kubelet[2081]: I0702 07:58:29.035915 2081 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/046c5b09-70ee-4f34-8abe-f1b1f715dccd-host-proc-sys-kernel\") pod \"cilium-lhn88\" (UID: \"046c5b09-70ee-4f34-8abe-f1b1f715dccd\") " pod="kube-system/cilium-lhn88" Jul 2 07:58:29.233697 env[1227]: time="2024-07-02T07:58:29.233637670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lhn88,Uid:046c5b09-70ee-4f34-8abe-f1b1f715dccd,Namespace:kube-system,Attempt:0,}" Jul 2 07:58:29.256792 env[1227]: time="2024-07-02T07:58:29.256693902Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:58:29.256792 env[1227]: time="2024-07-02T07:58:29.256754211Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:58:29.257123 env[1227]: time="2024-07-02T07:58:29.256773402Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:58:29.257631 env[1227]: time="2024-07-02T07:58:29.257515481Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9 pid=3993 runtime=io.containerd.runc.v2 Jul 2 07:58:29.275655 systemd[1]: Started cri-containerd-af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9.scope. Jul 2 07:58:29.310921 env[1227]: time="2024-07-02T07:58:29.310834735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-lhn88,Uid:046c5b09-70ee-4f34-8abe-f1b1f715dccd,Namespace:kube-system,Attempt:0,} returns sandbox id \"af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9\"" Jul 2 07:58:29.315184 env[1227]: time="2024-07-02T07:58:29.315070608Z" level=info msg="CreateContainer within sandbox \"af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Jul 2 07:58:29.331804 env[1227]: time="2024-07-02T07:58:29.331751530Z" level=info msg="CreateContainer within sandbox \"af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"2e6efdf23adb17a7301fa7b7f1c51f09583c415cf2d4e135a4fe4f689efc188b\"" Jul 2 07:58:29.334342 env[1227]: time="2024-07-02T07:58:29.332962490Z" level=info msg="StartContainer for \"2e6efdf23adb17a7301fa7b7f1c51f09583c415cf2d4e135a4fe4f689efc188b\"" Jul 2 07:58:29.359980 systemd[1]: Started 
cri-containerd-2e6efdf23adb17a7301fa7b7f1c51f09583c415cf2d4e135a4fe4f689efc188b.scope. Jul 2 07:58:29.400404 env[1227]: time="2024-07-02T07:58:29.400310412Z" level=info msg="StartContainer for \"2e6efdf23adb17a7301fa7b7f1c51f09583c415cf2d4e135a4fe4f689efc188b\" returns successfully" Jul 2 07:58:29.412217 systemd[1]: cri-containerd-2e6efdf23adb17a7301fa7b7f1c51f09583c415cf2d4e135a4fe4f689efc188b.scope: Deactivated successfully. Jul 2 07:58:29.444946 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2e6efdf23adb17a7301fa7b7f1c51f09583c415cf2d4e135a4fe4f689efc188b-rootfs.mount: Deactivated successfully. Jul 2 07:58:29.453160 env[1227]: time="2024-07-02T07:58:29.453088972Z" level=info msg="shim disconnected" id=2e6efdf23adb17a7301fa7b7f1c51f09583c415cf2d4e135a4fe4f689efc188b Jul 2 07:58:29.453160 env[1227]: time="2024-07-02T07:58:29.453145424Z" level=warning msg="cleaning up after shim disconnected" id=2e6efdf23adb17a7301fa7b7f1c51f09583c415cf2d4e135a4fe4f689efc188b namespace=k8s.io Jul 2 07:58:29.453160 env[1227]: time="2024-07-02T07:58:29.453160603Z" level=info msg="cleaning up dead shim" Jul 2 07:58:29.466195 env[1227]: time="2024-07-02T07:58:29.466112848Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4076 runtime=io.containerd.runc.v2\n" Jul 2 07:58:29.838192 env[1227]: time="2024-07-02T07:58:29.838135389Z" level=info msg="CreateContainer within sandbox \"af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Jul 2 07:58:29.859962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2022751783.mount: Deactivated successfully. 
Jul 2 07:58:29.862029 kubelet[2081]: W0702 07:58:29.861731 2081 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod781c57d8_de2b_4ed9_8693_178e0ba41408.slice/cri-containerd-1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831.scope WatchSource:0}: container "1fe15ba7ff1eb55b708e5b0d7264fcf824fd23fce381b757252d5a000b787831" in namespace "k8s.io": not found Jul 2 07:58:29.891227 env[1227]: time="2024-07-02T07:58:29.891170453Z" level=info msg="CreateContainer within sandbox \"af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2dde2e606578de0fbb4f14351d3881236e543721652d61239d98a9b2f3005b07\"" Jul 2 07:58:29.892279 env[1227]: time="2024-07-02T07:58:29.892238514Z" level=info msg="StartContainer for \"2dde2e606578de0fbb4f14351d3881236e543721652d61239d98a9b2f3005b07\"" Jul 2 07:58:29.919167 systemd[1]: Started cri-containerd-2dde2e606578de0fbb4f14351d3881236e543721652d61239d98a9b2f3005b07.scope. Jul 2 07:58:29.967138 env[1227]: time="2024-07-02T07:58:29.967071376Z" level=info msg="StartContainer for \"2dde2e606578de0fbb4f14351d3881236e543721652d61239d98a9b2f3005b07\" returns successfully" Jul 2 07:58:29.975708 systemd[1]: cri-containerd-2dde2e606578de0fbb4f14351d3881236e543721652d61239d98a9b2f3005b07.scope: Deactivated successfully. 
Jul 2 07:58:30.007332 env[1227]: time="2024-07-02T07:58:30.007267703Z" level=info msg="shim disconnected" id=2dde2e606578de0fbb4f14351d3881236e543721652d61239d98a9b2f3005b07 Jul 2 07:58:30.007332 env[1227]: time="2024-07-02T07:58:30.007334321Z" level=warning msg="cleaning up after shim disconnected" id=2dde2e606578de0fbb4f14351d3881236e543721652d61239d98a9b2f3005b07 namespace=k8s.io Jul 2 07:58:30.007775 env[1227]: time="2024-07-02T07:58:30.007348628Z" level=info msg="cleaning up dead shim" Jul 2 07:58:30.019903 env[1227]: time="2024-07-02T07:58:30.019831310Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4142 runtime=io.containerd.runc.v2\n" Jul 2 07:58:30.329075 kubelet[2081]: E0702 07:58:30.329022 2081 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-5lz68" podUID="7975776e-79b6-4a0e-9f03-52ab481dc130" Jul 2 07:58:30.332385 kubelet[2081]: I0702 07:58:30.332308 2081 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="781c57d8-de2b-4ed9-8693-178e0ba41408" path="/var/lib/kubelet/pods/781c57d8-de2b-4ed9-8693-178e0ba41408/volumes" Jul 2 07:58:30.364991 env[1227]: time="2024-07-02T07:58:30.364936971Z" level=info msg="StopPodSandbox for \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\"" Jul 2 07:58:30.365248 env[1227]: time="2024-07-02T07:58:30.365068850Z" level=info msg="TearDown network for sandbox \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\" successfully" Jul 2 07:58:30.365248 env[1227]: time="2024-07-02T07:58:30.365117201Z" level=info msg="StopPodSandbox for \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\" returns successfully" Jul 2 07:58:30.365630 env[1227]: 
time="2024-07-02T07:58:30.365587167Z" level=info msg="RemovePodSandbox for \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\"" Jul 2 07:58:30.365754 env[1227]: time="2024-07-02T07:58:30.365633069Z" level=info msg="Forcibly stopping sandbox \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\"" Jul 2 07:58:30.365818 env[1227]: time="2024-07-02T07:58:30.365743354Z" level=info msg="TearDown network for sandbox \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\" successfully" Jul 2 07:58:30.376992 env[1227]: time="2024-07-02T07:58:30.376930372Z" level=info msg="RemovePodSandbox \"3e32dd561605d2b7c8b8e3e34a7844fa96e95cb06d74a4a97e979af0a07b9772\" returns successfully" Jul 2 07:58:30.377769 env[1227]: time="2024-07-02T07:58:30.377728565Z" level=info msg="StopPodSandbox for \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\"" Jul 2 07:58:30.377941 env[1227]: time="2024-07-02T07:58:30.377844916Z" level=info msg="TearDown network for sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" successfully" Jul 2 07:58:30.377941 env[1227]: time="2024-07-02T07:58:30.377928772Z" level=info msg="StopPodSandbox for \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" returns successfully" Jul 2 07:58:30.378448 env[1227]: time="2024-07-02T07:58:30.378400431Z" level=info msg="RemovePodSandbox for \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\"" Jul 2 07:58:30.378585 env[1227]: time="2024-07-02T07:58:30.378435864Z" level=info msg="Forcibly stopping sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\"" Jul 2 07:58:30.378585 env[1227]: time="2024-07-02T07:58:30.378537213Z" level=info msg="TearDown network for sandbox \"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" successfully" Jul 2 07:58:30.383526 env[1227]: time="2024-07-02T07:58:30.383462215Z" level=info msg="RemovePodSandbox 
\"7d284363bb85dfeac5326d5a4df2ddda5f42d117670105331d99e3bf99a42ab0\" returns successfully" Jul 2 07:58:30.384213 env[1227]: time="2024-07-02T07:58:30.384173438Z" level=info msg="StopPodSandbox for \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\"" Jul 2 07:58:30.384341 env[1227]: time="2024-07-02T07:58:30.384292887Z" level=info msg="TearDown network for sandbox \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\" successfully" Jul 2 07:58:30.384407 env[1227]: time="2024-07-02T07:58:30.384343923Z" level=info msg="StopPodSandbox for \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\" returns successfully" Jul 2 07:58:30.385065 env[1227]: time="2024-07-02T07:58:30.385019958Z" level=info msg="RemovePodSandbox for \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\"" Jul 2 07:58:30.385309 env[1227]: time="2024-07-02T07:58:30.385245923Z" level=info msg="Forcibly stopping sandbox \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\"" Jul 2 07:58:30.385438 env[1227]: time="2024-07-02T07:58:30.385408550Z" level=info msg="TearDown network for sandbox \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\" successfully" Jul 2 07:58:30.389963 env[1227]: time="2024-07-02T07:58:30.389898959Z" level=info msg="RemovePodSandbox \"410fa4248979d0484b57f8f504b4f45e987d4f2ed4e5a79fe9bf21bee4893f71\" returns successfully" Jul 2 07:58:30.420650 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2885558494.mount: Deactivated successfully. 
Jul 2 07:58:30.478002 kubelet[2081]: E0702 07:58:30.477909 2081 kubelet.go:2900] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Jul 2 07:58:30.841460 env[1227]: time="2024-07-02T07:58:30.841402946Z" level=info msg="CreateContainer within sandbox \"af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Jul 2 07:58:30.874415 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4110749085.mount: Deactivated successfully.
Jul 2 07:58:30.877121 env[1227]: time="2024-07-02T07:58:30.876289177Z" level=info msg="CreateContainer within sandbox \"af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"04bdfa065ea9f07b53f2ef042944801c3c976ecd63110228c8c72ceb022c7de7\""
Jul 2 07:58:30.878965 env[1227]: time="2024-07-02T07:58:30.878921056Z" level=info msg="StartContainer for \"04bdfa065ea9f07b53f2ef042944801c3c976ecd63110228c8c72ceb022c7de7\""
Jul 2 07:58:30.916697 systemd[1]: Started cri-containerd-04bdfa065ea9f07b53f2ef042944801c3c976ecd63110228c8c72ceb022c7de7.scope.
Jul 2 07:58:30.977563 env[1227]: time="2024-07-02T07:58:30.977507939Z" level=info msg="StartContainer for \"04bdfa065ea9f07b53f2ef042944801c3c976ecd63110228c8c72ceb022c7de7\" returns successfully"
Jul 2 07:58:30.979924 systemd[1]: cri-containerd-04bdfa065ea9f07b53f2ef042944801c3c976ecd63110228c8c72ceb022c7de7.scope: Deactivated successfully.
Jul 2 07:58:31.013513 env[1227]: time="2024-07-02T07:58:31.013445407Z" level=info msg="shim disconnected" id=04bdfa065ea9f07b53f2ef042944801c3c976ecd63110228c8c72ceb022c7de7
Jul 2 07:58:31.013513 env[1227]: time="2024-07-02T07:58:31.013513576Z" level=warning msg="cleaning up after shim disconnected" id=04bdfa065ea9f07b53f2ef042944801c3c976ecd63110228c8c72ceb022c7de7 namespace=k8s.io
Jul 2 07:58:31.013931 env[1227]: time="2024-07-02T07:58:31.013529407Z" level=info msg="cleaning up dead shim"
Jul 2 07:58:31.025249 env[1227]: time="2024-07-02T07:58:31.025193150Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4200 runtime=io.containerd.runc.v2\n"
Jul 2 07:58:31.420801 systemd[1]: run-containerd-runc-k8s.io-04bdfa065ea9f07b53f2ef042944801c3c976ecd63110228c8c72ceb022c7de7-runc.WzXX1o.mount: Deactivated successfully.
Jul 2 07:58:31.420976 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-04bdfa065ea9f07b53f2ef042944801c3c976ecd63110228c8c72ceb022c7de7-rootfs.mount: Deactivated successfully.
Jul 2 07:58:31.848334 env[1227]: time="2024-07-02T07:58:31.848194886Z" level=info msg="CreateContainer within sandbox \"af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Jul 2 07:58:31.872571 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3547618715.mount: Deactivated successfully.
Jul 2 07:58:31.880736 env[1227]: time="2024-07-02T07:58:31.880664567Z" level=info msg="CreateContainer within sandbox \"af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e4481b91720b797f46e4fbb137e8f1231de2b2b4ad2c970a93415089455fd74d\""
Jul 2 07:58:31.883761 env[1227]: time="2024-07-02T07:58:31.882477378Z" level=info msg="StartContainer for \"e4481b91720b797f46e4fbb137e8f1231de2b2b4ad2c970a93415089455fd74d\""
Jul 2 07:58:31.915334 systemd[1]: Started cri-containerd-e4481b91720b797f46e4fbb137e8f1231de2b2b4ad2c970a93415089455fd74d.scope.
Jul 2 07:58:31.955122 systemd[1]: cri-containerd-e4481b91720b797f46e4fbb137e8f1231de2b2b4ad2c970a93415089455fd74d.scope: Deactivated successfully.
Jul 2 07:58:31.958475 env[1227]: time="2024-07-02T07:58:31.956909846Z" level=info msg="StartContainer for \"e4481b91720b797f46e4fbb137e8f1231de2b2b4ad2c970a93415089455fd74d\" returns successfully"
Jul 2 07:58:31.987546 env[1227]: time="2024-07-02T07:58:31.987470102Z" level=info msg="shim disconnected" id=e4481b91720b797f46e4fbb137e8f1231de2b2b4ad2c970a93415089455fd74d
Jul 2 07:58:31.987546 env[1227]: time="2024-07-02T07:58:31.987537767Z" level=warning msg="cleaning up after shim disconnected" id=e4481b91720b797f46e4fbb137e8f1231de2b2b4ad2c970a93415089455fd74d namespace=k8s.io
Jul 2 07:58:31.987546 env[1227]: time="2024-07-02T07:58:31.987552665Z" level=info msg="cleaning up dead shim"
Jul 2 07:58:31.999896 env[1227]: time="2024-07-02T07:58:31.999811915Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:58:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4257 runtime=io.containerd.runc.v2\n"
Jul 2 07:58:32.329115 kubelet[2081]: E0702 07:58:32.328847 2081 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-5lz68" podUID="7975776e-79b6-4a0e-9f03-52ab481dc130"
Jul 2 07:58:32.420869 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e4481b91720b797f46e4fbb137e8f1231de2b2b4ad2c970a93415089455fd74d-rootfs.mount: Deactivated successfully.
Jul 2 07:58:32.852368 env[1227]: time="2024-07-02T07:58:32.852310363Z" level=info msg="CreateContainer within sandbox \"af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Jul 2 07:58:32.881088 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3311527545.mount: Deactivated successfully.
Jul 2 07:58:32.890188 env[1227]: time="2024-07-02T07:58:32.890048116Z" level=info msg="CreateContainer within sandbox \"af30c94d02248e4d3619ffdfc6331f6893b4906776229aaf1e7899cbebae3da9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"60149719c065f6160d2c791b55d9b975423af2ba21568501a26d67575eb8028a\""
Jul 2 07:58:32.892151 env[1227]: time="2024-07-02T07:58:32.890852995Z" level=info msg="StartContainer for \"60149719c065f6160d2c791b55d9b975423af2ba21568501a26d67575eb8028a\""
Jul 2 07:58:32.920281 systemd[1]: Started cri-containerd-60149719c065f6160d2c791b55d9b975423af2ba21568501a26d67575eb8028a.scope.
Jul 2 07:58:32.972391 env[1227]: time="2024-07-02T07:58:32.972323418Z" level=info msg="StartContainer for \"60149719c065f6160d2c791b55d9b975423af2ba21568501a26d67575eb8028a\" returns successfully"
Jul 2 07:58:32.993931 kubelet[2081]: W0702 07:58:32.991397 2081 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod046c5b09_70ee_4f34_8abe_f1b1f715dccd.slice/cri-containerd-2e6efdf23adb17a7301fa7b7f1c51f09583c415cf2d4e135a4fe4f689efc188b.scope WatchSource:0}: task 2e6efdf23adb17a7301fa7b7f1c51f09583c415cf2d4e135a4fe4f689efc188b not found: not found
Jul 2 07:58:33.049921 kubelet[2081]: I0702 07:58:33.048242 2081 setters.go:580] "Node became not ready" node="ci-3510-3-5-9fedc1384c34e4486064.c.flatcar-212911.internal" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-07-02T07:58:33Z","lastTransitionTime":"2024-07-02T07:58:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Jul 2 07:58:33.465975 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Jul 2 07:58:34.328874 kubelet[2081]: E0702 07:58:34.328807 2081 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="kube-system/coredns-7db6d8ff4d-5lz68" podUID="7975776e-79b6-4a0e-9f03-52ab481dc130"
Jul 2 07:58:36.130230 kubelet[2081]: W0702 07:58:36.130165 2081 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod046c5b09_70ee_4f34_8abe_f1b1f715dccd.slice/cri-containerd-2dde2e606578de0fbb4f14351d3881236e543721652d61239d98a9b2f3005b07.scope WatchSource:0}: task 2dde2e606578de0fbb4f14351d3881236e543721652d61239d98a9b2f3005b07 not found: not found
Jul 2 07:58:36.710819 systemd-networkd[1029]: lxc_health: Link UP
Jul 2 07:58:36.726927 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Jul 2 07:58:36.729532 systemd-networkd[1029]: lxc_health: Gained carrier
Jul 2 07:58:37.267559 kubelet[2081]: I0702 07:58:37.267481 2081 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/cilium-lhn88" podStartSLOduration=9.267432834 podStartE2EDuration="9.267432834s" podCreationTimestamp="2024-07-02 07:58:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:58:33.873776631 +0000 UTC m=+123.689151269" watchObservedRunningTime="2024-07-02 07:58:37.267432834 +0000 UTC m=+127.082807471"
Jul 2 07:58:37.859093 systemd-networkd[1029]: lxc_health: Gained IPv6LL
Jul 2 07:58:38.157492 systemd[1]: run-containerd-runc-k8s.io-60149719c065f6160d2c791b55d9b975423af2ba21568501a26d67575eb8028a-runc.r5CfPm.mount: Deactivated successfully.
Jul 2 07:58:39.247509 kubelet[2081]: W0702 07:58:39.247428 2081 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod046c5b09_70ee_4f34_8abe_f1b1f715dccd.slice/cri-containerd-04bdfa065ea9f07b53f2ef042944801c3c976ecd63110228c8c72ceb022c7de7.scope WatchSource:0}: task 04bdfa065ea9f07b53f2ef042944801c3c976ecd63110228c8c72ceb022c7de7 not found: not found
Jul 2 07:58:40.453116 systemd[1]: run-containerd-runc-k8s.io-60149719c065f6160d2c791b55d9b975423af2ba21568501a26d67575eb8028a-runc.sCbUaC.mount: Deactivated successfully.
Jul 2 07:58:42.367729 kubelet[2081]: W0702 07:58:42.367651 2081 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod046c5b09_70ee_4f34_8abe_f1b1f715dccd.slice/cri-containerd-e4481b91720b797f46e4fbb137e8f1231de2b2b4ad2c970a93415089455fd74d.scope WatchSource:0}: task e4481b91720b797f46e4fbb137e8f1231de2b2b4ad2c970a93415089455fd74d not found: not found
Jul 2 07:58:42.697082 systemd[1]: run-containerd-runc-k8s.io-60149719c065f6160d2c791b55d9b975423af2ba21568501a26d67575eb8028a-runc.sWLUig.mount: Deactivated successfully.
Jul 2 07:58:42.910040 sshd[3936]: pam_unix(sshd:session): session closed for user core
Jul 2 07:58:42.915770 systemd[1]: sshd@24-10.128.0.47:22-147.75.109.163:37014.service: Deactivated successfully.
Jul 2 07:58:42.916716 systemd[1]: session-25.scope: Deactivated successfully.
Jul 2 07:58:42.917274 systemd-logind[1236]: Session 25 logged out. Waiting for processes to exit.
Jul 2 07:58:42.919170 systemd-logind[1236]: Removed session 25.