May 17 00:53:03.018242 kernel: Linux version 5.15.182-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri May 16 23:09:52 -00 2025
May 17 00:53:03.018273 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:53:03.018288 kernel: BIOS-provided physical RAM map:
May 17 00:53:03.018298 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
May 17 00:53:03.018308 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
May 17 00:53:03.018324 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
May 17 00:53:03.018339 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
May 17 00:53:03.018351 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
May 17 00:53:03.018361 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
May 17 00:53:03.018372 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
May 17 00:53:03.018383 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
May 17 00:53:03.018393 kernel: printk: bootconsole [earlyser0] enabled
May 17 00:53:03.018404 kernel: NX (Execute Disable) protection: active
May 17 00:53:03.018415 kernel: efi: EFI v2.70 by Microsoft
May 17 00:53:03.018431 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
May 17 00:53:03.018444 kernel: random: crng init done
May 17 00:53:03.018455 kernel: SMBIOS 3.1.0 present.
May 17 00:53:03.018467 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024
May 17 00:53:03.018514 kernel: Hypervisor detected: Microsoft Hyper-V
May 17 00:53:03.018526 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
May 17 00:53:03.018538 kernel: Hyper-V Host Build:20348-10.0-1-0.1827
May 17 00:53:03.018549 kernel: Hyper-V: Nested features: 0x1e0101
May 17 00:53:03.018563 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
May 17 00:53:03.018575 kernel: Hyper-V: Using hypercall for remote TLB flush
May 17 00:53:03.018586 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
May 17 00:53:03.018598 kernel: tsc: Marking TSC unstable due to running on Hyper-V
May 17 00:53:03.018610 kernel: tsc: Detected 2593.907 MHz processor
May 17 00:53:03.018623 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
May 17 00:53:03.018635 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
May 17 00:53:03.018646 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
May 17 00:53:03.018658 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
May 17 00:53:03.018670 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
May 17 00:53:03.018684 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
May 17 00:53:03.018696 kernel: Using GB pages for direct mapping
May 17 00:53:03.018708 kernel: Secure boot disabled
May 17 00:53:03.018720 kernel: ACPI: Early table checksum verification disabled
May 17 00:53:03.018732 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
May 17 00:53:03.018743 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:53:03.018755 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:53:03.018768 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
May 17 00:53:03.018788 kernel: ACPI: FACS 0x000000003FFFE000 000040
May 17 00:53:03.018800 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:53:03.018813 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:53:03.018826 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:53:03.018839 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:53:03.018852 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:53:03.018867 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:53:03.018880 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
May 17 00:53:03.018893 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
May 17 00:53:03.018906 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
May 17 00:53:03.018919 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
May 17 00:53:03.018931 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
May 17 00:53:03.018944 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
May 17 00:53:03.018957 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
May 17 00:53:03.018972 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
May 17 00:53:03.018985 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
May 17 00:53:03.018997 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
May 17 00:53:03.019010 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
May 17 00:53:03.019023 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
May 17 00:53:03.019036 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
May 17 00:53:03.019048 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
May 17 00:53:03.019061 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
May 17 00:53:03.019074 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
May 17 00:53:03.019089 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
May 17 00:53:03.019102 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
May 17 00:53:03.019115 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
May 17 00:53:03.019127 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
May 17 00:53:03.019140 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
May 17 00:53:03.019152 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
May 17 00:53:03.019164 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
May 17 00:53:03.019177 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
May 17 00:53:03.019189 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
May 17 00:53:03.019203 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
May 17 00:53:03.019215 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
May 17 00:53:03.019227 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
May 17 00:53:03.019238 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
May 17 00:53:03.019252 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
May 17 00:53:03.019266 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
May 17 00:53:03.019282 kernel: Zone ranges:
May 17 00:53:03.019296 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
May 17 00:53:03.019306 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
May 17 00:53:03.019322 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
May 17 00:53:03.019334 kernel: Movable zone start for each node
May 17 00:53:03.019347 kernel: Early memory node ranges
May 17 00:53:03.019359 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
May 17 00:53:03.019372 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
May 17 00:53:03.019384 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
May 17 00:53:03.019396 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
May 17 00:53:03.019409 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
May 17 00:53:03.019421 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
May 17 00:53:03.019436 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
May 17 00:53:03.019448 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
May 17 00:53:03.019460 kernel: ACPI: PM-Timer IO Port: 0x408
May 17 00:53:03.019505 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
May 17 00:53:03.019518 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
May 17 00:53:03.019530 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
May 17 00:53:03.019543 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
May 17 00:53:03.019555 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
May 17 00:53:03.019567 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
May 17 00:53:03.019583 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
May 17 00:53:03.019595 kernel: Booting paravirtualized kernel on Hyper-V
May 17 00:53:03.019608 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
May 17 00:53:03.019620 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
May 17 00:53:03.019633 kernel: percpu: Embedded 56 pages/cpu s188696 r8192 d32488 u1048576
May 17 00:53:03.019645 kernel: pcpu-alloc: s188696 r8192 d32488 u1048576 alloc=1*2097152
May 17 00:53:03.019657 kernel: pcpu-alloc: [0] 0 1
May 17 00:53:03.019668 kernel: Hyper-V: PV spinlocks enabled
May 17 00:53:03.019681 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
May 17 00:53:03.019696 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
May 17 00:53:03.019709 kernel: Policy zone: Normal
May 17 00:53:03.019723 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:53:03.019735 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
May 17 00:53:03.019747 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
May 17 00:53:03.019760 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
May 17 00:53:03.019772 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
May 17 00:53:03.019785 kernel: Memory: 8079144K/8387460K available (12294K kernel code, 2276K rwdata, 13724K rodata, 47472K init, 4108K bss, 308056K reserved, 0K cma-reserved)
May 17 00:53:03.019800 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
May 17 00:53:03.019813 kernel: ftrace: allocating 34585 entries in 136 pages
May 17 00:53:03.019834 kernel: ftrace: allocated 136 pages with 2 groups
May 17 00:53:03.019849 kernel: rcu: Hierarchical RCU implementation.
May 17 00:53:03.019863 kernel: rcu: RCU event tracing is enabled.
May 17 00:53:03.019876 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
May 17 00:53:03.019889 kernel: Rude variant of Tasks RCU enabled.
May 17 00:53:03.019902 kernel: Tracing variant of Tasks RCU enabled.
May 17 00:53:03.019915 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
May 17 00:53:03.019928 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
May 17 00:53:03.019941 kernel: Using NULL legacy PIC
May 17 00:53:03.019957 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
May 17 00:53:03.019970 kernel: Console: colour dummy device 80x25
May 17 00:53:03.019983 kernel: printk: console [tty1] enabled
May 17 00:53:03.019996 kernel: printk: console [ttyS0] enabled
May 17 00:53:03.020009 kernel: printk: bootconsole [earlyser0] disabled
May 17 00:53:03.020024 kernel: ACPI: Core revision 20210730
May 17 00:53:03.020038 kernel: Failed to register legacy timer interrupt
May 17 00:53:03.020051 kernel: APIC: Switch to symmetric I/O mode setup
May 17 00:53:03.020064 kernel: Hyper-V: Using IPI hypercalls
May 17 00:53:03.020077 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593907)
May 17 00:53:03.020090 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
May 17 00:53:03.020103 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4
May 17 00:53:03.020117 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
May 17 00:53:03.020130 kernel: Spectre V2 : Mitigation: Retpolines
May 17 00:53:03.020143 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
May 17 00:53:03.020158 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
May 17 00:53:03.020171 kernel: RETBleed: Vulnerable
May 17 00:53:03.020184 kernel: Speculative Store Bypass: Vulnerable
May 17 00:53:03.020196 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:53:03.020209 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
May 17 00:53:03.020222 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
May 17 00:53:03.020235 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
May 17 00:53:03.020248 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
May 17 00:53:03.020261 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
May 17 00:53:03.020274 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
May 17 00:53:03.020289 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
May 17 00:53:03.020302 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
May 17 00:53:03.020315 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
May 17 00:53:03.020328 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
May 17 00:53:03.020341 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
May 17 00:53:03.020354 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
May 17 00:53:03.020367 kernel: Freeing SMP alternatives memory: 32K
May 17 00:53:03.020379 kernel: pid_max: default: 32768 minimum: 301
May 17 00:53:03.020392 kernel: LSM: Security Framework initializing
May 17 00:53:03.020405 kernel: SELinux: Initializing.
May 17 00:53:03.020418 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:53:03.020431 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
May 17 00:53:03.020446 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
May 17 00:53:03.020459 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
May 17 00:53:03.020480 kernel: signal: max sigframe size: 3632
May 17 00:53:03.020493 kernel: rcu: Hierarchical SRCU implementation.
May 17 00:53:03.020506 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
May 17 00:53:03.020519 kernel: smp: Bringing up secondary CPUs ...
May 17 00:53:03.020532 kernel: x86: Booting SMP configuration:
May 17 00:53:03.020545 kernel: .... node #0, CPUs: #1
May 17 00:53:03.020559 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
May 17 00:53:03.020576 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
May 17 00:53:03.020589 kernel: smp: Brought up 1 node, 2 CPUs
May 17 00:53:03.020602 kernel: smpboot: Max logical packages: 1
May 17 00:53:03.020615 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
May 17 00:53:03.020627 kernel: devtmpfs: initialized
May 17 00:53:03.020640 kernel: x86/mm: Memory block size: 128MB
May 17 00:53:03.020653 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
May 17 00:53:03.020667 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
May 17 00:53:03.020680 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
May 17 00:53:03.020695 kernel: pinctrl core: initialized pinctrl subsystem
May 17 00:53:03.020708 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
May 17 00:53:03.020721 kernel: audit: initializing netlink subsys (disabled)
May 17 00:53:03.020734 kernel: audit: type=2000 audit(1747443182.023:1): state=initialized audit_enabled=0 res=1
May 17 00:53:03.020747 kernel: thermal_sys: Registered thermal governor 'step_wise'
May 17 00:53:03.020760 kernel: thermal_sys: Registered thermal governor 'user_space'
May 17 00:53:03.020773 kernel: cpuidle: using governor menu
May 17 00:53:03.020785 kernel: ACPI: bus type PCI registered
May 17 00:53:03.020798 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
May 17 00:53:03.020814 kernel: dca service started, version 1.12.1
May 17 00:53:03.020827 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
May 17 00:53:03.020840 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
May 17 00:53:03.020853 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
May 17 00:53:03.020866 kernel: ACPI: Added _OSI(Module Device)
May 17 00:53:03.020880 kernel: ACPI: Added _OSI(Processor Device)
May 17 00:53:03.020892 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
May 17 00:53:03.020905 kernel: ACPI: Added _OSI(Processor Aggregator Device)
May 17 00:53:03.020918 kernel: ACPI: Added _OSI(Linux-Dell-Video)
May 17 00:53:03.020934 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
May 17 00:53:03.020947 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
May 17 00:53:03.020960 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
May 17 00:53:03.020973 kernel: ACPI: Interpreter enabled
May 17 00:53:03.020985 kernel: ACPI: PM: (supports S0 S5)
May 17 00:53:03.020998 kernel: ACPI: Using IOAPIC for interrupt routing
May 17 00:53:03.021012 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
May 17 00:53:03.021025 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
May 17 00:53:03.021038 kernel: iommu: Default domain type: Translated
May 17 00:53:03.021053 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
May 17 00:53:03.021066 kernel: vgaarb: loaded
May 17 00:53:03.021079 kernel: pps_core: LinuxPPS API ver. 1 registered
May 17 00:53:03.021092 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
May 17 00:53:03.021105 kernel: PTP clock support registered
May 17 00:53:03.021118 kernel: Registered efivars operations
May 17 00:53:03.021131 kernel: PCI: Using ACPI for IRQ routing
May 17 00:53:03.021144 kernel: PCI: System does not support PCI
May 17 00:53:03.021156 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
May 17 00:53:03.021172 kernel: VFS: Disk quotas dquot_6.6.0
May 17 00:53:03.021185 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
May 17 00:53:03.021198 kernel: pnp: PnP ACPI init
May 17 00:53:03.021211 kernel: pnp: PnP ACPI: found 3 devices
May 17 00:53:03.021224 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
May 17 00:53:03.021237 kernel: NET: Registered PF_INET protocol family
May 17 00:53:03.021250 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
May 17 00:53:03.021263 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
May 17 00:53:03.021276 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
May 17 00:53:03.021292 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
May 17 00:53:03.021305 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
May 17 00:53:03.021318 kernel: TCP: Hash tables configured (established 65536 bind 65536)
May 17 00:53:03.021331 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 17 00:53:03.021344 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
May 17 00:53:03.021357 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
May 17 00:53:03.021370 kernel: NET: Registered PF_XDP protocol family
May 17 00:53:03.021383 kernel: PCI: CLS 0 bytes, default 64
May 17 00:53:03.021396 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
May 17 00:53:03.021411 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
May 17 00:53:03.021424 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
May 17 00:53:03.021438 kernel: Initialise system trusted keyrings
May 17 00:53:03.021450 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
May 17 00:53:03.021463 kernel: Key type asymmetric registered
May 17 00:53:03.021483 kernel: Asymmetric key parser 'x509' registered
May 17 00:53:03.021496 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
May 17 00:53:03.021509 kernel: io scheduler mq-deadline registered
May 17 00:53:03.021522 kernel: io scheduler kyber registered
May 17 00:53:03.021537 kernel: io scheduler bfq registered
May 17 00:53:03.021550 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
May 17 00:53:03.021563 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
May 17 00:53:03.021576 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
May 17 00:53:03.021589 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
May 17 00:53:03.021603 kernel: i8042: PNP: No PS/2 controller found.
May 17 00:53:03.021759 kernel: rtc_cmos 00:02: registered as rtc0
May 17 00:53:03.021867 kernel: rtc_cmos 00:02: setting system clock to 2025-05-17T00:53:02 UTC (1747443182)
May 17 00:53:03.021968 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
May 17 00:53:03.021983 kernel: intel_pstate: CPU model not supported
May 17 00:53:03.021996 kernel: efifb: probing for efifb
May 17 00:53:03.022008 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
May 17 00:53:03.022021 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
May 17 00:53:03.022033 kernel: efifb: scrolling: redraw
May 17 00:53:03.022046 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
May 17 00:53:03.022058 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:53:03.022073 kernel: fb0: EFI VGA frame buffer device
May 17 00:53:03.022086 kernel: pstore: Registered efi as persistent store backend
May 17 00:53:03.022098 kernel: NET: Registered PF_INET6 protocol family
May 17 00:53:03.022111 kernel: Segment Routing with IPv6
May 17 00:53:03.022123 kernel: In-situ OAM (IOAM) with IPv6
May 17 00:53:03.022135 kernel: NET: Registered PF_PACKET protocol family
May 17 00:53:03.022147 kernel: Key type dns_resolver registered
May 17 00:53:03.022159 kernel: IPI shorthand broadcast: enabled
May 17 00:53:03.022172 kernel: sched_clock: Marking stable (804587500, 20303600)->(991271000, -166379900)
May 17 00:53:03.022184 kernel: registered taskstats version 1
May 17 00:53:03.022199 kernel: Loading compiled-in X.509 certificates
May 17 00:53:03.022211 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.182-flatcar: 01ca23caa8e5879327538f9287e5164b3e97ac0c'
May 17 00:53:03.022223 kernel: Key type .fscrypt registered
May 17 00:53:03.022235 kernel: Key type fscrypt-provisioning registered
May 17 00:53:03.022247 kernel: pstore: Using crash dump compression: deflate
May 17 00:53:03.022260 kernel: ima: No TPM chip found, activating TPM-bypass!
May 17 00:53:03.022272 kernel: ima: Allocated hash algorithm: sha1
May 17 00:53:03.022284 kernel: ima: No architecture policies found
May 17 00:53:03.022299 kernel: clk: Disabling unused clocks
May 17 00:53:03.022311 kernel: Freeing unused kernel image (initmem) memory: 47472K
May 17 00:53:03.022323 kernel: Write protecting the kernel read-only data: 28672k
May 17 00:53:03.022336 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
May 17 00:53:03.022348 kernel: Freeing unused kernel image (rodata/data gap) memory: 612K
May 17 00:53:03.022361 kernel: Run /init as init process
May 17 00:53:03.022373 kernel: with arguments:
May 17 00:53:03.022385 kernel: /init
May 17 00:53:03.022397 kernel: with environment:
May 17 00:53:03.022411 kernel: HOME=/
May 17 00:53:03.022423 kernel: TERM=linux
May 17 00:53:03.022435 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
May 17 00:53:03.022450 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
May 17 00:53:03.022465 systemd[1]: Detected virtualization microsoft.
May 17 00:53:03.042518 systemd[1]: Detected architecture x86-64.
May 17 00:53:03.042536 systemd[1]: Running in initrd.
May 17 00:53:03.042549 systemd[1]: No hostname configured, using default hostname.
May 17 00:53:03.042566 systemd[1]: Hostname set to .
May 17 00:53:03.042580 systemd[1]: Initializing machine ID from random generator.
May 17 00:53:03.042594 systemd[1]: Queued start job for default target initrd.target.
May 17 00:53:03.042606 systemd[1]: Started systemd-ask-password-console.path.
May 17 00:53:03.042619 systemd[1]: Reached target cryptsetup.target.
May 17 00:53:03.042632 systemd[1]: Reached target paths.target.
May 17 00:53:03.042645 systemd[1]: Reached target slices.target.
May 17 00:53:03.042658 systemd[1]: Reached target swap.target.
May 17 00:53:03.042673 systemd[1]: Reached target timers.target.
May 17 00:53:03.042687 systemd[1]: Listening on iscsid.socket.
May 17 00:53:03.042700 systemd[1]: Listening on iscsiuio.socket.
May 17 00:53:03.042713 systemd[1]: Listening on systemd-journald-audit.socket.
May 17 00:53:03.042726 systemd[1]: Listening on systemd-journald-dev-log.socket.
May 17 00:53:03.042739 systemd[1]: Listening on systemd-journald.socket.
May 17 00:53:03.042753 systemd[1]: Listening on systemd-networkd.socket.
May 17 00:53:03.042766 systemd[1]: Listening on systemd-udevd-control.socket.
May 17 00:53:03.042781 systemd[1]: Listening on systemd-udevd-kernel.socket.
May 17 00:53:03.042794 systemd[1]: Reached target sockets.target.
May 17 00:53:03.042807 systemd[1]: Starting kmod-static-nodes.service...
May 17 00:53:03.042820 systemd[1]: Finished network-cleanup.service.
May 17 00:53:03.042834 systemd[1]: Starting systemd-fsck-usr.service...
May 17 00:53:03.042847 systemd[1]: Starting systemd-journald.service...
May 17 00:53:03.042859 systemd[1]: Starting systemd-modules-load.service...
May 17 00:53:03.042873 systemd[1]: Starting systemd-resolved.service...
May 17 00:53:03.042886 systemd[1]: Starting systemd-vconsole-setup.service...
May 17 00:53:03.042901 systemd[1]: Finished kmod-static-nodes.service.
May 17 00:53:03.042914 kernel: audit: type=1130 audit(1747443183.024:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.042928 systemd[1]: Finished systemd-fsck-usr.service.
May 17 00:53:03.042945 systemd-journald[183]: Journal started
May 17 00:53:03.043013 systemd-journald[183]: Runtime Journal (/run/log/journal/37fa4a16f1a14c688aa66af062a2760a) is 8.0M, max 159.0M, 151.0M free.
May 17 00:53:03.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.039629 systemd-modules-load[184]: Inserted module 'overlay'
May 17 00:53:03.054363 systemd-resolved[185]: Positive Trust Anchors:
May 17 00:53:03.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.055548 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:53:03.073154 kernel: audit: type=1130 audit(1747443183.055:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.073178 systemd[1]: Started systemd-journald.service.
May 17 00:53:03.055587 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:53:03.068643 systemd-resolved[185]: Defaulting to hostname 'linux'.
May 17 00:53:03.090845 systemd[1]: Started systemd-resolved.service.
May 17 00:53:03.095237 systemd[1]: Finished systemd-vconsole-setup.service.
May 17 00:53:03.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.108488 kernel: audit: type=1130 audit(1747443183.090:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.108518 kernel: audit: type=1130 audit(1747443183.094:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.123016 systemd[1]: Reached target nss-lookup.target.
May 17 00:53:03.145689 kernel: audit: type=1130 audit(1747443183.122:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.145715 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
May 17 00:53:03.144836 systemd[1]: Starting dracut-cmdline-ask.service...
May 17 00:53:03.150507 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
May 17 00:53:03.156427 kernel: Bridge firewalling registered
May 17 00:53:03.158130 systemd-modules-load[184]: Inserted module 'br_netfilter'
May 17 00:53:03.163824 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
May 17 00:53:03.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.180491 kernel: audit: type=1130 audit(1747443183.163:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.185150 systemd[1]: Finished dracut-cmdline-ask.service.
May 17 00:53:03.189007 systemd[1]: Starting dracut-cmdline.service...
May 17 00:53:03.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.212759 kernel: audit: type=1130 audit(1747443183.187:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:03.213245 dracut-cmdline[201]: dracut-dracut-053
May 17 00:53:03.217649 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4aad7caeadb0359f379975532748a0b4ae6bb9b229507353e0f5ae84cb9335a0
May 17 00:53:03.233164 kernel: SCSI subsystem initialized
May 17 00:53:03.258678 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
May 17 00:53:03.258715 kernel: device-mapper: uevent: version 1.0.3 May 17 00:53:03.263672 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com May 17 00:53:03.267769 systemd-modules-load[184]: Inserted module 'dm_multipath' May 17 00:53:03.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:03.271002 systemd[1]: Finished systemd-modules-load.service. May 17 00:53:03.291538 kernel: audit: type=1130 audit(1747443183.272:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:03.291563 kernel: Loading iSCSI transport class v2.0-870. May 17 00:53:03.273802 systemd[1]: Starting systemd-sysctl.service... May 17 00:53:03.295778 systemd[1]: Finished systemd-sysctl.service. May 17 00:53:03.297000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:03.310489 kernel: audit: type=1130 audit(1747443183.297:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:03.326489 kernel: iscsi: registered transport (tcp) May 17 00:53:03.353924 kernel: iscsi: registered transport (qla4xxx) May 17 00:53:03.353971 kernel: QLogic iSCSI HBA Driver May 17 00:53:03.382266 systemd[1]: Finished dracut-cmdline.service. May 17 00:53:03.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:53:03.385448 systemd[1]: Starting dracut-pre-udev.service... May 17 00:53:03.437495 kernel: raid6: avx512x4 gen() 18389 MB/s May 17 00:53:03.457489 kernel: raid6: avx512x4 xor() 7647 MB/s May 17 00:53:03.477484 kernel: raid6: avx512x2 gen() 18353 MB/s May 17 00:53:03.497489 kernel: raid6: avx512x2 xor() 29754 MB/s May 17 00:53:03.517484 kernel: raid6: avx512x1 gen() 18386 MB/s May 17 00:53:03.537484 kernel: raid6: avx512x1 xor() 26837 MB/s May 17 00:53:03.557486 kernel: raid6: avx2x4 gen() 18303 MB/s May 17 00:53:03.577485 kernel: raid6: avx2x4 xor() 7924 MB/s May 17 00:53:03.596483 kernel: raid6: avx2x2 gen() 18309 MB/s May 17 00:53:03.616488 kernel: raid6: avx2x2 xor() 22397 MB/s May 17 00:53:03.635484 kernel: raid6: avx2x1 gen() 13765 MB/s May 17 00:53:03.654483 kernel: raid6: avx2x1 xor() 19505 MB/s May 17 00:53:03.674486 kernel: raid6: sse2x4 gen() 11755 MB/s May 17 00:53:03.694483 kernel: raid6: sse2x4 xor() 7405 MB/s May 17 00:53:03.714483 kernel: raid6: sse2x2 gen() 13013 MB/s May 17 00:53:03.734485 kernel: raid6: sse2x2 xor() 7707 MB/s May 17 00:53:03.753483 kernel: raid6: sse2x1 gen() 11692 MB/s May 17 00:53:03.778879 kernel: raid6: sse2x1 xor() 5931 MB/s May 17 00:53:03.778929 kernel: raid6: using algorithm avx512x4 gen() 18389 MB/s May 17 00:53:03.778940 kernel: raid6: .... xor() 7647 MB/s, rmw enabled May 17 00:53:03.786555 kernel: raid6: using avx512x2 recovery algorithm May 17 00:53:03.803499 kernel: xor: automatically using best checksumming function avx May 17 00:53:03.899496 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no May 17 00:53:03.907283 systemd[1]: Finished dracut-pre-udev.service. May 17 00:53:03.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:53:03.911000 audit: BPF prog-id=7 op=LOAD May 17 00:53:03.911000 audit: BPF prog-id=8 op=LOAD May 17 00:53:03.911768 systemd[1]: Starting systemd-udevd.service... May 17 00:53:03.926468 systemd-udevd[384]: Using default interface naming scheme 'v252'. May 17 00:53:03.932946 systemd[1]: Started systemd-udevd.service. May 17 00:53:03.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:03.937487 systemd[1]: Starting dracut-pre-trigger.service... May 17 00:53:03.957174 dracut-pre-trigger[390]: rd.md=0: removing MD RAID activation May 17 00:53:03.986683 systemd[1]: Finished dracut-pre-trigger.service. May 17 00:53:03.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:03.992271 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:53:04.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:04.026494 systemd[1]: Finished systemd-udev-trigger.service. 
May 17 00:53:04.074490 kernel: cryptd: max_cpu_qlen set to 1000 May 17 00:53:04.087488 kernel: hv_vmbus: Vmbus version:5.2 May 17 00:53:04.099488 kernel: hv_vmbus: registering driver hyperv_keyboard May 17 00:53:04.124487 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 May 17 00:53:04.135489 kernel: hid: raw HID events driver (C) Jiri Kosina May 17 00:53:04.149863 kernel: hv_vmbus: registering driver hv_netvsc May 17 00:53:04.149907 kernel: hv_vmbus: registering driver hv_storvsc May 17 00:53:04.154489 kernel: AVX2 version of gcm_enc/dec engaged. May 17 00:53:04.162386 kernel: scsi host1: storvsc_host_t May 17 00:53:04.162458 kernel: scsi host0: storvsc_host_t May 17 00:53:04.171533 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 17 00:53:04.171607 kernel: hv_vmbus: registering driver hid_hyperv May 17 00:53:04.174833 kernel: AES CTR mode by8 optimization enabled May 17 00:53:04.174863 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 May 17 00:53:04.183888 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 17 00:53:04.183935 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 17 00:53:04.218314 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 17 00:53:04.230358 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 17 00:53:04.230381 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 17 00:53:04.245571 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 17 00:53:04.245743 kernel: sd 0:0:0:0: [sda] Write Protect is off May 17 00:53:04.245898 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 17 00:53:04.246052 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 17 00:53:04.246202 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 17 
00:53:04.246355 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:53:04.246374 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 17 00:53:04.345904 kernel: hv_netvsc 7c1e521e-4cae-7c1e-521e-4cae7c1e521e eth0: VF slot 1 added May 17 00:53:04.354942 kernel: hv_vmbus: registering driver hv_pci May 17 00:53:04.362052 kernel: hv_pci ea506266-01a4-4fe6-810d-78b0a920575d: PCI VMBus probing: Using version 0x10004 May 17 00:53:04.438810 kernel: hv_pci ea506266-01a4-4fe6-810d-78b0a920575d: PCI host bridge to bus 01a4:00 May 17 00:53:04.438985 kernel: pci_bus 01a4:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] May 17 00:53:04.439156 kernel: pci_bus 01a4:00: No busn resource found for root bus, will use [bus 00-ff] May 17 00:53:04.439304 kernel: pci 01a4:00:02.0: [15b3:1016] type 00 class 0x020000 May 17 00:53:04.439514 kernel: pci 01a4:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] May 17 00:53:04.439681 kernel: pci 01a4:00:02.0: enabling Extended Tags May 17 00:53:04.439835 kernel: pci 01a4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 01a4:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 17 00:53:04.439989 kernel: pci_bus 01a4:00: busn_res: [bus 00-ff] end is updated to 00 May 17 00:53:04.440138 kernel: pci 01a4:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] May 17 00:53:04.531494 kernel: mlx5_core 01a4:00:02.0: firmware version: 14.30.5000 May 17 00:53:04.788384 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (440) May 17 00:53:04.788413 kernel: mlx5_core 01a4:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) May 17 00:53:04.788613 kernel: mlx5_core 01a4:00:02.0: Supported tc offload range - chains: 1, prios: 1 May 17 00:53:04.788804 kernel: mlx5_core 01a4:00:02.0: mlx5e_tc_post_act_init:40:(pid 187): firmware level support is missing May 17 00:53:04.788964 kernel: hv_netvsc 7c1e521e-4cae-7c1e-521e-4cae7c1e521e eth0: VF 
registering: eth1 May 17 00:53:04.789110 kernel: mlx5_core 01a4:00:02.0 eth1: joined to eth0 May 17 00:53:04.618686 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. May 17 00:53:04.690211 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. May 17 00:53:04.759171 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. May 17 00:53:04.766274 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. May 17 00:53:04.774080 systemd[1]: Starting disk-uuid.service... May 17 00:53:04.799194 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. May 17 00:53:04.816514 kernel: mlx5_core 01a4:00:02.0 enP420s1: renamed from eth1 May 17 00:53:05.799361 disk-uuid[553]: The operation has completed successfully. May 17 00:53:05.802092 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 17 00:53:05.868589 systemd[1]: disk-uuid.service: Deactivated successfully. May 17 00:53:05.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:05.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:05.868702 systemd[1]: Finished disk-uuid.service. May 17 00:53:05.880711 systemd[1]: Starting verity-setup.service... May 17 00:53:05.911673 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 17 00:53:06.116866 systemd[1]: Found device dev-mapper-usr.device. May 17 00:53:06.121428 systemd[1]: Finished verity-setup.service. May 17 00:53:06.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:53:06.126221 systemd[1]: Mounting sysusr-usr.mount... May 17 00:53:06.204484 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. May 17 00:53:06.204801 systemd[1]: Mounted sysusr-usr.mount. May 17 00:53:06.208650 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. May 17 00:53:06.212484 systemd[1]: Starting ignition-setup.service... May 17 00:53:06.215147 systemd[1]: Starting parse-ip-for-networkd.service... May 17 00:53:06.232218 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:53:06.232260 kernel: BTRFS info (device sda6): using free space tree May 17 00:53:06.232274 kernel: BTRFS info (device sda6): has skinny extents May 17 00:53:06.286267 systemd[1]: Finished parse-ip-for-networkd.service. May 17 00:53:06.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:06.291000 audit: BPF prog-id=9 op=LOAD May 17 00:53:06.292270 systemd[1]: Starting systemd-networkd.service... May 17 00:53:06.314911 systemd-networkd[824]: lo: Link UP May 17 00:53:06.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:06.314920 systemd-networkd[824]: lo: Gained carrier May 17 00:53:06.315778 systemd-networkd[824]: Enumeration completed May 17 00:53:06.315844 systemd[1]: Started systemd-networkd.service. May 17 00:53:06.318801 systemd[1]: Reached target network.target. May 17 00:53:06.322335 systemd[1]: Starting iscsiuio.service... May 17 00:53:06.326053 systemd-networkd[824]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 17 00:53:06.339793 systemd[1]: Started iscsiuio.service. 
May 17 00:53:06.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:06.342865 systemd[1]: Starting iscsid.service... May 17 00:53:06.347582 systemd[1]: mnt-oem.mount: Deactivated successfully. May 17 00:53:06.352535 iscsid[835]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi May 17 00:53:06.352535 iscsid[835]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. May 17 00:53:06.352535 iscsid[835]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. May 17 00:53:06.352535 iscsid[835]: If using hardware iscsi like qla4xxx this message can be ignored. May 17 00:53:06.352535 iscsid[835]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi May 17 00:53:06.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:06.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:06.387785 iscsid[835]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf May 17 00:53:06.354020 systemd[1]: Started iscsid.service. May 17 00:53:06.358329 systemd[1]: Starting dracut-initqueue.service... May 17 00:53:06.371917 systemd[1]: Finished dracut-initqueue.service. 
May 17 00:53:06.376143 systemd[1]: Reached target remote-fs-pre.target. May 17 00:53:06.380844 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:53:06.385860 systemd[1]: Reached target remote-fs.target. May 17 00:53:06.388347 systemd[1]: Starting dracut-pre-mount.service... May 17 00:53:06.406484 kernel: mlx5_core 01a4:00:02.0 enP420s1: Link up May 17 00:53:06.410313 systemd[1]: Finished dracut-pre-mount.service. May 17 00:53:06.413000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:06.444493 kernel: hv_netvsc 7c1e521e-4cae-7c1e-521e-4cae7c1e521e eth0: Data path switched to VF: enP420s1 May 17 00:53:06.449537 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready May 17 00:53:06.449741 systemd-networkd[824]: enP420s1: Link UP May 17 00:53:06.451870 systemd-networkd[824]: eth0: Link UP May 17 00:53:06.453849 systemd-networkd[824]: eth0: Gained carrier May 17 00:53:06.459293 systemd-networkd[824]: enP420s1: Gained carrier May 17 00:53:06.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:06.459312 systemd[1]: Finished ignition-setup.service. May 17 00:53:06.462911 systemd[1]: Starting ignition-fetch-offline.service... 
May 17 00:53:06.479534 systemd-networkd[824]: eth0: DHCPv4 address 10.200.4.13/24, gateway 10.200.4.1 acquired from 168.63.129.16 May 17 00:53:07.799697 systemd-networkd[824]: eth0: Gained IPv6LL May 17 00:53:09.630927 ignition[851]: Ignition 2.14.0 May 17 00:53:09.630945 ignition[851]: Stage: fetch-offline May 17 00:53:09.631036 ignition[851]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:53:09.631088 ignition[851]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:53:09.751615 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:53:09.751796 ignition[851]: parsed url from cmdline: "" May 17 00:53:09.751800 ignition[851]: no config URL provided May 17 00:53:09.751806 ignition[851]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:53:09.787787 kernel: kauditd_printk_skb: 18 callbacks suppressed May 17 00:53:09.787811 kernel: audit: type=1130 audit(1747443189.762:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:09.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:09.759023 systemd[1]: Finished ignition-fetch-offline.service. May 17 00:53:09.751814 ignition[851]: no config at "/usr/lib/ignition/user.ign" May 17 00:53:09.764108 systemd[1]: Starting ignition-fetch.service... 
May 17 00:53:09.751820 ignition[851]: failed to fetch config: resource requires networking May 17 00:53:09.752132 ignition[851]: Ignition finished successfully May 17 00:53:09.772409 ignition[857]: Ignition 2.14.0 May 17 00:53:09.772414 ignition[857]: Stage: fetch May 17 00:53:09.772536 ignition[857]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:53:09.772562 ignition[857]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:53:09.781434 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:53:09.781700 ignition[857]: parsed url from cmdline: "" May 17 00:53:09.781705 ignition[857]: no config URL provided May 17 00:53:09.781713 ignition[857]: reading system config file "/usr/lib/ignition/user.ign" May 17 00:53:09.781723 ignition[857]: no config at "/usr/lib/ignition/user.ign" May 17 00:53:09.781760 ignition[857]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 17 00:53:09.856515 ignition[857]: GET result: OK May 17 00:53:09.856675 ignition[857]: config has been read from IMDS userdata May 17 00:53:09.856715 ignition[857]: parsing config with SHA512: 0766903103091312c0c228855f1e2c2cf59b41679f93d121e921138e54d2133441c0b4e2984354975ceced0bce1e97be34d255e9058419bf1f36db712d5f61a8 May 17 00:53:09.864240 unknown[857]: fetched base config from "system" May 17 00:53:09.864252 unknown[857]: fetched base config from "system" May 17 00:53:09.864259 unknown[857]: fetched user config from "azure" May 17 00:53:09.870363 ignition[857]: fetch: fetch complete May 17 00:53:09.870373 ignition[857]: fetch: fetch passed May 17 00:53:09.870425 ignition[857]: Ignition finished successfully May 17 00:53:09.876024 systemd[1]: Finished ignition-fetch.service. 
May 17 00:53:09.877000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:09.891496 kernel: audit: type=1130 audit(1747443189.877:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:09.890285 systemd[1]: Starting ignition-kargs.service... May 17 00:53:09.901169 ignition[863]: Ignition 2.14.0 May 17 00:53:09.901179 ignition[863]: Stage: kargs May 17 00:53:09.901312 ignition[863]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:53:09.901345 ignition[863]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:53:09.905206 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:53:09.907826 ignition[863]: kargs: kargs passed May 17 00:53:09.910160 systemd[1]: Finished ignition-kargs.service. May 17 00:53:09.926975 kernel: audit: type=1130 audit(1747443189.913:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:09.913000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:09.907870 ignition[863]: Ignition finished successfully May 17 00:53:09.929106 systemd[1]: Starting ignition-disks.service... 
May 17 00:53:09.931938 ignition[869]: Ignition 2.14.0 May 17 00:53:09.934281 ignition[869]: Stage: disks May 17 00:53:09.935250 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:53:09.935268 ignition[869]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:53:09.938723 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:53:09.940770 ignition[869]: disks: disks passed May 17 00:53:09.943000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:09.941592 systemd[1]: Finished ignition-disks.service. May 17 00:53:09.961255 kernel: audit: type=1130 audit(1747443189.943:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:09.940813 ignition[869]: Ignition finished successfully May 17 00:53:09.954639 systemd[1]: Reached target initrd-root-device.target. May 17 00:53:09.959365 systemd[1]: Reached target local-fs-pre.target. May 17 00:53:09.961240 systemd[1]: Reached target local-fs.target. May 17 00:53:09.963058 systemd[1]: Reached target sysinit.target. May 17 00:53:09.964776 systemd[1]: Reached target basic.target. May 17 00:53:09.969021 systemd[1]: Starting systemd-fsck-root.service... May 17 00:53:10.020576 systemd-fsck[877]: ROOT: clean, 619/7326000 files, 481079/7359488 blocks May 17 00:53:10.029077 systemd[1]: Finished systemd-fsck-root.service. May 17 00:53:10.047284 kernel: audit: type=1130 audit(1747443190.031:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:53:10.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:10.044148 systemd[1]: Mounting sysroot.mount... May 17 00:53:10.066038 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. May 17 00:53:10.065007 systemd[1]: Mounted sysroot.mount. May 17 00:53:10.066856 systemd[1]: Reached target initrd-root-fs.target. May 17 00:53:10.093893 systemd[1]: Mounting sysroot-usr.mount... May 17 00:53:10.099069 systemd[1]: Starting flatcar-metadata-hostname.service... May 17 00:53:10.103978 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 17 00:53:10.104019 systemd[1]: Reached target ignition-diskful.target. May 17 00:53:10.113036 systemd[1]: Mounted sysroot-usr.mount. May 17 00:53:10.170789 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:53:10.176652 systemd[1]: Starting initrd-setup-root.service... May 17 00:53:10.193501 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (888) May 17 00:53:10.194067 initrd-setup-root[893]: cut: /sysroot/etc/passwd: No such file or directory May 17 00:53:10.204335 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:53:10.204372 kernel: BTRFS info (device sda6): using free space tree May 17 00:53:10.204391 kernel: BTRFS info (device sda6): has skinny extents May 17 00:53:10.212838 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
May 17 00:53:10.224414 initrd-setup-root[919]: cut: /sysroot/etc/group: No such file or directory May 17 00:53:10.251551 initrd-setup-root[927]: cut: /sysroot/etc/shadow: No such file or directory May 17 00:53:10.259040 initrd-setup-root[935]: cut: /sysroot/etc/gshadow: No such file or directory May 17 00:53:10.727872 systemd[1]: Finished initrd-setup-root.service. May 17 00:53:10.730000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:10.734266 systemd[1]: Starting ignition-mount.service... May 17 00:53:10.754000 kernel: audit: type=1130 audit(1747443190.730:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:10.750991 systemd[1]: Starting sysroot-boot.service... May 17 00:53:10.753426 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. May 17 00:53:10.753539 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. May 17 00:53:10.775657 ignition[955]: INFO : Ignition 2.14.0 May 17 00:53:10.777608 ignition[955]: INFO : Stage: mount May 17 00:53:10.777608 ignition[955]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:53:10.777608 ignition[955]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:53:10.787422 ignition[955]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:53:10.787422 ignition[955]: INFO : mount: mount passed May 17 00:53:10.787422 ignition[955]: INFO : Ignition finished successfully May 17 00:53:10.786873 systemd[1]: Finished ignition-mount.service. 
May 17 00:53:10.810503 kernel: audit: type=1130 audit(1747443190.791:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:10.791000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:10.817459 systemd[1]: Finished sysroot-boot.service. May 17 00:53:10.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:10.832514 kernel: audit: type=1130 audit(1747443190.821:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:11.518464 coreos-metadata[887]: May 17 00:53:11.518 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 17 00:53:11.535921 coreos-metadata[887]: May 17 00:53:11.535 INFO Fetch successful May 17 00:53:11.568785 coreos-metadata[887]: May 17 00:53:11.568 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 17 00:53:11.583233 coreos-metadata[887]: May 17 00:53:11.583 INFO Fetch successful May 17 00:53:11.597447 coreos-metadata[887]: May 17 00:53:11.597 INFO wrote hostname ci-3510.3.7-n-34d8c498b2 to /sysroot/etc/hostname May 17 00:53:11.603000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:11.599231 systemd[1]: Finished flatcar-metadata-hostname.service. 
May 17 00:53:11.619108 kernel: audit: type=1130 audit(1747443191.603:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:11.604583 systemd[1]: Starting ignition-files.service... May 17 00:53:11.622240 systemd[1]: Mounting sysroot-usr-share-oem.mount... May 17 00:53:11.637500 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (966) May 17 00:53:11.637544 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 17 00:53:11.645423 kernel: BTRFS info (device sda6): using free space tree May 17 00:53:11.645446 kernel: BTRFS info (device sda6): has skinny extents May 17 00:53:11.656352 systemd[1]: Mounted sysroot-usr-share-oem.mount. May 17 00:53:11.669902 ignition[985]: INFO : Ignition 2.14.0 May 17 00:53:11.669902 ignition[985]: INFO : Stage: files May 17 00:53:11.673758 ignition[985]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:53:11.673758 ignition[985]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:53:11.673758 ignition[985]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:53:11.690449 ignition[985]: DEBUG : files: compiled without relabeling support, skipping May 17 00:53:11.694126 ignition[985]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 17 00:53:11.694126 ignition[985]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 17 00:53:11.745620 ignition[985]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 17 00:53:11.751620 ignition[985]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 17 00:53:11.761633 unknown[985]: wrote ssh authorized keys 
file for user: core May 17 00:53:11.764107 ignition[985]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 17 00:53:11.764107 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:53:11.764107 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 17 00:53:12.051120 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 17 00:53:12.150826 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 17 00:53:12.161827 ignition[985]: 
INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:53:12.161827 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1559252256" May 17 00:53:12.230129 ignition[985]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1559252256": device or resource busy May 17 00:53:12.230129 ignition[985]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1559252256", trying btrfs: device or resource busy May 17 00:53:12.230129 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1559252256" May 17 00:53:12.230129 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1559252256" May 17 00:53:12.230129 
ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem1559252256" May 17 00:53:12.230129 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem1559252256" May 17 00:53:12.230129 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" May 17 00:53:12.230129 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:53:12.230129 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition May 17 00:53:12.230129 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem88642854" May 17 00:53:12.230129 ignition[985]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem88642854": device or resource busy May 17 00:53:12.230129 ignition[985]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem88642854", trying btrfs: device or resource busy May 17 00:53:12.230129 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem88642854" May 17 00:53:12.230129 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem88642854" May 17 00:53:12.167557 systemd[1]: mnt-oem1559252256.mount: Deactivated successfully. 
May 17 00:53:12.297349 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem88642854" May 17 00:53:12.297349 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem88642854" May 17 00:53:12.297349 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" May 17 00:53:12.297349 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(12): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:53:12.297349 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 May 17 00:53:12.954074 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(12): GET result: OK May 17 00:53:13.122834 ignition[985]: INFO : files: createFilesystemsFiles: createFiles: op(12): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" May 17 00:53:13.122834 ignition[985]: INFO : files: op(13): [started] processing unit "waagent.service" May 17 00:53:13.122834 ignition[985]: INFO : files: op(13): [finished] processing unit "waagent.service" May 17 00:53:13.134014 ignition[985]: INFO : files: op(14): [started] processing unit "nvidia.service" May 17 00:53:13.134014 ignition[985]: INFO : files: op(14): [finished] processing unit "nvidia.service" May 17 00:53:13.134014 ignition[985]: INFO : files: op(15): [started] processing unit "prepare-helm.service" May 17 00:53:13.134014 ignition[985]: INFO : files: op(15): op(16): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 00:53:13.134014 ignition[985]: INFO : files: op(15): op(16): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 17 
00:53:13.134014 ignition[985]: INFO : files: op(15): [finished] processing unit "prepare-helm.service" May 17 00:53:13.134014 ignition[985]: INFO : files: op(17): [started] setting preset to enabled for "waagent.service" May 17 00:53:13.157626 ignition[985]: INFO : files: op(17): [finished] setting preset to enabled for "waagent.service" May 17 00:53:13.157626 ignition[985]: INFO : files: op(18): [started] setting preset to enabled for "nvidia.service" May 17 00:53:13.157626 ignition[985]: INFO : files: op(18): [finished] setting preset to enabled for "nvidia.service" May 17 00:53:13.157626 ignition[985]: INFO : files: op(19): [started] setting preset to enabled for "prepare-helm.service" May 17 00:53:13.157626 ignition[985]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-helm.service" May 17 00:53:13.173741 ignition[985]: INFO : files: createResultFile: createFiles: op(1a): [started] writing file "/sysroot/etc/.ignition-result.json" May 17 00:53:13.173741 ignition[985]: INFO : files: createResultFile: createFiles: op(1a): [finished] writing file "/sysroot/etc/.ignition-result.json" May 17 00:53:13.173741 ignition[985]: INFO : files: files passed May 17 00:53:13.183642 ignition[985]: INFO : Ignition finished successfully May 17 00:53:13.186767 systemd[1]: Finished ignition-files.service. May 17 00:53:13.186000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.206494 kernel: audit: type=1130 audit(1747443193.186:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.208151 systemd[1]: Starting initrd-setup-root-after-ignition.service... 
May 17 00:53:13.210625 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). May 17 00:53:13.218104 systemd[1]: Starting ignition-quench.service... May 17 00:53:13.223000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.223000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.221673 systemd[1]: ignition-quench.service: Deactivated successfully. May 17 00:53:13.221764 systemd[1]: Finished ignition-quench.service. May 17 00:53:13.359329 initrd-setup-root-after-ignition[1011]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 17 00:53:13.359919 systemd[1]: Finished initrd-setup-root-after-ignition.service. May 17 00:53:13.368000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.368808 systemd[1]: Reached target ignition-complete.target. May 17 00:53:13.373687 systemd[1]: Starting initrd-parse-etc.service... May 17 00:53:13.387408 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 17 00:53:13.387550 systemd[1]: Finished initrd-parse-etc.service. May 17 00:53:13.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:53:13.392027 systemd[1]: Reached target initrd-fs.target. May 17 00:53:13.395675 systemd[1]: Reached target initrd.target. May 17 00:53:13.397446 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. May 17 00:53:13.398213 systemd[1]: Starting dracut-pre-pivot.service... May 17 00:53:13.412839 systemd[1]: Finished dracut-pre-pivot.service. May 17 00:53:13.416000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.417567 systemd[1]: Starting initrd-cleanup.service... May 17 00:53:13.427241 systemd[1]: Stopped target nss-lookup.target. May 17 00:53:13.431336 systemd[1]: Stopped target remote-cryptsetup.target. May 17 00:53:13.433579 systemd[1]: Stopped target timers.target. May 17 00:53:13.437418 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 17 00:53:13.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.437563 systemd[1]: Stopped dracut-pre-pivot.service. May 17 00:53:13.440839 systemd[1]: Stopped target initrd.target. May 17 00:53:13.444809 systemd[1]: Stopped target basic.target. May 17 00:53:13.450506 systemd[1]: Stopped target ignition-complete.target. May 17 00:53:13.454342 systemd[1]: Stopped target ignition-diskful.target. May 17 00:53:13.458204 systemd[1]: Stopped target initrd-root-device.target. May 17 00:53:13.462526 systemd[1]: Stopped target remote-fs.target. May 17 00:53:13.466502 systemd[1]: Stopped target remote-fs-pre.target. May 17 00:53:13.470605 systemd[1]: Stopped target sysinit.target. May 17 00:53:13.474207 systemd[1]: Stopped target local-fs.target. May 17 00:53:13.478236 systemd[1]: Stopped target local-fs-pre.target. 
May 17 00:53:13.481970 systemd[1]: Stopped target swap.target. May 17 00:53:13.489000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.485527 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 17 00:53:13.485671 systemd[1]: Stopped dracut-pre-mount.service. May 17 00:53:13.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.489490 systemd[1]: Stopped target cryptsetup.target. May 17 00:53:13.493036 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 17 00:53:13.506000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.493179 systemd[1]: Stopped dracut-initqueue.service. May 17 00:53:13.511000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.498728 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 17 00:53:13.522000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.498857 systemd[1]: Stopped initrd-setup-root-after-ignition.service. 
May 17 00:53:13.502865 systemd[1]: ignition-files.service: Deactivated successfully. May 17 00:53:13.503025 systemd[1]: Stopped ignition-files.service. May 17 00:53:13.534847 ignition[1024]: INFO : Ignition 2.14.0 May 17 00:53:13.534847 ignition[1024]: INFO : Stage: umount May 17 00:53:13.534847 ignition[1024]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" May 17 00:53:13.534847 ignition[1024]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 May 17 00:53:13.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.546000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.507161 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 17 00:53:13.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.558611 ignition[1024]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 17 00:53:13.558611 ignition[1024]: INFO : umount: umount passed May 17 00:53:13.558611 ignition[1024]: INFO : Ignition finished successfully May 17 00:53:13.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:53:13.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.507285 systemd[1]: Stopped flatcar-metadata-hostname.service. May 17 00:53:13.512834 systemd[1]: Stopping ignition-mount.service... May 17 00:53:13.515758 systemd[1]: Stopping iscsiuio.service... May 17 00:53:13.518425 systemd[1]: Stopping sysroot-boot.service... May 17 00:53:13.520112 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 17 00:53:13.520290 systemd[1]: Stopped systemd-udev-trigger.service. May 17 00:53:13.522777 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 17 00:53:13.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.522929 systemd[1]: Stopped dracut-pre-trigger.service. May 17 00:53:13.539072 systemd[1]: iscsiuio.service: Deactivated successfully. May 17 00:53:13.539191 systemd[1]: Stopped iscsiuio.service. May 17 00:53:13.551981 systemd[1]: ignition-mount.service: Deactivated successfully. May 17 00:53:13.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.552065 systemd[1]: Stopped ignition-mount.service. May 17 00:53:13.554284 systemd[1]: ignition-disks.service: Deactivated successfully. May 17 00:53:13.554378 systemd[1]: Stopped ignition-disks.service. 
May 17 00:53:13.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.624000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.558603 systemd[1]: ignition-kargs.service: Deactivated successfully. May 17 00:53:13.558653 systemd[1]: Stopped ignition-kargs.service. May 17 00:53:13.563349 systemd[1]: ignition-fetch.service: Deactivated successfully. May 17 00:53:13.563389 systemd[1]: Stopped ignition-fetch.service. May 17 00:53:13.567059 systemd[1]: Stopped target network.target. May 17 00:53:13.640000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.570916 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 17 00:53:13.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.647000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.570970 systemd[1]: Stopped ignition-fetch-offline.service. May 17 00:53:13.572955 systemd[1]: Stopped target paths.target. May 17 00:53:13.574816 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 17 00:53:13.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:53:13.579528 systemd[1]: Stopped systemd-ask-password-console.path. May 17 00:53:13.582467 systemd[1]: Stopped target slices.target. May 17 00:53:13.584204 systemd[1]: Stopped target sockets.target. May 17 00:53:13.588127 systemd[1]: iscsid.socket: Deactivated successfully. May 17 00:53:13.667000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.588163 systemd[1]: Closed iscsid.socket. May 17 00:53:13.592206 systemd[1]: iscsiuio.socket: Deactivated successfully. May 17 00:53:13.671000 audit: BPF prog-id=6 op=UNLOAD May 17 00:53:13.592246 systemd[1]: Closed iscsiuio.socket. May 17 00:53:13.595637 systemd[1]: ignition-setup.service: Deactivated successfully. May 17 00:53:13.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.595688 systemd[1]: Stopped ignition-setup.service. May 17 00:53:13.600083 systemd[1]: Stopping systemd-networkd.service... May 17 00:53:13.603433 systemd[1]: Stopping systemd-resolved.service... May 17 00:53:13.606519 systemd-networkd[824]: eth0: DHCPv6 lease lost May 17 00:53:13.681000 audit: BPF prog-id=9 op=UNLOAD May 17 00:53:13.608805 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 17 00:53:13.609288 systemd[1]: systemd-networkd.service: Deactivated successfully. May 17 00:53:13.609359 systemd[1]: Stopped systemd-networkd.service. May 17 00:53:13.617458 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 17 00:53:13.617563 systemd[1]: Finished initrd-cleanup.service. May 17 00:53:13.628526 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 17 00:53:13.628565 systemd[1]: Closed systemd-networkd.socket. 
May 17 00:53:13.631810 systemd[1]: Stopping network-cleanup.service... May 17 00:53:13.636841 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 17 00:53:13.636900 systemd[1]: Stopped parse-ip-for-networkd.service. May 17 00:53:13.640532 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 17 00:53:13.640584 systemd[1]: Stopped systemd-sysctl.service. May 17 00:53:13.642579 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 17 00:53:13.642619 systemd[1]: Stopped systemd-modules-load.service. May 17 00:53:13.647678 systemd[1]: Stopping systemd-udevd.service... May 17 00:53:13.654423 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. May 17 00:53:13.654936 systemd[1]: systemd-resolved.service: Deactivated successfully. May 17 00:53:13.655027 systemd[1]: Stopped systemd-resolved.service. May 17 00:53:13.664154 systemd[1]: systemd-udevd.service: Deactivated successfully. May 17 00:53:13.664271 systemd[1]: Stopped systemd-udevd.service. May 17 00:53:13.670407 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 17 00:53:13.670448 systemd[1]: Closed systemd-udevd-control.socket. May 17 00:53:13.673608 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 17 00:53:13.673643 systemd[1]: Closed systemd-udevd-kernel.socket. May 17 00:53:13.677338 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 17 00:53:13.677387 systemd[1]: Stopped dracut-pre-udev.service. May 17 00:53:13.681582 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 17 00:53:13.681632 systemd[1]: Stopped dracut-cmdline.service. May 17 00:53:13.765214 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. 
May 17 00:53:13.771152 kernel: hv_netvsc 7c1e521e-4cae-7c1e-521e-4cae7c1e521e eth0: Data path switched from VF: enP420s1 May 17 00:53:13.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.765365 systemd[1]: Stopped dracut-cmdline-ask.service. May 17 00:53:13.773000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.774484 systemd[1]: Starting initrd-udevadm-cleanup-db.service... May 17 00:53:13.779164 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 17 00:53:13.781560 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. May 17 00:53:13.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.784061 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 17 00:53:13.788000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.792000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.784106 systemd[1]: Stopped kmod-static-nodes.service. May 17 00:53:13.788157 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 17 00:53:13.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' May 17 00:53:13.798000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:13.788209 systemd[1]: Stopped systemd-vconsole-setup.service. May 17 00:53:13.793664 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 17 00:53:13.794185 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 17 00:53:13.794276 systemd[1]: Finished initrd-udevadm-cleanup-db.service. May 17 00:53:13.798771 systemd[1]: network-cleanup.service: Deactivated successfully. May 17 00:53:13.798855 systemd[1]: Stopped network-cleanup.service. May 17 00:53:14.099916 systemd[1]: sysroot-boot.service: Deactivated successfully. May 17 00:53:14.100033 systemd[1]: Stopped sysroot-boot.service. May 17 00:53:14.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:14.106325 systemd[1]: Reached target initrd-switch-root.target. May 17 00:53:14.110575 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 17 00:53:14.110639 systemd[1]: Stopped initrd-setup-root.service. May 17 00:53:14.116000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:14.117247 systemd[1]: Starting initrd-switch-root.service... May 17 00:53:14.128026 systemd[1]: Switching root. May 17 00:53:14.154936 iscsid[835]: iscsid shutting down. 
May 17 00:53:14.157186 systemd-journald[183]: Received SIGTERM from PID 1 (n/a). May 17 00:53:14.157248 systemd-journald[183]: Journal stopped May 17 00:53:28.838130 kernel: SELinux: Class mctp_socket not defined in policy. May 17 00:53:28.838162 kernel: SELinux: Class anon_inode not defined in policy. May 17 00:53:28.838173 kernel: SELinux: the above unknown classes and permissions will be allowed May 17 00:53:28.838184 kernel: SELinux: policy capability network_peer_controls=1 May 17 00:53:28.838192 kernel: SELinux: policy capability open_perms=1 May 17 00:53:28.838203 kernel: SELinux: policy capability extended_socket_class=1 May 17 00:53:28.838212 kernel: SELinux: policy capability always_check_network=0 May 17 00:53:28.838225 kernel: SELinux: policy capability cgroup_seclabel=1 May 17 00:53:28.838236 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 17 00:53:28.838244 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 17 00:53:28.838253 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 17 00:53:28.838263 kernel: kauditd_printk_skb: 42 callbacks suppressed May 17 00:53:28.838275 kernel: audit: type=1403 audit(1747443196.560:81): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:53:28.838285 systemd[1]: Successfully loaded SELinux policy in 271.928ms. May 17 00:53:28.838301 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 19.614ms. May 17 00:53:28.838314 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) May 17 00:53:28.838324 systemd[1]: Detected virtualization microsoft. May 17 00:53:28.838335 systemd[1]: Detected architecture x86-64. May 17 00:53:28.838347 systemd[1]: Detected first boot. 
May 17 00:53:28.838359 systemd[1]: Hostname set to . May 17 00:53:28.838371 systemd[1]: Initializing machine ID from random generator. May 17 00:53:28.838383 kernel: audit: type=1400 audit(1747443197.127:82): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:53:28.838393 kernel: audit: type=1400 audit(1747443197.170:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:53:28.838405 kernel: audit: type=1400 audit(1747443197.170:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:53:28.838416 kernel: audit: type=1334 audit(1747443197.184:85): prog-id=10 op=LOAD May 17 00:53:28.838427 kernel: audit: type=1334 audit(1747443197.184:86): prog-id=10 op=UNLOAD May 17 00:53:28.838439 kernel: audit: type=1334 audit(1747443197.196:87): prog-id=11 op=LOAD May 17 00:53:28.838449 kernel: audit: type=1334 audit(1747443197.196:88): prog-id=11 op=UNLOAD May 17 00:53:28.838460 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
May 17 00:53:28.838485 kernel: audit: type=1400 audit(1747443198.521:89): avc: denied { associate } for pid=1058 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:53:28.838497 kernel: audit: type=1300 audit(1747443198.521:89): arch=c000003e syscall=188 success=yes exit=0 a0=c0001078c2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1041 pid=1058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:28.838509 systemd[1]: Populated /etc with preset unit settings. May 17 00:53:28.838523 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:53:28.838533 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:53:28.838546 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
May 17 00:53:28.838559 kernel: kauditd_printk_skb: 7 callbacks suppressed May 17 00:53:28.838568 kernel: audit: type=1334 audit(1747443208.348:91): prog-id=12 op=LOAD May 17 00:53:28.838579 kernel: audit: type=1334 audit(1747443208.348:92): prog-id=3 op=UNLOAD May 17 00:53:28.838588 kernel: audit: type=1334 audit(1747443208.352:93): prog-id=13 op=LOAD May 17 00:53:28.838601 kernel: audit: type=1334 audit(1747443208.356:94): prog-id=14 op=LOAD May 17 00:53:28.838614 kernel: audit: type=1334 audit(1747443208.356:95): prog-id=4 op=UNLOAD May 17 00:53:28.838628 kernel: audit: type=1334 audit(1747443208.356:96): prog-id=5 op=UNLOAD May 17 00:53:28.838638 kernel: audit: type=1334 audit(1747443208.361:97): prog-id=15 op=LOAD May 17 00:53:28.838649 kernel: audit: type=1334 audit(1747443208.361:98): prog-id=12 op=UNLOAD May 17 00:53:28.838659 kernel: audit: type=1334 audit(1747443208.366:99): prog-id=16 op=LOAD May 17 00:53:28.838670 kernel: audit: type=1334 audit(1747443208.370:100): prog-id=17 op=LOAD May 17 00:53:28.838680 systemd[1]: iscsid.service: Deactivated successfully. May 17 00:53:28.838692 systemd[1]: Stopped iscsid.service. May 17 00:53:28.838707 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 17 00:53:28.838718 systemd[1]: Stopped initrd-switch-root.service. May 17 00:53:28.838730 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 17 00:53:28.838742 systemd[1]: Created slice system-addon\x2dconfig.slice. May 17 00:53:28.838752 systemd[1]: Created slice system-addon\x2drun.slice. May 17 00:53:28.838764 systemd[1]: Created slice system-getty.slice. May 17 00:53:28.838776 systemd[1]: Created slice system-modprobe.slice. May 17 00:53:28.838786 systemd[1]: Created slice system-serial\x2dgetty.slice. May 17 00:53:28.838800 systemd[1]: Created slice system-system\x2dcloudinit.slice. May 17 00:53:28.838813 systemd[1]: Created slice system-systemd\x2dfsck.slice. 
May 17 00:53:28.838823 systemd[1]: Created slice user.slice. May 17 00:53:28.838835 systemd[1]: Started systemd-ask-password-console.path. May 17 00:53:28.838847 systemd[1]: Started systemd-ask-password-wall.path. May 17 00:53:28.838857 systemd[1]: Set up automount boot.automount. May 17 00:53:28.838870 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. May 17 00:53:28.838882 systemd[1]: Stopped target initrd-switch-root.target. May 17 00:53:28.838892 systemd[1]: Stopped target initrd-fs.target. May 17 00:53:28.838906 systemd[1]: Stopped target initrd-root-fs.target. May 17 00:53:28.838919 systemd[1]: Reached target integritysetup.target. May 17 00:53:28.838928 systemd[1]: Reached target remote-cryptsetup.target. May 17 00:53:28.838941 systemd[1]: Reached target remote-fs.target. May 17 00:53:28.838953 systemd[1]: Reached target slices.target. May 17 00:53:28.838963 systemd[1]: Reached target swap.target. May 17 00:53:28.838975 systemd[1]: Reached target torcx.target. May 17 00:53:28.838988 systemd[1]: Reached target veritysetup.target. May 17 00:53:28.839000 systemd[1]: Listening on systemd-coredump.socket. May 17 00:53:28.839013 systemd[1]: Listening on systemd-initctl.socket. May 17 00:53:28.839025 systemd[1]: Listening on systemd-networkd.socket. May 17 00:53:28.839036 systemd[1]: Listening on systemd-udevd-control.socket. May 17 00:53:28.839050 systemd[1]: Listening on systemd-udevd-kernel.socket. May 17 00:53:28.839062 systemd[1]: Listening on systemd-userdbd.socket. May 17 00:53:28.839074 systemd[1]: Mounting dev-hugepages.mount... May 17 00:53:28.839085 systemd[1]: Mounting dev-mqueue.mount... May 17 00:53:28.839098 systemd[1]: Mounting media.mount... May 17 00:53:28.839108 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). May 17 00:53:28.839120 systemd[1]: Mounting sys-kernel-debug.mount... May 17 00:53:28.839133 systemd[1]: Mounting sys-kernel-tracing.mount... 
May 17 00:53:28.839143 systemd[1]: Mounting tmp.mount... May 17 00:53:28.839157 systemd[1]: Starting flatcar-tmpfiles.service... May 17 00:53:28.839169 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. May 17 00:53:28.839180 systemd[1]: Starting kmod-static-nodes.service... May 17 00:53:28.839192 systemd[1]: Starting modprobe@configfs.service... May 17 00:53:28.839205 systemd[1]: Starting modprobe@dm_mod.service... May 17 00:53:28.839215 systemd[1]: Starting modprobe@drm.service... May 17 00:53:28.839228 systemd[1]: Starting modprobe@efi_pstore.service... May 17 00:53:28.839240 systemd[1]: Starting modprobe@fuse.service... May 17 00:53:28.839251 systemd[1]: Starting modprobe@loop.service... May 17 00:53:28.839265 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 17 00:53:28.839278 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 17 00:53:28.839289 systemd[1]: Stopped systemd-fsck-root.service. May 17 00:53:28.839300 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 17 00:53:28.839313 systemd[1]: Stopped systemd-fsck-usr.service. May 17 00:53:28.839323 systemd[1]: Stopped systemd-journald.service. May 17 00:53:28.839335 systemd[1]: Starting systemd-journald.service... May 17 00:53:28.839348 systemd[1]: Starting systemd-modules-load.service... May 17 00:53:28.839358 systemd[1]: Starting systemd-network-generator.service... May 17 00:53:28.839372 systemd[1]: Starting systemd-remount-fs.service... May 17 00:53:28.839385 systemd[1]: Starting systemd-udev-trigger.service... May 17 00:53:28.839395 systemd[1]: verity-setup.service: Deactivated successfully. May 17 00:53:28.839407 systemd[1]: Stopped verity-setup.service. May 17 00:53:28.839420 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
May 17 00:53:28.839430 kernel: fuse: init (API version 7.34) May 17 00:53:28.839441 kernel: loop: module loaded May 17 00:53:28.839453 systemd[1]: Mounted dev-hugepages.mount. May 17 00:53:28.839463 systemd[1]: Mounted dev-mqueue.mount. May 17 00:53:28.839485 systemd[1]: Mounted media.mount. May 17 00:53:28.839496 systemd[1]: Mounted sys-kernel-debug.mount. May 17 00:53:28.839508 systemd[1]: Mounted sys-kernel-tracing.mount. May 17 00:53:28.839520 systemd[1]: Mounted tmp.mount. May 17 00:53:28.839530 systemd[1]: Finished flatcar-tmpfiles.service. May 17 00:53:28.839548 systemd[1]: Finished kmod-static-nodes.service. May 17 00:53:28.839563 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 17 00:53:28.839579 systemd-journald[1160]: Journal started May 17 00:53:28.839632 systemd-journald[1160]: Runtime Journal (/run/log/journal/c3aeb33950ff4126a24ed24e02b863b8) is 8.0M, max 159.0M, 151.0M free. May 17 00:53:16.560000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 May 17 00:53:17.127000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 May 17 00:53:17.170000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:53:17.170000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 May 17 00:53:17.184000 audit: BPF prog-id=10 op=LOAD May 17 00:53:17.184000 audit: BPF prog-id=10 op=UNLOAD May 17 00:53:17.196000 audit: BPF prog-id=11 op=LOAD May 17 00:53:17.196000 audit: BPF prog-id=11 op=UNLOAD May 17 00:53:18.521000 audit[1058]: AVC avc: denied { associate } for pid=1058 comm="torcx-generator" name="docker" 
dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" May 17 00:53:18.521000 audit[1058]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078c2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1041 pid=1058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:18.521000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:53:18.535000 audit[1058]: AVC avc: denied { associate } for pid=1058 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 May 17 00:53:18.535000 audit[1058]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c000107999 a2=1ed a3=0 items=2 ppid=1041 pid=1058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:18.535000 audit: CWD cwd="/" May 17 00:53:18.535000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:53:18.535000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 May 17 00:53:18.535000 audit: 
PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 May 17 00:53:28.348000 audit: BPF prog-id=12 op=LOAD May 17 00:53:28.348000 audit: BPF prog-id=3 op=UNLOAD May 17 00:53:28.352000 audit: BPF prog-id=13 op=LOAD May 17 00:53:28.356000 audit: BPF prog-id=14 op=LOAD May 17 00:53:28.356000 audit: BPF prog-id=4 op=UNLOAD May 17 00:53:28.356000 audit: BPF prog-id=5 op=UNLOAD May 17 00:53:28.361000 audit: BPF prog-id=15 op=LOAD May 17 00:53:28.361000 audit: BPF prog-id=12 op=UNLOAD May 17 00:53:28.366000 audit: BPF prog-id=16 op=LOAD May 17 00:53:28.370000 audit: BPF prog-id=17 op=LOAD May 17 00:53:28.370000 audit: BPF prog-id=13 op=UNLOAD May 17 00:53:28.370000 audit: BPF prog-id=14 op=UNLOAD May 17 00:53:28.375000 audit: BPF prog-id=18 op=LOAD May 17 00:53:28.375000 audit: BPF prog-id=15 op=UNLOAD May 17 00:53:28.393000 audit: BPF prog-id=19 op=LOAD May 17 00:53:28.398000 audit: BPF prog-id=20 op=LOAD May 17 00:53:28.398000 audit: BPF prog-id=16 op=UNLOAD May 17 00:53:28.398000 audit: BPF prog-id=17 op=UNLOAD May 17 00:53:28.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.413000 audit: BPF prog-id=18 op=UNLOAD May 17 00:53:28.419000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:53:28.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.725000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.725000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.726000 audit: BPF prog-id=21 op=LOAD May 17 00:53:28.726000 audit: BPF prog-id=22 op=LOAD May 17 00:53:28.726000 audit: BPF prog-id=23 op=LOAD May 17 00:53:28.726000 audit: BPF prog-id=19 op=UNLOAD May 17 00:53:28.726000 audit: BPF prog-id=20 op=UNLOAD May 17 00:53:28.782000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' May 17 00:53:28.834000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 May 17 00:53:28.834000 audit[1160]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fff4f9391e0 a2=4000 a3=7fff4f93927c items=0 ppid=1 pid=1160 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) May 17 00:53:28.834000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" May 17 00:53:28.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.347180 systemd[1]: Queued start job for default target multi-user.target. May 17 00:53:18.468053 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:53:28.347191 systemd[1]: Unnecessary job was removed for dev-sda6.device. May 17 00:53:18.478825 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:53:28.399330 systemd[1]: systemd-journald.service: Deactivated successfully. 
May 17 00:53:18.478851 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:53:18.478894 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" May 17 00:53:18.478908 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=debug msg="skipped missing lower profile" missing profile=oem May 17 00:53:18.478956 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" May 17 00:53:18.478975 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= May 17 00:53:18.479221 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack May 17 00:53:18.479271 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json May 17 00:53:18.479286 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json May 17 00:53:18.510648 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 May 17 00:53:18.510727 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=debug msg="new archive/reference added to cache" 
format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl May 17 00:53:18.510755 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.7: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.7 May 17 00:53:18.510781 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store May 17 00:53:18.510807 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.7: no such file or directory" path=/var/lib/torcx/store/3510.3.7 May 17 00:53:18.510823 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store May 17 00:53:27.244105 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:53:27.244359 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:53:27.244513 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker 
reference=com.coreos.cl May 17 00:53:27.244727 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl May 17 00:53:27.244786 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= May 17 00:53:27.244839 /usr/lib/systemd/system-generators/torcx-generator[1058]: time="2025-05-17T00:53:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx May 17 00:53:28.846747 systemd[1]: Finished modprobe@configfs.service. May 17 00:53:28.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.852237 systemd[1]: Started systemd-journald.service. May 17 00:53:28.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.852747 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
May 17 00:53:28.852902 systemd[1]: Finished modprobe@dm_mod.service. May 17 00:53:28.854000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.855281 systemd[1]: modprobe@drm.service: Deactivated successfully. May 17 00:53:28.855445 systemd[1]: Finished modprobe@drm.service. May 17 00:53:28.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.856000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.857679 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 17 00:53:28.857987 systemd[1]: Finished modprobe@efi_pstore.service. May 17 00:53:28.859000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.859000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.860185 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 17 00:53:28.860550 systemd[1]: Finished modprobe@fuse.service. 
May 17 00:53:28.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.862621 systemd[1]: modprobe@loop.service: Deactivated successfully. May 17 00:53:28.862944 systemd[1]: Finished modprobe@loop.service. May 17 00:53:28.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.865094 systemd[1]: Finished systemd-modules-load.service. May 17 00:53:28.866000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.867341 systemd[1]: Finished systemd-network-generator.service. May 17 00:53:28.869000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.869747 systemd[1]: Finished systemd-remount-fs.service. 
May 17 00:53:28.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.872020 systemd[1]: Reached target network-pre.target. May 17 00:53:28.875001 systemd[1]: Mounting sys-fs-fuse-connections.mount... May 17 00:53:28.879576 systemd[1]: Mounting sys-kernel-config.mount... May 17 00:53:28.881657 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 17 00:53:28.894349 systemd[1]: Starting systemd-hwdb-update.service... May 17 00:53:28.897839 systemd[1]: Starting systemd-journal-flush.service... May 17 00:53:28.900307 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 17 00:53:28.901787 systemd[1]: Starting systemd-random-seed.service... May 17 00:53:28.903961 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. May 17 00:53:28.905462 systemd[1]: Starting systemd-sysctl.service... May 17 00:53:28.909832 systemd[1]: Starting systemd-sysusers.service... May 17 00:53:28.917034 systemd[1]: Mounted sys-fs-fuse-connections.mount. May 17 00:53:28.919716 systemd[1]: Mounted sys-kernel-config.mount. May 17 00:53:28.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.930594 systemd[1]: Finished systemd-random-seed.service. May 17 00:53:28.933011 systemd[1]: Reached target first-boot-complete.target. May 17 00:53:28.937241 systemd-journald[1160]: Time spent on flushing to /var/log/journal/c3aeb33950ff4126a24ed24e02b863b8 is 19.312ms for 1160 entries. 
May 17 00:53:28.937241 systemd-journald[1160]: System Journal (/var/log/journal/c3aeb33950ff4126a24ed24e02b863b8) is 8.0M, max 2.6G, 2.6G free. May 17 00:53:29.109658 systemd-journald[1160]: Received client request to flush runtime journal. May 17 00:53:28.950000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:28.949118 systemd[1]: Finished systemd-sysctl.service. May 17 00:53:29.110251 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. May 17 00:53:28.959224 systemd[1]: Finished systemd-udev-trigger.service. May 17 00:53:28.962221 systemd[1]: Starting systemd-udev-settle.service... May 17 00:53:29.111004 systemd[1]: Finished systemd-journal-flush.service. May 17 00:53:29.110000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:29.647381 systemd[1]: Finished systemd-sysusers.service. May 17 00:53:29.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:29.650971 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... 
May 17 00:53:29.895000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:29.893617 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. May 17 00:53:30.120850 systemd[1]: Finished systemd-hwdb-update.service. May 17 00:53:30.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:30.123000 audit: BPF prog-id=24 op=LOAD May 17 00:53:30.123000 audit: BPF prog-id=25 op=LOAD May 17 00:53:30.123000 audit: BPF prog-id=7 op=UNLOAD May 17 00:53:30.123000 audit: BPF prog-id=8 op=UNLOAD May 17 00:53:30.124665 systemd[1]: Starting systemd-udevd.service... May 17 00:53:30.141667 systemd-udevd[1187]: Using default interface naming scheme 'v252'. May 17 00:53:30.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' May 17 00:53:30.309000 audit: BPF prog-id=26 op=LOAD May 17 00:53:30.304243 systemd[1]: Started systemd-udevd.service. May 17 00:53:30.311642 systemd[1]: Starting systemd-networkd.service... May 17 00:53:30.354663 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. 
May 17 00:53:30.413494 kernel: mousedev: PS/2 mouse device common for all mice
May 17 00:53:30.416000 audit[1197]: AVC avc: denied { confidentiality } for pid=1197 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
May 17 00:53:30.427512 kernel: hv_vmbus: registering driver hv_balloon
May 17 00:53:30.437897 kernel: hv_utils: Registering HyperV Utility Driver
May 17 00:53:30.437986 kernel: hv_vmbus: registering driver hv_utils
May 17 00:53:30.442265 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
May 17 00:53:30.453509 kernel: hv_vmbus: registering driver hyperv_fb
May 17 00:53:30.416000 audit[1197]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=559a7ac7ed30 a1=f884 a2=7f6633272bc5 a3=5 items=12 ppid=1187 pid=1197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:53:30.416000 audit: CWD cwd="/"
May 17 00:53:30.416000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:53:30.416000 audit: PATH item=1 name=(null) inode=15112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:53:30.416000 audit: PATH item=2 name=(null) inode=15112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:53:30.416000 audit: PATH item=3 name=(null) inode=15113 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:53:30.416000 audit: PATH item=4 name=(null) inode=15112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:53:30.416000 audit: PATH item=5 name=(null) inode=15114 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:53:30.416000 audit: PATH item=6 name=(null) inode=15112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:53:30.416000 audit: PATH item=7 name=(null) inode=15115 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:53:30.416000 audit: PATH item=8 name=(null) inode=15112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:53:30.416000 audit: PATH item=9 name=(null) inode=15116 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:53:30.416000 audit: PATH item=10 name=(null) inode=15112 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:53:30.416000 audit: PATH item=11 name=(null) inode=15117 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
May 17 00:53:30.416000 audit: PROCTITLE proctitle="(udev-worker)"
May 17 00:53:30.474759 kernel: hyperv_fb: Synthvid Version major 3, minor 5
May 17 00:53:30.474823 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
May 17 00:53:30.474886 kernel: hv_utils: Shutdown IC version 3.2
May 17 00:53:31.101661 kernel: hv_utils: Heartbeat IC version 3.0
May 17 00:53:31.101740 kernel: hv_utils: TimeSync IC version 4.0
May 17 00:53:31.101779 kernel: Console: switching to colour dummy device 80x25
May 17 00:53:31.107841 kernel: Console: switching to colour frame buffer device 128x48
May 17 00:53:31.234000 audit: BPF prog-id=27 op=LOAD
May 17 00:53:31.234000 audit: BPF prog-id=28 op=LOAD
May 17 00:53:31.234000 audit: BPF prog-id=29 op=LOAD
May 17 00:53:31.237100 systemd[1]: Starting systemd-userdbd.service...
May 17 00:53:31.291526 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
May 17 00:53:31.347151 systemd[1]: Started systemd-userdbd.service.
May 17 00:53:31.348000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:31.372385 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
May 17 00:53:31.460806 systemd[1]: Finished systemd-udev-settle.service.
May 17 00:53:31.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:31.464140 systemd[1]: Starting lvm2-activation-early.service...
May 17 00:53:31.577304 systemd-networkd[1196]: lo: Link UP
May 17 00:53:31.577315 systemd-networkd[1196]: lo: Gained carrier
May 17 00:53:31.577994 systemd-networkd[1196]: Enumeration completed
May 17 00:53:31.578115 systemd[1]: Started systemd-networkd.service.
May 17 00:53:31.578000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:31.581553 systemd[1]: Starting systemd-networkd-wait-online.service...
May 17 00:53:31.603464 systemd-networkd[1196]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:53:31.659391 kernel: mlx5_core 01a4:00:02.0 enP420s1: Link up
May 17 00:53:31.679390 kernel: hv_netvsc 7c1e521e-4cae-7c1e-521e-4cae7c1e521e eth0: Data path switched to VF: enP420s1
May 17 00:53:31.680008 systemd-networkd[1196]: enP420s1: Link UP
May 17 00:53:31.680325 systemd-networkd[1196]: eth0: Link UP
May 17 00:53:31.680601 systemd-networkd[1196]: eth0: Gained carrier
May 17 00:53:31.685343 systemd-networkd[1196]: enP420s1: Gained carrier
May 17 00:53:31.718597 systemd-networkd[1196]: eth0: DHCPv4 address 10.200.4.13/24, gateway 10.200.4.1 acquired from 168.63.129.16
May 17 00:53:31.776266 lvm[1263]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:53:31.806707 systemd[1]: Finished lvm2-activation-early.service.
May 17 00:53:31.808000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:31.810378 systemd[1]: Reached target cryptsetup.target.
May 17 00:53:31.814358 systemd[1]: Starting lvm2-activation.service...
May 17 00:53:31.819618 lvm[1266]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
May 17 00:53:31.842208 systemd[1]: Finished lvm2-activation.service.
May 17 00:53:31.843000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:31.844776 systemd[1]: Reached target local-fs-pre.target.
May 17 00:53:31.847071 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
May 17 00:53:31.847099 systemd[1]: Reached target local-fs.target.
May 17 00:53:31.849356 systemd[1]: Reached target machines.target.
May 17 00:53:31.852407 systemd[1]: Starting ldconfig.service...
May 17 00:53:32.146670 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:53:32.146893 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:53:32.148580 systemd[1]: Starting systemd-boot-update.service...
May 17 00:53:32.152430 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
May 17 00:53:32.156441 systemd[1]: Starting systemd-machine-id-commit.service...
May 17 00:53:32.159823 systemd[1]: Starting systemd-sysext.service...
May 17 00:53:32.224538 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
May 17 00:53:32.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:32.242493 systemd[1]: Unmounting usr-share-oem.mount...
May 17 00:53:32.266583 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1268 (bootctl)
May 17 00:53:32.268977 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
May 17 00:53:32.316878 systemd[1]: usr-share-oem.mount: Deactivated successfully.
May 17 00:53:32.317094 systemd[1]: Unmounted usr-share-oem.mount.
May 17 00:53:32.426388 kernel: loop0: detected capacity change from 0 to 221472
May 17 00:53:32.784399 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
May 17 00:53:32.808402 kernel: loop1: detected capacity change from 0 to 221472
May 17 00:53:32.820411 (sd-sysext)[1280]: Using extensions 'kubernetes'.
May 17 00:53:32.823064 (sd-sysext)[1280]: Merged extensions into '/usr'.
May 17 00:53:32.836314 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
May 17 00:53:32.837060 systemd[1]: Finished systemd-machine-id-commit.service.
May 17 00:53:32.838000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:32.844198 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:53:32.845806 systemd[1]: Mounting usr-share-oem.mount...
May 17 00:53:32.848273 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:53:32.850548 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:53:32.853734 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:53:32.857584 systemd[1]: Starting modprobe@loop.service...
May 17 00:53:32.859505 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:53:32.859684 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:53:32.859824 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:53:32.862196 systemd[1]: Mounted usr-share-oem.mount.
May 17 00:53:32.864516 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:53:32.864655 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:53:32.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:32.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:32.867376 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:53:32.867527 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:53:32.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:32.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:32.870082 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:53:32.870218 systemd[1]: Finished modprobe@loop.service.
May 17 00:53:32.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:32.870000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:32.872753 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:53:32.872891 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:53:32.873968 systemd[1]: Finished systemd-sysext.service.
May 17 00:53:32.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:32.877020 systemd[1]: Starting ensure-sysext.service...
May 17 00:53:32.880122 systemd[1]: Starting systemd-tmpfiles-setup.service...
May 17 00:53:32.887910 systemd[1]: Reloading.
May 17 00:53:32.953256 /usr/lib/systemd/system-generators/torcx-generator[1309]: time="2025-05-17T00:53:32Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]"
May 17 00:53:32.953291 /usr/lib/systemd/system-generators/torcx-generator[1309]: time="2025-05-17T00:53:32Z" level=info msg="torcx already run"
May 17 00:53:32.977497 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
May 17 00:53:32.991000 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
May 17 00:53:33.003543 systemd-tmpfiles[1287]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
May 17 00:53:33.039959 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
May 17 00:53:33.039979 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
May 17 00:53:33.056496 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
May 17 00:53:33.121000 audit: BPF prog-id=30 op=LOAD
May 17 00:53:33.121000 audit: BPF prog-id=31 op=LOAD
May 17 00:53:33.121000 audit: BPF prog-id=24 op=UNLOAD
May 17 00:53:33.121000 audit: BPF prog-id=25 op=UNLOAD
May 17 00:53:33.124000 audit: BPF prog-id=32 op=LOAD
May 17 00:53:33.124000 audit: BPF prog-id=26 op=UNLOAD
May 17 00:53:33.125000 audit: BPF prog-id=33 op=LOAD
May 17 00:53:33.125000 audit: BPF prog-id=27 op=UNLOAD
May 17 00:53:33.125000 audit: BPF prog-id=34 op=LOAD
May 17 00:53:33.125000 audit: BPF prog-id=35 op=LOAD
May 17 00:53:33.125000 audit: BPF prog-id=28 op=UNLOAD
May 17 00:53:33.125000 audit: BPF prog-id=29 op=UNLOAD
May 17 00:53:33.125000 audit: BPF prog-id=36 op=LOAD
May 17 00:53:33.125000 audit: BPF prog-id=21 op=UNLOAD
May 17 00:53:33.125000 audit: BPF prog-id=37 op=LOAD
May 17 00:53:33.125000 audit: BPF prog-id=38 op=LOAD
May 17 00:53:33.125000 audit: BPF prog-id=22 op=UNLOAD
May 17 00:53:33.126000 audit: BPF prog-id=23 op=UNLOAD
May 17 00:53:33.140233 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:53:33.140521 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:53:33.141833 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:53:33.144325 systemd[1]: Starting modprobe@efi_pstore.service...
May 17 00:53:33.147125 systemd[1]: Starting modprobe@loop.service...
May 17 00:53:33.148122 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:53:33.148332 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:53:33.148556 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:53:33.152474 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:53:33.152628 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:53:33.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.154000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.154614 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:53:33.154742 systemd[1]: Finished modprobe@loop.service.
May 17 00:53:33.158459 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
May 17 00:53:33.158600 systemd[1]: Finished modprobe@efi_pstore.service.
May 17 00:53:33.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.160000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.160823 systemd[1]: Finished ensure-sysext.service.
May 17 00:53:33.162588 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:53:33.162898 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
May 17 00:53:33.164006 systemd[1]: Starting modprobe@dm_mod.service...
May 17 00:53:33.165960 systemd[1]: Starting modprobe@drm.service...
May 17 00:53:33.168038 systemd[1]: Starting modprobe@loop.service...
May 17 00:53:33.169193 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
May 17 00:53:33.169278 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:53:33.169346 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
May 17 00:53:33.169463 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
May 17 00:53:33.171861 systemd[1]: modprobe@loop.service: Deactivated successfully.
May 17 00:53:33.172040 systemd[1]: Finished modprobe@loop.service.
May 17 00:53:33.173322 systemd[1]: modprobe@drm.service: Deactivated successfully.
May 17 00:53:33.173445 systemd[1]: Finished modprobe@drm.service.
May 17 00:53:33.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.171000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.176821 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
May 17 00:53:33.176949 systemd[1]: Finished modprobe@dm_mod.service.
May 17 00:53:33.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.178313 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
May 17 00:53:33.301404 systemd-fsck[1277]: fsck.fat 4.2 (2021-01-31)
May 17 00:53:33.301404 systemd-fsck[1277]: /dev/sda1: 790 files, 120726/258078 clusters
May 17 00:53:33.303581 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
May 17 00:53:33.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.308532 systemd[1]: Mounting boot.mount...
May 17 00:53:33.322820 systemd[1]: Mounted boot.mount.
May 17 00:53:33.338943 systemd[1]: Finished systemd-boot-update.service.
May 17 00:53:33.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.440482 systemd-networkd[1196]: eth0: Gained IPv6LL
May 17 00:53:33.447071 systemd[1]: Finished systemd-networkd-wait-online.service.
May 17 00:53:33.449000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.555705 systemd[1]: Finished systemd-tmpfiles-setup.service.
May 17 00:53:33.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.559698 systemd[1]: Starting audit-rules.service...
May 17 00:53:33.563114 systemd[1]: Starting clean-ca-certificates.service...
May 17 00:53:33.567127 systemd[1]: Starting systemd-journal-catalog-update.service...
May 17 00:53:33.569000 audit: BPF prog-id=39 op=LOAD
May 17 00:53:33.571559 systemd[1]: Starting systemd-resolved.service...
May 17 00:53:33.573000 audit: BPF prog-id=40 op=LOAD
May 17 00:53:33.576889 systemd[1]: Starting systemd-timesyncd.service...
May 17 00:53:33.580130 systemd[1]: Starting systemd-update-utmp.service...
May 17 00:53:33.593000 audit[1386]: SYSTEM_BOOT pid=1386 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.596910 systemd[1]: Finished systemd-update-utmp.service.
May 17 00:53:33.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.619446 systemd[1]: Finished clean-ca-certificates.service.
May 17 00:53:33.620000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.621913 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
May 17 00:53:33.704559 systemd[1]: Started systemd-timesyncd.service.
May 17 00:53:33.705000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.707145 systemd[1]: Reached target time-set.target.
May 17 00:53:33.720788 systemd-resolved[1384]: Positive Trust Anchors:
May 17 00:53:33.720802 systemd-resolved[1384]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
May 17 00:53:33.720843 systemd-resolved[1384]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
May 17 00:53:33.724581 systemd[1]: Finished systemd-journal-catalog-update.service.
May 17 00:53:33.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.812482 systemd-resolved[1384]: Using system hostname 'ci-3510.3.7-n-34d8c498b2'.
May 17 00:53:33.814174 systemd[1]: Started systemd-resolved.service.
May 17 00:53:33.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
May 17 00:53:33.816580 systemd[1]: Reached target network.target.
May 17 00:53:33.818509 systemd[1]: Reached target network-online.target.
May 17 00:53:33.820724 systemd[1]: Reached target nss-lookup.target.
May 17 00:53:33.865000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
May 17 00:53:33.865000 audit[1401]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcbdc908a0 a2=420 a3=0 items=0 ppid=1380 pid=1401 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
May 17 00:53:33.866879 augenrules[1401]: No rules
May 17 00:53:33.865000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
May 17 00:53:33.867906 systemd[1]: Finished audit-rules.service.
May 17 00:53:33.873234 systemd-timesyncd[1385]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org).
May 17 00:53:33.873304 systemd-timesyncd[1385]: Initial clock synchronization to Sat 2025-05-17 00:53:33.874682 UTC.
May 17 00:53:39.058902 ldconfig[1267]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
May 17 00:53:39.071215 systemd[1]: Finished ldconfig.service.
May 17 00:53:39.076938 systemd[1]: Starting systemd-update-done.service...
May 17 00:53:39.089921 systemd[1]: Finished systemd-update-done.service.
May 17 00:53:39.092266 systemd[1]: Reached target sysinit.target.
May 17 00:53:39.094352 systemd[1]: Started motdgen.path.
May 17 00:53:39.096237 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
May 17 00:53:39.099120 systemd[1]: Started logrotate.timer.
May 17 00:53:39.100926 systemd[1]: Started mdadm.timer.
May 17 00:53:39.102634 systemd[1]: Started systemd-tmpfiles-clean.timer.
May 17 00:53:39.104641 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
May 17 00:53:39.104678 systemd[1]: Reached target paths.target.
May 17 00:53:39.106444 systemd[1]: Reached target timers.target.
May 17 00:53:39.108793 systemd[1]: Listening on dbus.socket.
May 17 00:53:39.111384 systemd[1]: Starting docker.socket...
May 17 00:53:39.115653 systemd[1]: Listening on sshd.socket.
May 17 00:53:39.117502 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:53:39.117934 systemd[1]: Listening on docker.socket.
May 17 00:53:39.119760 systemd[1]: Reached target sockets.target.
May 17 00:53:39.121572 systemd[1]: Reached target basic.target.
May 17 00:53:39.123339 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
May 17 00:53:39.123390 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
May 17 00:53:39.124315 systemd[1]: Starting containerd.service...
May 17 00:53:39.127531 systemd[1]: Starting dbus.service...
May 17 00:53:39.130273 systemd[1]: Starting enable-oem-cloudinit.service...
May 17 00:53:39.133636 systemd[1]: Starting extend-filesystems.service...
May 17 00:53:39.135553 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
May 17 00:53:39.150165 systemd[1]: Starting kubelet.service...
May 17 00:53:39.152809 systemd[1]: Starting motdgen.service...
May 17 00:53:39.155643 systemd[1]: Started nvidia.service.
May 17 00:53:39.159304 systemd[1]: Starting prepare-helm.service...
May 17 00:53:39.162132 systemd[1]: Starting ssh-key-proc-cmdline.service...
May 17 00:53:39.165759 systemd[1]: Starting sshd-keygen.service...
May 17 00:53:39.172565 systemd[1]: Starting systemd-logind.service...
May 17 00:53:39.174343 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
May 17 00:53:39.174461 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
May 17 00:53:39.175007 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
May 17 00:53:39.175843 systemd[1]: Starting update-engine.service...
May 17 00:53:39.179966 systemd[1]: Starting update-ssh-keys-after-ignition.service...
May 17 00:53:39.188226 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
May 17 00:53:39.188950 systemd[1]: Finished ssh-key-proc-cmdline.service.
May 17 00:53:39.220082 jq[1411]: false
May 17 00:53:39.219726 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
May 17 00:53:39.219927 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
May 17 00:53:39.225225 jq[1427]: true
May 17 00:53:39.243228 extend-filesystems[1412]: Found loop1
May 17 00:53:39.243228 extend-filesystems[1412]: Found sda
May 17 00:53:39.248311 extend-filesystems[1412]: Found sda1
May 17 00:53:39.248311 extend-filesystems[1412]: Found sda2
May 17 00:53:39.248311 extend-filesystems[1412]: Found sda3
May 17 00:53:39.248311 extend-filesystems[1412]: Found usr
May 17 00:53:39.248311 extend-filesystems[1412]: Found sda4
May 17 00:53:39.248311 extend-filesystems[1412]: Found sda6
May 17 00:53:39.248311 extend-filesystems[1412]: Found sda7
May 17 00:53:39.248311 extend-filesystems[1412]: Found sda9
May 17 00:53:39.248311 extend-filesystems[1412]: Checking size of /dev/sda9
May 17 00:53:39.269256 jq[1433]: true
May 17 00:53:39.281393 tar[1430]: linux-amd64/helm
May 17 00:53:39.296426 systemd[1]: motdgen.service: Deactivated successfully.
May 17 00:53:39.296632 systemd[1]: Finished motdgen.service.
May 17 00:53:39.339968 systemd-logind[1424]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
May 17 00:53:39.340632 systemd-logind[1424]: New seat seat0.
May 17 00:53:39.345740 extend-filesystems[1412]: Old size kept for /dev/sda9
May 17 00:53:39.350564 extend-filesystems[1412]: Found sr0
May 17 00:53:39.352134 systemd[1]: extend-filesystems.service: Deactivated successfully.
May 17 00:53:39.352298 systemd[1]: Finished extend-filesystems.service.
May 17 00:53:39.398082 env[1438]: time="2025-05-17T00:53:39.397563774Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
May 17 00:53:39.458695 dbus-daemon[1410]: [system] SELinux support is enabled
May 17 00:53:39.459329 systemd[1]: Started dbus.service.
May 17 00:53:39.464132 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
May 17 00:53:39.464170 systemd[1]: Reached target system-config.target.
May 17 00:53:39.466662 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
May 17 00:53:39.466687 systemd[1]: Reached target user-config.target.
May 17 00:53:39.469995 systemd[1]: Started systemd-logind.service.
May 17 00:53:39.470260 dbus-daemon[1410]: [system] Successfully activated service 'org.freedesktop.systemd1'
May 17 00:53:39.479384 bash[1464]: Updated "/home/core/.ssh/authorized_keys"
May 17 00:53:39.481292 systemd[1]: Finished update-ssh-keys-after-ignition.service.
May 17 00:53:39.503991 env[1438]: time="2025-05-17T00:53:39.503335916Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
May 17 00:53:39.511583 env[1438]: time="2025-05-17T00:53:39.511547886Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
May 17 00:53:39.515084 env[1438]: time="2025-05-17T00:53:39.514159767Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.182-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
May 17 00:53:39.515084 env[1438]: time="2025-05-17T00:53:39.514201370Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
May 17 00:53:39.515084 env[1438]: time="2025-05-17T00:53:39.514481490Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:53:39.515084 env[1438]: time="2025-05-17T00:53:39.514504691Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
May 17 00:53:39.515084 env[1438]: time="2025-05-17T00:53:39.514523592Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
May 17 00:53:39.515084 env[1438]: time="2025-05-17T00:53:39.514537193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
May 17 00:53:39.515084 env[1438]: time="2025-05-17T00:53:39.514631400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
May 17 00:53:39.515084 env[1438]: time="2025-05-17T00:53:39.514868116Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
May 17 00:53:39.515084 env[1438]: time="2025-05-17T00:53:39.515058030Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
May 17 00:53:39.515084 env[1438]: time="2025-05-17T00:53:39.515078731Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
May 17 00:53:39.515508 env[1438]: time="2025-05-17T00:53:39.515138135Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
May 17 00:53:39.515508 env[1438]: time="2025-05-17T00:53:39.515153236Z" level=info msg="metadata content store policy set" policy=shared
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.529921861Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.529964464Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.529982466Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.530028969Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.530047970Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.530067271Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.530130376Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.530150577Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.530168678Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.530186680Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.530204481Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.530221482Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.530328690Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
May 17 00:53:39.530779 env[1438]: time="2025-05-17T00:53:39.530438697Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.530825324Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.530874627Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.530895329Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.530960533Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.530979435Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.530997436Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.531065541Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.531083142Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.531102043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.531118044Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.531134746Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.531154247Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.531297657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
May 17 00:53:39.531314 env[1438]: time="2025-05-17T00:53:39.531317658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
May 17 00:53:39.531847 env[1438]: time="2025-05-17T00:53:39.531335859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
May 17 00:53:39.531847 env[1438]: time="2025-05-17T00:53:39.531352361Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
May 17 00:53:39.531847 env[1438]: time="2025-05-17T00:53:39.531385063Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
May 17 00:53:39.531847 env[1438]: time="2025-05-17T00:53:39.531402264Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
May 17 00:53:39.531847 env[1438]: time="2025-05-17T00:53:39.531425966Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
May 17 00:53:39.531847 env[1438]: time="2025-05-17T00:53:39.531467169Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
May 17 00:53:39.532064 env[1438]: time="2025-05-17T00:53:39.531738387Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
May 17 00:53:39.532064 env[1438]: time="2025-05-17T00:53:39.531813493Z" level=info msg="Connect containerd service"
May 17 00:53:39.532064 env[1438]: time="2025-05-17T00:53:39.531857696Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
May 17 00:53:39.570381 env[1438]: time="2025-05-17T00:53:39.532627549Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
May 17 00:53:39.570381 env[1438]: time="2025-05-17T00:53:39.532911869Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
May 17 00:53:39.570381 env[1438]: time="2025-05-17T00:53:39.532961972Z" level=info msg=serving... address=/run/containerd/containerd.sock
May 17 00:53:39.570381 env[1438]: time="2025-05-17T00:53:39.535821071Z" level=info msg="containerd successfully booted in 0.144521s"
May 17 00:53:39.570381 env[1438]: time="2025-05-17T00:53:39.536264002Z" level=info msg="Start subscribing containerd event"
May 17 00:53:39.570381 env[1438]: time="2025-05-17T00:53:39.536327406Z" level=info msg="Start recovering state"
May 17 00:53:39.570381 env[1438]: time="2025-05-17T00:53:39.536431313Z" level=info msg="Start event monitor"
May 17 00:53:39.570381 env[1438]: time="2025-05-17T00:53:39.536452315Z" level=info msg="Start snapshots syncer"
May 17 00:53:39.570381 env[1438]: time="2025-05-17T00:53:39.536465416Z" level=info msg="Start cni network conf syncer for default"
May 17 00:53:39.570381 env[1438]: time="2025-05-17T00:53:39.536481517Z" level=info msg="Start streaming server"
May 17 00:53:39.533083 systemd[1]: Started containerd.service.
May 17 00:53:39.571818 systemd[1]: nvidia.service: Deactivated successfully.
May 17 00:53:40.074141 update_engine[1425]: I0517 00:53:40.073557 1425 main.cc:92] Flatcar Update Engine starting
May 17 00:53:40.134049 systemd[1]: Started update-engine.service.
May 17 00:53:40.142501 update_engine[1425]: I0517 00:53:40.136502 1425 update_check_scheduler.cc:74] Next update check in 11m29s
May 17 00:53:40.139293 systemd[1]: Started locksmithd.service.
May 17 00:53:40.219766 tar[1430]: linux-amd64/LICENSE
May 17 00:53:40.219766 tar[1430]: linux-amd64/README.md
May 17 00:53:40.226544 systemd[1]: Finished prepare-helm.service.
May 17 00:53:40.764520 systemd[1]: Started kubelet.service.
May 17 00:53:41.093844 sshd_keygen[1432]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
May 17 00:53:41.121509 systemd[1]: Finished sshd-keygen.service.
May 17 00:53:41.125452 systemd[1]: Starting issuegen.service...
May 17 00:53:41.129961 systemd[1]: Started waagent.service.
May 17 00:53:41.138740 systemd[1]: issuegen.service: Deactivated successfully.
May 17 00:53:41.138920 systemd[1]: Finished issuegen.service.
May 17 00:53:41.142303 systemd[1]: Starting systemd-user-sessions.service...
May 17 00:53:41.162557 systemd[1]: Finished systemd-user-sessions.service.
May 17 00:53:41.166739 systemd[1]: Started getty@tty1.service.
May 17 00:53:41.170647 systemd[1]: Started serial-getty@ttyS0.service.
May 17 00:53:41.173093 systemd[1]: Reached target getty.target.
May 17 00:53:41.175125 systemd[1]: Reached target multi-user.target.
May 17 00:53:41.179200 systemd[1]: Starting systemd-update-utmp-runlevel.service...
May 17 00:53:41.197166 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
May 17 00:53:41.197340 systemd[1]: Finished systemd-update-utmp-runlevel.service.
May 17 00:53:41.202171 systemd[1]: Startup finished in 865ms (firmware) + 23.767s (loader) + 971ms (kernel) + 13.255s (initrd) + 24.613s (userspace) = 1min 3.473s.
May 17 00:53:41.453494 kubelet[1520]: E0517 00:53:41.453387 1520 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
May 17 00:53:41.455071 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
May 17 00:53:41.455226 systemd[1]: kubelet.service: Failed with result 'exit-code'.
May 17 00:53:41.455520 systemd[1]: kubelet.service: Consumed 1.129s CPU time.
May 17 00:53:41.477812 locksmithd[1512]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
May 17 00:53:41.635355 login[1539]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 17 00:53:41.636889 login[1540]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
May 17 00:53:41.680880 systemd[1]: Created slice user-500.slice.
May 17 00:53:41.682276 systemd[1]: Starting user-runtime-dir@500.service...
May 17 00:53:41.686284 systemd-logind[1424]: New session 1 of user core.
May 17 00:53:41.691186 systemd-logind[1424]: New session 2 of user core.
May 17 00:53:41.695661 systemd[1]: Finished user-runtime-dir@500.service.
May 17 00:53:41.697360 systemd[1]: Starting user@500.service...
May 17 00:53:41.700900 (systemd)[1543]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
May 17 00:53:41.999358 systemd[1543]: Queued start job for default target default.target.
May 17 00:53:41.999911 systemd[1543]: Reached target paths.target.
May 17 00:53:41.999940 systemd[1543]: Reached target sockets.target.
May 17 00:53:41.999957 systemd[1543]: Reached target timers.target.
May 17 00:53:41.999972 systemd[1543]: Reached target basic.target.
May 17 00:53:42.000086 systemd[1]: Started user@500.service.
May 17 00:53:42.001286 systemd[1]: Started session-1.scope.
May 17 00:53:42.002115 systemd[1]: Started session-2.scope.
May 17 00:53:42.003038 systemd[1543]: Reached target default.target.
May 17 00:53:42.003226 systemd[1543]: Startup finished in 296ms.
May 17 00:53:47.389121 waagent[1534]: 2025-05-17T00:53:47.389007Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
May 17 00:53:47.393707 waagent[1534]: 2025-05-17T00:53:47.393617Z INFO Daemon Daemon OS: flatcar 3510.3.7
May 17 00:53:47.396729 waagent[1534]: 2025-05-17T00:53:47.396654Z INFO Daemon Daemon Python: 3.9.16
May 17 00:53:47.399660 waagent[1534]: 2025-05-17T00:53:47.399580Z INFO Daemon Daemon Run daemon
May 17 00:53:47.402881 waagent[1534]: 2025-05-17T00:53:47.402813Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.7'
May 17 00:53:47.416648 waagent[1534]: 2025-05-17T00:53:47.416525Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
May 17 00:53:47.425703 waagent[1534]: 2025-05-17T00:53:47.425599Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
May 17 00:53:47.431488 waagent[1534]: 2025-05-17T00:53:47.431424Z INFO Daemon Daemon cloud-init is enabled: False
May 17 00:53:47.441709 waagent[1534]: 2025-05-17T00:53:47.432862Z INFO Daemon Daemon Using waagent for provisioning
May 17 00:53:47.441709 waagent[1534]: 2025-05-17T00:53:47.434319Z INFO Daemon Daemon Activate resource disk
May 17 00:53:47.441709 waagent[1534]: 2025-05-17T00:53:47.435432Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
May 17 00:53:47.444154 waagent[1534]: 2025-05-17T00:53:47.444093Z INFO Daemon Daemon Found device: None
May 17 00:53:47.471313 waagent[1534]: 2025-05-17T00:53:47.445235Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
May 17 00:53:47.471313 waagent[1534]: 2025-05-17T00:53:47.446045Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
May 17 00:53:47.471313 waagent[1534]: 2025-05-17T00:53:47.447302Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 17 00:53:47.471313 waagent[1534]: 2025-05-17T00:53:47.448226Z INFO Daemon Daemon Running default provisioning handler
May 17 00:53:47.471313 waagent[1534]: 2025-05-17T00:53:47.457353Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
May 17 00:53:47.471313 waagent[1534]: 2025-05-17T00:53:47.459839Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
May 17 00:53:47.471313 waagent[1534]: 2025-05-17T00:53:47.460609Z INFO Daemon Daemon cloud-init is enabled: False
May 17 00:53:47.471313 waagent[1534]: 2025-05-17T00:53:47.461393Z INFO Daemon Daemon Copying ovf-env.xml
May 17 00:53:47.614394 waagent[1534]: 2025-05-17T00:53:47.610270Z INFO Daemon Daemon Successfully mounted dvd
May 17 00:53:47.679603 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
May 17 00:53:47.694934 waagent[1534]: 2025-05-17T00:53:47.694821Z INFO Daemon Daemon Detect protocol endpoint
May 17 00:53:47.697864 waagent[1534]: 2025-05-17T00:53:47.697797Z INFO Daemon Daemon Clean protocol and wireserver endpoint
May 17 00:53:47.700814 waagent[1534]: 2025-05-17T00:53:47.700746Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
May 17 00:53:47.704167 waagent[1534]: 2025-05-17T00:53:47.704105Z INFO Daemon Daemon Test for route to 168.63.129.16
May 17 00:53:47.706950 waagent[1534]: 2025-05-17T00:53:47.706887Z INFO Daemon Daemon Route to 168.63.129.16 exists
May 17 00:53:47.709432 waagent[1534]: 2025-05-17T00:53:47.709360Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
May 17 00:53:47.801837 waagent[1534]: 2025-05-17T00:53:47.801767Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
May 17 00:53:47.809444 waagent[1534]: 2025-05-17T00:53:47.803631Z INFO Daemon Daemon Wire protocol version:2012-11-30
May 17 00:53:47.809444 waagent[1534]: 2025-05-17T00:53:47.804352Z INFO Daemon Daemon Server preferred version:2015-04-05
May 17 00:53:48.120562 waagent[1534]: 2025-05-17T00:53:48.120336Z INFO Daemon Daemon Initializing goal state during protocol detection
May 17 00:53:48.132072 waagent[1534]: 2025-05-17T00:53:48.131992Z INFO Daemon Daemon Forcing an update of the goal state..
May 17 00:53:48.134985 waagent[1534]: 2025-05-17T00:53:48.134918Z INFO Daemon Daemon Fetching goal state [incarnation 1]
May 17 00:53:48.255430 waagent[1534]: 2025-05-17T00:53:48.255292Z INFO Daemon Daemon Found private key matching thumbprint 3BD111E6DAEE29B30B66E3FBE9580F014804CCD1
May 17 00:53:48.262333 waagent[1534]: 2025-05-17T00:53:48.256954Z INFO Daemon Daemon Fetch goal state completed
May 17 00:53:48.300915 waagent[1534]: 2025-05-17T00:53:48.300829Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: b1bd52b9-71f6-4aeb-bb4c-de74e9c814a0 New eTag: 13815020493468241619]
May 17 00:53:48.308178 waagent[1534]: 2025-05-17T00:53:48.302512Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
May 17 00:53:48.314244 waagent[1534]: 2025-05-17T00:53:48.314189Z INFO Daemon Daemon Starting provisioning
May 17 00:53:48.320417 waagent[1534]: 2025-05-17T00:53:48.315320Z INFO Daemon Daemon Handle ovf-env.xml.
May 17 00:53:48.320417 waagent[1534]: 2025-05-17T00:53:48.316161Z INFO Daemon Daemon Set hostname [ci-3510.3.7-n-34d8c498b2]
May 17 00:53:48.331587 waagent[1534]: 2025-05-17T00:53:48.331488Z INFO Daemon Daemon Publish hostname [ci-3510.3.7-n-34d8c498b2]
May 17 00:53:48.338886 waagent[1534]: 2025-05-17T00:53:48.332976Z INFO Daemon Daemon Examine /proc/net/route for primary interface
May 17 00:53:48.338886 waagent[1534]: 2025-05-17T00:53:48.333871Z INFO Daemon Daemon Primary interface is [eth0]
May 17 00:53:48.347118 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
May 17 00:53:48.347386 systemd[1]: Stopped systemd-networkd-wait-online.service.
May 17 00:53:48.347460 systemd[1]: Stopping systemd-networkd-wait-online.service...
May 17 00:53:48.347746 systemd[1]: Stopping systemd-networkd.service...
May 17 00:53:48.353418 systemd-networkd[1196]: eth0: DHCPv6 lease lost
May 17 00:53:48.354783 systemd[1]: systemd-networkd.service: Deactivated successfully.
May 17 00:53:48.355019 systemd[1]: Stopped systemd-networkd.service.
May 17 00:53:48.358077 systemd[1]: Starting systemd-networkd.service...
May 17 00:53:48.389773 systemd-networkd[1586]: enP420s1: Link UP
May 17 00:53:48.389781 systemd-networkd[1586]: enP420s1: Gained carrier
May 17 00:53:48.391110 systemd-networkd[1586]: eth0: Link UP
May 17 00:53:48.391118 systemd-networkd[1586]: eth0: Gained carrier
May 17 00:53:48.391581 systemd-networkd[1586]: lo: Link UP
May 17 00:53:48.391589 systemd-networkd[1586]: lo: Gained carrier
May 17 00:53:48.391898 systemd-networkd[1586]: eth0: Gained IPv6LL
May 17 00:53:48.392472 systemd-networkd[1586]: Enumeration completed
May 17 00:53:48.392567 systemd[1]: Started systemd-networkd.service.
May 17 00:53:48.394659 systemd[1]: Starting systemd-networkd-wait-online.service...
May 17 00:53:48.397401 systemd-networkd[1586]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
May 17 00:53:48.402463 waagent[1534]: 2025-05-17T00:53:48.398625Z INFO Daemon Daemon Create user account if not exists
May 17 00:53:48.402463 waagent[1534]: 2025-05-17T00:53:48.400202Z INFO Daemon Daemon User core already exists, skip useradd
May 17 00:53:48.402463 waagent[1534]: 2025-05-17T00:53:48.400959Z INFO Daemon Daemon Configure sudoer
May 17 00:53:48.403030 waagent[1534]: 2025-05-17T00:53:48.402959Z INFO Daemon Daemon Configure sshd
May 17 00:53:48.404090 waagent[1534]: 2025-05-17T00:53:48.404036Z INFO Daemon Daemon Deploy ssh public key.
May 17 00:53:48.441466 systemd-networkd[1586]: eth0: DHCPv4 address 10.200.4.13/24, gateway 10.200.4.1 acquired from 168.63.129.16
May 17 00:53:48.446887 systemd[1]: Finished systemd-networkd-wait-online.service.
May 17 00:53:49.532910 waagent[1534]: 2025-05-17T00:53:49.532820Z INFO Daemon Daemon Provisioning complete
May 17 00:53:49.546917 waagent[1534]: 2025-05-17T00:53:49.546840Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
May 17 00:53:49.553536 waagent[1534]: 2025-05-17T00:53:49.548169Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
May 17 00:53:49.553536 waagent[1534]: 2025-05-17T00:53:49.549760Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
May 17 00:53:49.812661 waagent[1592]: 2025-05-17T00:53:49.812498Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
May 17 00:53:49.813352 waagent[1592]: 2025-05-17T00:53:49.813287Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:53:49.813517 waagent[1592]: 2025-05-17T00:53:49.813459Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:53:49.824553 waagent[1592]: 2025-05-17T00:53:49.824479Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
May 17 00:53:49.824707 waagent[1592]: 2025-05-17T00:53:49.824653Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
May 17 00:53:49.873167 waagent[1592]: 2025-05-17T00:53:49.873062Z INFO ExtHandler ExtHandler Found private key matching thumbprint 3BD111E6DAEE29B30B66E3FBE9580F014804CCD1
May 17 00:53:49.873459 waagent[1592]: 2025-05-17T00:53:49.873400Z INFO ExtHandler ExtHandler Fetch goal state completed
May 17 00:53:49.885861 waagent[1592]: 2025-05-17T00:53:49.885802Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 48dc0af8-eeda-4f37-b430-326ff8548296 New eTag: 13815020493468241619]
May 17 00:53:49.886353 waagent[1592]: 2025-05-17T00:53:49.886293Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
May 17 00:53:49.988435 waagent[1592]: 2025-05-17T00:53:49.988268Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
May 17 00:53:50.000657 waagent[1592]: 2025-05-17T00:53:50.000581Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1592
May 17 00:53:50.003934 waagent[1592]: 2025-05-17T00:53:50.003867Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk']
May 17 00:53:50.005087 waagent[1592]: 2025-05-17T00:53:50.005028Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
May 17 00:53:50.119442 waagent[1592]: 2025-05-17T00:53:50.119291Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
May 17 00:53:50.119856 waagent[1592]: 2025-05-17T00:53:50.119790Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
May 17 00:53:50.127808 waagent[1592]: 2025-05-17T00:53:50.127751Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
May 17 00:53:50.128268 waagent[1592]: 2025-05-17T00:53:50.128205Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
May 17 00:53:50.129309 waagent[1592]: 2025-05-17T00:53:50.129241Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
May 17 00:53:50.130618 waagent[1592]: 2025-05-17T00:53:50.130558Z INFO ExtHandler ExtHandler Starting env monitor service.
May 17 00:53:50.131195 waagent[1592]: 2025-05-17T00:53:50.131140Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:53:50.131626 waagent[1592]: 2025-05-17T00:53:50.131569Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
May 17 00:53:50.132062 waagent[1592]: 2025-05-17T00:53:50.132008Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:53:50.132187 waagent[1592]: 2025-05-17T00:53:50.132113Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:53:50.132782 waagent[1592]: 2025-05-17T00:53:50.132722Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
May 17 00:53:50.133331 waagent[1592]: 2025-05-17T00:53:50.133273Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
May 17 00:53:50.133470 waagent[1592]: 2025-05-17T00:53:50.133415Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:53:50.134132 waagent[1592]: 2025-05-17T00:53:50.134078Z INFO EnvHandler ExtHandler Configure routes
May 17 00:53:50.134408 waagent[1592]: 2025-05-17T00:53:50.134337Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
May 17 00:53:50.134623 waagent[1592]: 2025-05-17T00:53:50.134570Z INFO EnvHandler ExtHandler Gateway:None
May 17 00:53:50.135037 waagent[1592]: 2025-05-17T00:53:50.134978Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
May 17 00:53:50.135037 waagent[1592]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
May 17 00:53:50.135037 waagent[1592]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0
May 17 00:53:50.135037 waagent[1592]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
May 17 00:53:50.135037 waagent[1592]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
May 17 00:53:50.135037 waagent[1592]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:53:50.135037 waagent[1592]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0
May 17 00:53:50.135457 waagent[1592]: 2025-05-17T00:53:50.135394Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
May 17 00:53:50.135598 waagent[1592]: 2025-05-17T00:53:50.135542Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
May 17 00:53:50.136236 waagent[1592]: 2025-05-17T00:53:50.136170Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
May 17 00:53:50.137248 waagent[1592]: 2025-05-17T00:53:50.137194Z INFO EnvHandler ExtHandler Routes:None
May 17 00:53:50.150510 waagent[1592]: 2025-05-17T00:53:50.150454Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
May 17 00:53:50.151274 waagent[1592]: 2025-05-17T00:53:50.151222Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
May 17 00:53:50.153644 waagent[1592]: 2025-05-17T00:53:50.153589Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
May 17 00:53:50.195006 waagent[1592]: 2025-05-17T00:53:50.194933Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
May 17 00:53:50.201901 waagent[1592]: 2025-05-17T00:53:50.201835Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1586'
May 17 00:53:50.273480 waagent[1592]: 2025-05-17T00:53:50.272118Z INFO MonitorHandler ExtHandler Network interfaces:
May 17 00:53:50.273480 waagent[1592]: Executing ['ip', '-a', '-o', 'link']:
May 17 00:53:50.273480 waagent[1592]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
May 17 00:53:50.273480 waagent[1592]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1e:4c:ae brd ff:ff:ff:ff:ff:ff
May 17 00:53:50.273480 waagent[1592]: 3: enP420s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1e:4c:ae brd ff:ff:ff:ff:ff:ff\ altname enP420p0s2
May 17 00:53:50.273480 waagent[1592]: Executing ['ip', '-4', '-a', '-o', 'address']:
May 17 00:53:50.273480 waagent[1592]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
May 17 00:53:50.273480 waagent[1592]: 2: eth0 inet 10.200.4.13/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever
May 17 00:53:50.273480 waagent[1592]: Executing ['ip', '-6', '-a', '-o', 'address']:
May 17 00:53:50.273480 waagent[1592]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
May 17 00:53:50.273480 waagent[1592]: 2: eth0 inet6 fe80::7e1e:52ff:fe1e:4cae/64 scope link \ valid_lft forever preferred_lft forever
May 17 00:53:50.460891 waagent[1592]: 2025-05-17T00:53:50.460823Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.13.1.1 -- exiting
May 17 00:53:50.553663 waagent[1534]: 2025-05-17T00:53:50.553510Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
May 17 00:53:50.558388 waagent[1534]: 2025-05-17T00:53:50.558330Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.13.1.1 to be the latest agent
May 17 00:53:51.643476 waagent[1626]: 2025-05-17T00:53:51.643356Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.13.1.1)
May 17 00:53:51.644696 waagent[1626]: 2025-05-17T00:53:51.644630Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.7
May 17 00:53:51.644838 waagent[1626]: 2025-05-17T00:53:51.644783Z INFO ExtHandler ExtHandler Python: 3.9.16
May 17 00:53:51.644982 waagent[1626]: 2025-05-17T00:53:51.644935Z INFO ExtHandler ExtHandler CPU Arch: x86_64
May 17 00:53:51.660055 waagent[1626]: 2025-05-17T00:53:51.659962Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.7; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; Arch: x86_64; systemd: True; systemd_version: systemd 252 (252); LISDrivers: Absent; logrotate: logrotate 3.20.1;
May 17 00:53:51.660449 waagent[1626]: 2025-05-17T00:53:51.660394Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
May 17 00:53:51.660613 waagent[1626]: 2025-05-17T00:53:51.660565Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
May 17 00:53:51.660832 waagent[1626]: 2025-05-17T00:53:51.660783Z INFO ExtHandler ExtHandler Initializing the goal state...
May 17 00:53:51.670935 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
May 17 00:53:51.674694 waagent[1626]: 2025-05-17T00:53:51.674002Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
May 17 00:53:51.671214 systemd[1]: Stopped kubelet.service.
May 17 00:53:51.671267 systemd[1]: kubelet.service: Consumed 1.129s CPU time.
May 17 00:53:51.672971 systemd[1]: Starting kubelet.service...
May 17 00:53:51.683896 waagent[1626]: 2025-05-17T00:53:51.683832Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.166 May 17 00:53:51.685120 waagent[1626]: 2025-05-17T00:53:51.685062Z INFO ExtHandler May 17 00:53:51.685389 waagent[1626]: 2025-05-17T00:53:51.685317Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 0dbebba2-16de-4e46-ad01-b3ea23a7f68c eTag: 13815020493468241619 source: Fabric] May 17 00:53:51.686417 waagent[1626]: 2025-05-17T00:53:51.686335Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. May 17 00:53:51.688018 waagent[1626]: 2025-05-17T00:53:51.687951Z INFO ExtHandler May 17 00:53:51.688286 waagent[1626]: 2025-05-17T00:53:51.688227Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 17 00:53:51.696559 waagent[1626]: 2025-05-17T00:53:51.696497Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 17 00:53:51.697653 waagent[1626]: 2025-05-17T00:53:51.697585Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required May 17 00:53:51.718381 waagent[1626]: 2025-05-17T00:53:51.718304Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. May 17 00:53:51.795361 systemd[1]: Started kubelet.service. May 17 00:53:51.802642 waagent[1626]: 2025-05-17T00:53:51.802510Z INFO ExtHandler Downloaded certificate {'thumbprint': '3BD111E6DAEE29B30B66E3FBE9580F014804CCD1', 'hasPrivateKey': True} May 17 00:53:51.804491 waagent[1626]: 2025-05-17T00:53:51.804420Z INFO ExtHandler Fetch goal state from WireServer completed May 17 00:53:51.805678 waagent[1626]: 2025-05-17T00:53:51.805614Z INFO ExtHandler ExtHandler Goal state initialization completed. 
May 17 00:53:51.824344 waagent[1626]: 2025-05-17T00:53:51.824261Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.0.15 3 Sep 2024 (Library: OpenSSL 3.0.15 3 Sep 2024) May 17 00:53:51.832483 waagent[1626]: 2025-05-17T00:53:51.832395Z INFO ExtHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:53:51.836097 waagent[1626]: 2025-05-17T00:53:51.836005Z INFO ExtHandler ExtHandler Did not find a legacy firewall rule: ['iptables', '-w', '-t', 'security', '-C', 'OUTPUT', '-d', '168.63.129.16', '-p', 'tcp', '-m', 'conntrack', '--ctstate', 'INVALID,NEW', '-j', 'ACCEPT'] May 17 00:53:51.836293 waagent[1626]: 2025-05-17T00:53:51.836241Z INFO ExtHandler ExtHandler Checking state of the firewall May 17 00:53:52.437354 waagent[1626]: 2025-05-17T00:53:52.437229Z WARNING ExtHandler ExtHandler The firewall rules for Azure Fabric are not setup correctly (the environment thread will fix it): The following rules are missing: ['ACCEPT DNS', 'DROP'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n', 'iptables: Bad rule (does a matching rule exist in that chain?).\n']. 
Current state: May 17 00:53:52.437354 waagent[1626]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:53:52.437354 waagent[1626]: pkts bytes target prot opt in out source destination May 17 00:53:52.437354 waagent[1626]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:53:52.437354 waagent[1626]: pkts bytes target prot opt in out source destination May 17 00:53:52.437354 waagent[1626]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:53:52.437354 waagent[1626]: pkts bytes target prot opt in out source destination May 17 00:53:52.437354 waagent[1626]: 54 7806 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:53:52.438517 waagent[1626]: 2025-05-17T00:53:52.438446Z INFO ExtHandler ExtHandler Setting up persistent firewall rules May 17 00:53:52.441075 waagent[1626]: 2025-05-17T00:53:52.440976Z INFO ExtHandler ExtHandler The firewalld service is not present on the system May 17 00:53:52.441315 waagent[1626]: 2025-05-17T00:53:52.441262Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 17 00:53:52.480567 waagent[1626]: 2025-05-17T00:53:52.480440Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 17 00:53:52.494693 waagent[1626]: 2025-05-17T00:53:52.494619Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now May 17 00:53:52.496161 waagent[1626]: 2025-05-17T00:53:52.495173Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' May 17 00:53:52.506935 waagent[1626]: 2025-05-17T00:53:52.506864Z INFO ExtHandler ExtHandler WALinuxAgent-2.13.1.1 running as process 1626 May 17 00:53:52.511250 waagent[1626]: 2025-05-17T00:53:52.511178Z INFO ExtHandler ExtHandler [CGI] Cgroups is not currently supported on ['flatcar', '3510.3.7', '', 'Flatcar Container Linux by Kinvolk'] May 17 00:53:52.512255 waagent[1626]: 2025-05-17T00:53:52.512150Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case cgroup usage went from enabled to disabled May 17 00:53:52.512522 kubelet[1640]: E0517 00:53:52.512490 1640 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:53:52.513411 waagent[1626]: 2025-05-17T00:53:52.513329Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 17 00:53:52.516534 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:53:52.516687 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:53:52.517501 waagent[1626]: 2025-05-17T00:53:52.517434Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 17 00:53:52.518746 waagent[1626]: 2025-05-17T00:53:52.518685Z INFO ExtHandler ExtHandler Starting env monitor service. 
May 17 00:53:52.519137 waagent[1626]: 2025-05-17T00:53:52.519081Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:53:52.519290 waagent[1626]: 2025-05-17T00:53:52.519242Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:53:52.519865 waagent[1626]: 2025-05-17T00:53:52.519805Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 17 00:53:52.520145 waagent[1626]: 2025-05-17T00:53:52.520090Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 17 00:53:52.520145 waagent[1626]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 17 00:53:52.520145 waagent[1626]: eth0 00000000 0104C80A 0003 0 0 1024 00000000 0 0 0 May 17 00:53:52.520145 waagent[1626]: eth0 0004C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 17 00:53:52.520145 waagent[1626]: eth0 0104C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 17 00:53:52.520145 waagent[1626]: eth0 10813FA8 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:53:52.520145 waagent[1626]: eth0 FEA9FEA9 0104C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 17 00:53:52.522513 waagent[1626]: 2025-05-17T00:53:52.522402Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 17 00:53:52.523526 waagent[1626]: 2025-05-17T00:53:52.523400Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 17 00:53:52.525597 waagent[1626]: 2025-05-17T00:53:52.525480Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
May 17 00:53:52.526324 waagent[1626]: 2025-05-17T00:53:52.526248Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 17 00:53:52.527066 waagent[1626]: 2025-05-17T00:53:52.527006Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 17 00:53:52.527891 waagent[1626]: 2025-05-17T00:53:52.527828Z INFO EnvHandler ExtHandler Configure routes May 17 00:53:52.528033 waagent[1626]: 2025-05-17T00:53:52.527984Z INFO EnvHandler ExtHandler Gateway:None May 17 00:53:52.528166 waagent[1626]: 2025-05-17T00:53:52.528120Z INFO EnvHandler ExtHandler Routes:None May 17 00:53:52.528653 waagent[1626]: 2025-05-17T00:53:52.528596Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 17 00:53:52.528762 waagent[1626]: 2025-05-17T00:53:52.528705Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. May 17 00:53:52.531701 waagent[1626]: 2025-05-17T00:53:52.531416Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 17 00:53:52.533830 waagent[1626]: 2025-05-17T00:53:52.533771Z INFO MonitorHandler ExtHandler Network interfaces: May 17 00:53:52.533830 waagent[1626]: Executing ['ip', '-a', '-o', 'link']: May 17 00:53:52.533830 waagent[1626]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 17 00:53:52.533830 waagent[1626]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1e:4c:ae brd ff:ff:ff:ff:ff:ff May 17 00:53:52.533830 waagent[1626]: 3: enP420s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:1e:52:1e:4c:ae brd ff:ff:ff:ff:ff:ff\ altname enP420p0s2 May 17 00:53:52.533830 waagent[1626]: Executing ['ip', '-4', '-a', '-o', 'address']: May 17 00:53:52.533830 waagent[1626]: 1: lo inet 127.0.0.1/8 scope host lo\ 
valid_lft forever preferred_lft forever May 17 00:53:52.533830 waagent[1626]: 2: eth0 inet 10.200.4.13/24 metric 1024 brd 10.200.4.255 scope global eth0\ valid_lft forever preferred_lft forever May 17 00:53:52.533830 waagent[1626]: Executing ['ip', '-6', '-a', '-o', 'address']: May 17 00:53:52.533830 waagent[1626]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever May 17 00:53:52.533830 waagent[1626]: 2: eth0 inet6 fe80::7e1e:52ff:fe1e:4cae/64 scope link \ valid_lft forever preferred_lft forever May 17 00:53:52.555907 waagent[1626]: 2025-05-17T00:53:52.555839Z INFO ExtHandler ExtHandler Downloading agent manifest May 17 00:53:52.577630 waagent[1626]: 2025-05-17T00:53:52.577566Z INFO ExtHandler ExtHandler May 17 00:53:52.578077 waagent[1626]: 2025-05-17T00:53:52.578001Z INFO EnvHandler ExtHandler Using iptables [version 1.8.8] to manage firewall rules May 17 00:53:52.580593 waagent[1626]: 2025-05-17T00:53:52.580542Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 003eaa64-3ce1-4a64-807b-685b01754748 correlation 24b72944-59f3-4594-a47f-86431dbbacc1 created: 2025-05-17T00:52:27.313271Z] May 17 00:53:52.584430 waagent[1626]: 2025-05-17T00:53:52.584335Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 17 00:53:52.586378 waagent[1626]: 2025-05-17T00:53:52.586310Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 8 ms] May 17 00:53:52.607734 waagent[1626]: 2025-05-17T00:53:52.607670Z WARNING EnvHandler ExtHandler The firewall is not configured correctly. The following rules are missing: ['ACCEPT DNS', 'DROP'] due to: ['iptables: Bad rule (does a matching rule exist in that chain?).\n', 'iptables: Bad rule (does a matching rule exist in that chain?).\n']. Will reset it. 
Current state: May 17 00:53:52.607734 waagent[1626]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:53:52.607734 waagent[1626]: pkts bytes target prot opt in out source destination May 17 00:53:52.607734 waagent[1626]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:53:52.607734 waagent[1626]: pkts bytes target prot opt in out source destination May 17 00:53:52.607734 waagent[1626]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:53:52.607734 waagent[1626]: pkts bytes target prot opt in out source destination May 17 00:53:52.607734 waagent[1626]: 100 16162 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:53:52.615989 waagent[1626]: 2025-05-17T00:53:52.615913Z INFO ExtHandler ExtHandler Looking for existing remote access users. May 17 00:53:52.620699 waagent[1626]: 2025-05-17T00:53:52.620640Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.13.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 6305B5EB-075A-4D33-83F8-8768906FE934;UpdateGSErrors: 0;AutoUpdate: 1;UpdateMode: SelfUpdate;] May 17 00:53:52.648953 waagent[1626]: 2025-05-17T00:53:52.648856Z INFO EnvHandler ExtHandler The firewall was setup successfully: May 17 00:53:52.648953 waagent[1626]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:53:52.648953 waagent[1626]: pkts bytes target prot opt in out source destination May 17 00:53:52.648953 waagent[1626]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 17 00:53:52.648953 waagent[1626]: pkts bytes target prot opt in out source destination May 17 00:53:52.648953 waagent[1626]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 17 00:53:52.648953 waagent[1626]: pkts bytes target prot opt in out source destination May 17 00:53:52.648953 waagent[1626]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 17 00:53:52.648953 waagent[1626]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 17 00:53:52.648953 waagent[1626]: 0 0 DROP tcp 
-- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 17 00:53:52.650222 waagent[1626]: 2025-05-17T00:53:52.650168Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 17 00:54:02.670894 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 17 00:54:02.671216 systemd[1]: Stopped kubelet.service. May 17 00:54:02.673194 systemd[1]: Starting kubelet.service... May 17 00:54:02.767194 systemd[1]: Started kubelet.service. May 17 00:54:03.471209 kubelet[1684]: E0517 00:54:03.471147 1684 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:54:03.473193 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:54:03.473354 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:54:13.670882 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 17 00:54:13.671211 systemd[1]: Stopped kubelet.service. May 17 00:54:13.673255 systemd[1]: Starting kubelet.service... May 17 00:54:13.768038 systemd[1]: Started kubelet.service. May 17 00:54:14.386918 systemd[1]: Created slice system-sshd.slice. May 17 00:54:14.388487 systemd[1]: Started sshd@0-10.200.4.13:22-10.200.16.10:49858.service. 
May 17 00:54:14.475491 kubelet[1693]: E0517 00:54:14.475441 1693 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:54:14.477097 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:54:14.477268 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:54:15.254958 sshd[1699]: Accepted publickey for core from 10.200.16.10 port 49858 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:54:15.256410 sshd[1699]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:54:15.260689 systemd-logind[1424]: New session 3 of user core. May 17 00:54:15.261241 systemd[1]: Started session-3.scope. May 17 00:54:15.801820 systemd[1]: Started sshd@1-10.200.4.13:22-10.200.16.10:49872.service. May 17 00:54:16.402254 sshd[1704]: Accepted publickey for core from 10.200.16.10 port 49872 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:54:16.404046 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:54:16.409845 systemd[1]: Started session-4.scope. May 17 00:54:16.410677 systemd-logind[1424]: New session 4 of user core. May 17 00:54:16.838161 sshd[1704]: pam_unix(sshd:session): session closed for user core May 17 00:54:16.841354 systemd[1]: sshd@1-10.200.4.13:22-10.200.16.10:49872.service: Deactivated successfully. May 17 00:54:16.842174 systemd[1]: session-4.scope: Deactivated successfully. May 17 00:54:16.842808 systemd-logind[1424]: Session 4 logged out. Waiting for processes to exit. May 17 00:54:16.843581 systemd-logind[1424]: Removed session 4. May 17 00:54:16.938859 systemd[1]: Started sshd@2-10.200.4.13:22-10.200.16.10:49876.service. 
May 17 00:54:17.540317 sshd[1710]: Accepted publickey for core from 10.200.16.10 port 49876 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:54:17.541764 sshd[1710]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:54:17.547442 systemd-logind[1424]: New session 5 of user core. May 17 00:54:17.547564 systemd[1]: Started session-5.scope. May 17 00:54:17.971822 sshd[1710]: pam_unix(sshd:session): session closed for user core May 17 00:54:17.974966 systemd[1]: sshd@2-10.200.4.13:22-10.200.16.10:49876.service: Deactivated successfully. May 17 00:54:17.975910 systemd[1]: session-5.scope: Deactivated successfully. May 17 00:54:17.976804 systemd-logind[1424]: Session 5 logged out. Waiting for processes to exit. May 17 00:54:17.977720 systemd-logind[1424]: Removed session 5. May 17 00:54:18.074441 systemd[1]: Started sshd@3-10.200.4.13:22-10.200.16.10:49892.service. May 17 00:54:18.677381 sshd[1716]: Accepted publickey for core from 10.200.16.10 port 49892 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:54:18.679017 sshd[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:54:18.684769 systemd[1]: Started session-6.scope. May 17 00:54:18.685323 systemd-logind[1424]: New session 6 of user core. May 17 00:54:19.140294 sshd[1716]: pam_unix(sshd:session): session closed for user core May 17 00:54:19.143778 systemd[1]: sshd@3-10.200.4.13:22-10.200.16.10:49892.service: Deactivated successfully. May 17 00:54:19.144588 systemd[1]: session-6.scope: Deactivated successfully. May 17 00:54:19.145224 systemd-logind[1424]: Session 6 logged out. Waiting for processes to exit. May 17 00:54:19.145984 systemd-logind[1424]: Removed session 6. May 17 00:54:19.189844 kernel: hv_balloon: Max. dynamic memory size: 8192 MB May 17 00:54:19.241227 systemd[1]: Started sshd@4-10.200.4.13:22-10.200.16.10:39966.service. 
May 17 00:54:19.843759 sshd[1722]: Accepted publickey for core from 10.200.16.10 port 39966 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:54:19.845431 sshd[1722]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:54:19.850162 systemd[1]: Started session-7.scope. May 17 00:54:19.850876 systemd-logind[1424]: New session 7 of user core. May 17 00:54:20.645354 sudo[1725]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 17 00:54:20.645737 sudo[1725]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) May 17 00:54:20.678941 systemd[1]: Starting docker.service... May 17 00:54:20.714969 env[1735]: time="2025-05-17T00:54:20.714921677Z" level=info msg="Starting up" May 17 00:54:20.716104 env[1735]: time="2025-05-17T00:54:20.716080282Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:54:20.716211 env[1735]: time="2025-05-17T00:54:20.716200483Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:54:20.716266 env[1735]: time="2025-05-17T00:54:20.716254983Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:54:20.716307 env[1735]: time="2025-05-17T00:54:20.716299683Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:54:20.718201 env[1735]: time="2025-05-17T00:54:20.718173193Z" level=info msg="parsed scheme: \"unix\"" module=grpc May 17 00:54:20.718201 env[1735]: time="2025-05-17T00:54:20.718190693Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc May 17 00:54:20.718343 env[1735]: time="2025-05-17T00:54:20.718208193Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc May 17 00:54:20.718343 env[1735]: time="2025-05-17T00:54:20.718219593Z" level=info 
msg="ClientConn switching balancer to \"pick_first\"" module=grpc May 17 00:54:20.724135 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport3078552816-merged.mount: Deactivated successfully. May 17 00:54:20.804435 env[1735]: time="2025-05-17T00:54:20.804395417Z" level=info msg="Loading containers: start." May 17 00:54:20.982390 kernel: Initializing XFRM netlink socket May 17 00:54:21.034905 env[1735]: time="2025-05-17T00:54:21.034860242Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" May 17 00:54:21.163500 systemd-networkd[1586]: docker0: Link UP May 17 00:54:21.187646 env[1735]: time="2025-05-17T00:54:21.187609347Z" level=info msg="Loading containers: done." May 17 00:54:21.199619 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2088793140-merged.mount: Deactivated successfully. May 17 00:54:21.211182 env[1735]: time="2025-05-17T00:54:21.211150655Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 17 00:54:21.211385 env[1735]: time="2025-05-17T00:54:21.211344856Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 May 17 00:54:21.211488 env[1735]: time="2025-05-17T00:54:21.211468357Z" level=info msg="Daemon has completed initialization" May 17 00:54:21.296091 systemd[1]: Started docker.service. May 17 00:54:21.305934 env[1735]: time="2025-05-17T00:54:21.305881993Z" level=info msg="API listen on /run/docker.sock" May 17 00:54:24.562646 env[1438]: time="2025-05-17T00:54:24.562590026Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\"" May 17 00:54:24.671059 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 17 00:54:24.671402 systemd[1]: Stopped kubelet.service. May 17 00:54:24.673398 systemd[1]: Starting kubelet.service... 
May 17 00:54:24.768612 systemd[1]: Started kubelet.service. May 17 00:54:25.364632 update_engine[1425]: I0517 00:54:25.364532 1425 update_attempter.cc:509] Updating boot flags... May 17 00:54:25.446273 kubelet[1853]: E0517 00:54:25.446171 1853 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:54:25.447895 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:54:25.448003 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:54:26.013957 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021044995.mount: Deactivated successfully. May 17 00:54:27.848218 env[1438]: time="2025-05-17T00:54:27.848093359Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:27.853452 env[1438]: time="2025-05-17T00:54:27.853408676Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:27.858418 env[1438]: time="2025-05-17T00:54:27.858385691Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:27.866945 env[1438]: time="2025-05-17T00:54:27.866854118Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:5b68f0df22013422dc8fb9ddfcff513eb6fc92f9dbf8aae41555c895efef5a20,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:27.867809 env[1438]: 
time="2025-05-17T00:54:27.867780621Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.9\" returns image reference \"sha256:0c19e0eafbdfffa1317cf99a16478265a4cd746ef677de27b0be6a8b515f36b1\"" May 17 00:54:27.868428 env[1438]: time="2025-05-17T00:54:27.868402623Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\"" May 17 00:54:29.809178 env[1438]: time="2025-05-17T00:54:29.809060805Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:29.819507 env[1438]: time="2025-05-17T00:54:29.819465734Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:29.826034 env[1438]: time="2025-05-17T00:54:29.825954352Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:29.831333 env[1438]: time="2025-05-17T00:54:29.831248866Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:be9e7987d323b38a12e28436cff6d6ec6fc31ffdd3ea11eaa9d74852e9d31248,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:29.832418 env[1438]: time="2025-05-17T00:54:29.832386569Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.9\" returns image reference \"sha256:6aa3d581404ae6ae5dc355cb750aaedec843d2c99263d28fce50277e8e2a6ec2\"" May 17 00:54:29.833141 env[1438]: time="2025-05-17T00:54:29.833116571Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\"" May 17 00:54:31.377536 env[1438]: time="2025-05-17T00:54:31.377478428Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:31.385453 env[1438]: time="2025-05-17T00:54:31.385415847Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:31.390914 env[1438]: time="2025-05-17T00:54:31.390830760Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:31.395591 env[1438]: time="2025-05-17T00:54:31.395560972Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:eb358c7346bb17ab2c639c3ff8ab76a147dec7ae609f5c0c2800233e42253ed1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:31.396179 env[1438]: time="2025-05-17T00:54:31.396147573Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.9\" returns image reference \"sha256:737ed3eafaf27a28ea9e13b736011bfed5bd349785ac6bc220b34eaf4adc51e3\"" May 17 00:54:31.396872 env[1438]: time="2025-05-17T00:54:31.396846175Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\"" May 17 00:54:32.680435 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount987786518.mount: Deactivated successfully. 
May 17 00:54:33.314347 env[1438]: time="2025-05-17T00:54:33.314288374Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:33.319101 env[1438]: time="2025-05-17T00:54:33.319059184Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:33.323820 env[1438]: time="2025-05-17T00:54:33.323783294Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.31.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:33.327150 env[1438]: time="2025-05-17T00:54:33.327114901Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:fdf026cf2434537e499e9c739d189ca8fc57101d929ac5ccd8e24f979a9738c1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:33.327571 env[1438]: time="2025-05-17T00:54:33.327541102Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.9\" returns image reference \"sha256:11a47a71ed3ecf643e15a11990daed3b656279449ba9344db0b54652c4723578\"" May 17 00:54:33.328207 env[1438]: time="2025-05-17T00:54:33.328182103Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" May 17 00:54:33.888558 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2523236614.mount: Deactivated successfully. 
May 17 00:54:35.258675 env[1438]: time="2025-05-17T00:54:35.258620011Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:35.268252 env[1438]: time="2025-05-17T00:54:35.268168929Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:35.272990 env[1438]: time="2025-05-17T00:54:35.272953938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.11.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:35.278148 env[1438]: time="2025-05-17T00:54:35.278110648Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:35.278880 env[1438]: time="2025-05-17T00:54:35.278846849Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" May 17 00:54:35.279922 env[1438]: time="2025-05-17T00:54:35.279888651Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" May 17 00:54:35.670995 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 17 00:54:35.671295 systemd[1]: Stopped kubelet.service. May 17 00:54:35.673226 systemd[1]: Starting kubelet.service... May 17 00:54:35.768331 systemd[1]: Started kubelet.service. 
May 17 00:54:36.439425 kubelet[1932]: E0517 00:54:36.439361 1932 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 17 00:54:36.441079 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 17 00:54:36.441237 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 17 00:54:36.575358 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206854924.mount: Deactivated successfully. May 17 00:54:36.600982 env[1438]: time="2025-05-17T00:54:36.600874753Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:36.610145 env[1438]: time="2025-05-17T00:54:36.610114669Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:36.616043 env[1438]: time="2025-05-17T00:54:36.616014979Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:36.619992 env[1438]: time="2025-05-17T00:54:36.619963086Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:36.620415 env[1438]: time="2025-05-17T00:54:36.620361487Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" May 17 
00:54:36.621166 env[1438]: time="2025-05-17T00:54:36.621138788Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" May 17 00:54:37.298959 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3301995833.mount: Deactivated successfully. May 17 00:54:39.824467 env[1438]: time="2025-05-17T00:54:39.824404352Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:39.833320 env[1438]: time="2025-05-17T00:54:39.833224603Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:39.840734 env[1438]: time="2025-05-17T00:54:39.840694815Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.15-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:39.844026 env[1438]: time="2025-05-17T00:54:39.843938707Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:39.845345 env[1438]: time="2025-05-17T00:54:39.845309746Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" May 17 00:54:42.380742 systemd[1]: Stopped kubelet.service. May 17 00:54:42.383231 systemd[1]: Starting kubelet.service... May 17 00:54:42.416461 systemd[1]: Reloading. 
May 17 00:54:42.516179 /usr/lib/systemd/system-generators/torcx-generator[1981]: time="2025-05-17T00:54:42Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:54:42.516213 /usr/lib/systemd/system-generators/torcx-generator[1981]: time="2025-05-17T00:54:42Z" level=info msg="torcx already run" May 17 00:54:42.612568 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:54:42.612805 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:54:42.637686 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:54:42.741637 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 17 00:54:42.741724 systemd[1]: kubelet.service: Failed with result 'signal'. May 17 00:54:42.741990 systemd[1]: Stopped kubelet.service. May 17 00:54:42.743841 systemd[1]: Starting kubelet.service... May 17 00:54:43.093214 systemd[1]: Started kubelet.service. May 17 00:54:43.911439 kubelet[2048]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:54:43.911439 kubelet[2048]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. 
May 17 00:54:43.911439 kubelet[2048]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:54:43.911931 kubelet[2048]: I0517 00:54:43.911517 2048 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:54:44.309991 kubelet[2048]: I0517 00:54:44.309947 2048 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:54:44.309991 kubelet[2048]: I0517 00:54:44.309977 2048 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:54:44.310308 kubelet[2048]: I0517 00:54:44.310287 2048 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:54:44.338670 kubelet[2048]: E0517 00:54:44.338638 2048 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:44.339717 kubelet[2048]: I0517 00:54:44.339687 2048 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:54:44.348159 kubelet[2048]: E0517 00:54:44.348108 2048 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:54:44.348159 kubelet[2048]: I0517 00:54:44.348155 2048 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 17 00:54:44.352693 kubelet[2048]: I0517 00:54:44.352671 2048 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:54:44.352802 kubelet[2048]: I0517 00:54:44.352783 2048 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:54:44.352943 kubelet[2048]: I0517 00:54:44.352911 2048 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:54:44.353127 kubelet[2048]: I0517 00:54:44.352940 2048 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-34d8c498b2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experimental
MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:54:44.353273 kubelet[2048]: I0517 00:54:44.353138 2048 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:54:44.353273 kubelet[2048]: I0517 00:54:44.353152 2048 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:54:44.353273 kubelet[2048]: I0517 00:54:44.353269 2048 state_mem.go:36] "Initialized new in-memory state store" May 17 00:54:44.364795 kubelet[2048]: I0517 00:54:44.364741 2048 kubelet.go:408] "Attempting to sync node with API server" May 17 00:54:44.364994 kubelet[2048]: I0517 00:54:44.364821 2048 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:54:44.364994 kubelet[2048]: I0517 00:54:44.364923 2048 kubelet.go:314] "Adding apiserver pod source" May 17 00:54:44.364994 kubelet[2048]: I0517 00:54:44.364975 2048 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:54:44.369623 kubelet[2048]: W0517 00:54:44.369481 2048 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-34d8c498b2&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused May 17 00:54:44.369623 kubelet[2048]: E0517 00:54:44.369566 2048 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-34d8c498b2&limit=500&resourceVersion=0\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:44.369770 kubelet[2048]: W0517 00:54:44.369653 2048 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get 
"https://10.200.4.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused May 17 00:54:44.369770 kubelet[2048]: E0517 00:54:44.369700 2048 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:44.369770 kubelet[2048]: I0517 00:54:44.369767 2048 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:54:44.370221 kubelet[2048]: I0517 00:54:44.370196 2048 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:54:44.370297 kubelet[2048]: W0517 00:54:44.370256 2048 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 17 00:54:44.383150 kubelet[2048]: I0517 00:54:44.383126 2048 server.go:1274] "Started kubelet" May 17 00:54:44.385562 kubelet[2048]: I0517 00:54:44.385524 2048 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:54:44.386654 kubelet[2048]: I0517 00:54:44.386635 2048 server.go:449] "Adding debug handlers to kubelet server" May 17 00:54:44.395239 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
May 17 00:54:44.395997 kubelet[2048]: I0517 00:54:44.395389 2048 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:54:44.396246 kubelet[2048]: I0517 00:54:44.396195 2048 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:54:44.396540 kubelet[2048]: I0517 00:54:44.396524 2048 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:54:44.401547 kubelet[2048]: I0517 00:54:44.401522 2048 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:54:44.401778 kubelet[2048]: E0517 00:54:44.399663 2048 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.4.13:6443/api/v1/namespaces/default/events\": dial tcp 10.200.4.13:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3510.3.7-n-34d8c498b2.18402a68a6cbf7f0 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3510.3.7-n-34d8c498b2,UID:ci-3510.3.7-n-34d8c498b2,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3510.3.7-n-34d8c498b2,},FirstTimestamp:2025-05-17 00:54:44.38310296 +0000 UTC m=+1.285248799,LastTimestamp:2025-05-17 00:54:44.38310296 +0000 UTC m=+1.285248799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3510.3.7-n-34d8c498b2,}" May 17 00:54:44.403711 kubelet[2048]: I0517 00:54:44.403357 2048 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:54:44.403711 kubelet[2048]: E0517 00:54:44.403604 2048 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:44.404036 kubelet[2048]: E0517 
00:54:44.404009 2048 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-34d8c498b2?timeout=10s\": dial tcp 10.200.4.13:6443: connect: connection refused" interval="200ms" May 17 00:54:44.404335 kubelet[2048]: I0517 00:54:44.404320 2048 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:54:44.405709 kubelet[2048]: W0517 00:54:44.405666 2048 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused May 17 00:54:44.405837 kubelet[2048]: E0517 00:54:44.405815 2048 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:44.406146 kubelet[2048]: I0517 00:54:44.406128 2048 factory.go:221] Registration of the systemd container factory successfully May 17 00:54:44.406315 kubelet[2048]: I0517 00:54:44.406297 2048 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:54:44.407010 kubelet[2048]: I0517 00:54:44.406996 2048 reconciler.go:26] "Reconciler: start to sync state" May 17 00:54:44.408111 kubelet[2048]: E0517 00:54:44.408094 2048 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:54:44.408352 kubelet[2048]: I0517 00:54:44.408336 2048 factory.go:221] Registration of the containerd container factory successfully May 17 00:54:44.456215 kubelet[2048]: I0517 00:54:44.456167 2048 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:54:44.458616 kubelet[2048]: I0517 00:54:44.458591 2048 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 17 00:54:44.458748 kubelet[2048]: I0517 00:54:44.458736 2048 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:54:44.458852 kubelet[2048]: I0517 00:54:44.458842 2048 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:54:44.459439 kubelet[2048]: E0517 00:54:44.459415 2048 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:54:44.460973 kubelet[2048]: W0517 00:54:44.460927 2048 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused May 17 00:54:44.461124 kubelet[2048]: E0517 00:54:44.461106 2048 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:44.504200 kubelet[2048]: E0517 00:54:44.504174 2048 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:44.559653 kubelet[2048]: E0517 00:54:44.559605 2048 kubelet.go:2345] "Skipping pod synchronization" 
err="container runtime status check may not have completed yet" May 17 00:54:44.574738 kubelet[2048]: I0517 00:54:44.574650 2048 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:54:44.574876 kubelet[2048]: I0517 00:54:44.574860 2048 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:54:44.575140 kubelet[2048]: I0517 00:54:44.575114 2048 state_mem.go:36] "Initialized new in-memory state store" May 17 00:54:44.582825 kubelet[2048]: I0517 00:54:44.582803 2048 policy_none.go:49] "None policy: Start" May 17 00:54:44.583346 kubelet[2048]: I0517 00:54:44.583330 2048 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:54:44.583440 kubelet[2048]: I0517 00:54:44.583350 2048 state_mem.go:35] "Initializing new in-memory state store" May 17 00:54:44.593539 systemd[1]: Created slice kubepods.slice. May 17 00:54:44.598224 systemd[1]: Created slice kubepods-burstable.slice. May 17 00:54:44.601042 systemd[1]: Created slice kubepods-besteffort.slice. May 17 00:54:44.605131 kubelet[2048]: E0517 00:54:44.605102 2048 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:44.605512 kubelet[2048]: E0517 00:54:44.605486 2048 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-34d8c498b2?timeout=10s\": dial tcp 10.200.4.13:6443: connect: connection refused" interval="400ms" May 17 00:54:44.607181 kubelet[2048]: I0517 00:54:44.607159 2048 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:54:44.607312 kubelet[2048]: I0517 00:54:44.607296 2048 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:54:44.607407 kubelet[2048]: I0517 00:54:44.607311 2048 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 
monitorPeriod="10s" May 17 00:54:44.608159 kubelet[2048]: I0517 00:54:44.608032 2048 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:54:44.610759 kubelet[2048]: E0517 00:54:44.610737 2048 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:44.710074 kubelet[2048]: I0517 00:54:44.710039 2048 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:44.710498 kubelet[2048]: E0517 00:54:44.710463 2048 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.13:6443/api/v1/nodes\": dial tcp 10.200.4.13:6443: connect: connection refused" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:44.770138 systemd[1]: Created slice kubepods-burstable-pod5dd6c97a1c568ddaf6a11b07de3c0487.slice. May 17 00:54:44.779676 systemd[1]: Created slice kubepods-burstable-podc191314ae758f8b44221cf96358eb500.slice. May 17 00:54:44.788450 systemd[1]: Created slice kubepods-burstable-pod3a89d00db651627f6e601abe692fa5a7.slice. 
May 17 00:54:44.809469 kubelet[2048]: I0517 00:54:44.809436 2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5dd6c97a1c568ddaf6a11b07de3c0487-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-34d8c498b2\" (UID: \"5dd6c97a1c568ddaf6a11b07de3c0487\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-34d8c498b2" May 17 00:54:44.809575 kubelet[2048]: I0517 00:54:44.809470 2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c191314ae758f8b44221cf96358eb500-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-34d8c498b2\" (UID: \"c191314ae758f8b44221cf96358eb500\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-34d8c498b2" May 17 00:54:44.809575 kubelet[2048]: I0517 00:54:44.809500 2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a89d00db651627f6e601abe692fa5a7-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-34d8c498b2\" (UID: \"3a89d00db651627f6e601abe692fa5a7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-34d8c498b2" May 17 00:54:44.809575 kubelet[2048]: I0517 00:54:44.809523 2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3a89d00db651627f6e601abe692fa5a7-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-34d8c498b2\" (UID: \"3a89d00db651627f6e601abe692fa5a7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-34d8c498b2" May 17 00:54:44.809575 kubelet[2048]: I0517 00:54:44.809547 2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a89d00db651627f6e601abe692fa5a7-k8s-certs\") pod 
\"kube-controller-manager-ci-3510.3.7-n-34d8c498b2\" (UID: \"3a89d00db651627f6e601abe692fa5a7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-34d8c498b2" May 17 00:54:44.809575 kubelet[2048]: I0517 00:54:44.809570 2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a89d00db651627f6e601abe692fa5a7-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-34d8c498b2\" (UID: \"3a89d00db651627f6e601abe692fa5a7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-34d8c498b2" May 17 00:54:44.809790 kubelet[2048]: I0517 00:54:44.809593 2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a89d00db651627f6e601abe692fa5a7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-34d8c498b2\" (UID: \"3a89d00db651627f6e601abe692fa5a7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-34d8c498b2" May 17 00:54:44.809790 kubelet[2048]: I0517 00:54:44.809619 2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c191314ae758f8b44221cf96358eb500-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-34d8c498b2\" (UID: \"c191314ae758f8b44221cf96358eb500\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-34d8c498b2" May 17 00:54:44.809790 kubelet[2048]: I0517 00:54:44.809644 2048 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c191314ae758f8b44221cf96358eb500-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-34d8c498b2\" (UID: \"c191314ae758f8b44221cf96358eb500\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-34d8c498b2" May 17 00:54:44.913419 kubelet[2048]: I0517 00:54:44.912654 2048 kubelet_node_status.go:72] "Attempting to register node" 
node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:44.913930 kubelet[2048]: E0517 00:54:44.913891 2048 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.13:6443/api/v1/nodes\": dial tcp 10.200.4.13:6443: connect: connection refused" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:45.006563 kubelet[2048]: E0517 00:54:45.006512 2048 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-34d8c498b2?timeout=10s\": dial tcp 10.200.4.13:6443: connect: connection refused" interval="800ms" May 17 00:54:45.078764 env[1438]: time="2025-05-17T00:54:45.078708606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-34d8c498b2,Uid:5dd6c97a1c568ddaf6a11b07de3c0487,Namespace:kube-system,Attempt:0,}" May 17 00:54:45.087972 env[1438]: time="2025-05-17T00:54:45.087938628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-34d8c498b2,Uid:c191314ae758f8b44221cf96358eb500,Namespace:kube-system,Attempt:0,}" May 17 00:54:45.091812 env[1438]: time="2025-05-17T00:54:45.091424212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-34d8c498b2,Uid:3a89d00db651627f6e601abe692fa5a7,Namespace:kube-system,Attempt:0,}" May 17 00:54:45.235762 kubelet[2048]: W0517 00:54:45.235706 2048 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused May 17 00:54:45.235894 kubelet[2048]: E0517 00:54:45.235771 2048 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get 
\"https://10.200.4.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:45.263384 kubelet[2048]: W0517 00:54:45.263317 2048 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-34d8c498b2&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused May 17 00:54:45.263481 kubelet[2048]: E0517 00:54:45.263391 2048 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-34d8c498b2&limit=500&resourceVersion=0\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:45.315825 kubelet[2048]: I0517 00:54:45.315790 2048 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:45.316225 kubelet[2048]: E0517 00:54:45.316189 2048 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.13:6443/api/v1/nodes\": dial tcp 10.200.4.13:6443: connect: connection refused" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:45.348775 kubelet[2048]: W0517 00:54:45.348738 2048 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused May 17 00:54:45.348910 kubelet[2048]: E0517 00:54:45.348788 2048 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 
10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:45.684705 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1014647738.mount: Deactivated successfully. May 17 00:54:45.807537 kubelet[2048]: E0517 00:54:45.807465 2048 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-34d8c498b2?timeout=10s\": dial tcp 10.200.4.13:6443: connect: connection refused" interval="1.6s" May 17 00:54:45.816492 env[1438]: time="2025-05-17T00:54:45.816339046Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:45.820094 env[1438]: time="2025-05-17T00:54:45.820056436Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:45.828358 env[1438]: time="2025-05-17T00:54:45.828320034Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:45.918089 kubelet[2048]: W0517 00:54:45.918019 2048 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused May 17 00:54:45.918518 kubelet[2048]: E0517 00:54:45.918096 2048 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:46.118103 kubelet[2048]: I0517 00:54:46.117722 2048 
kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:46.118360 kubelet[2048]: E0517 00:54:46.118328 2048 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.13:6443/api/v1/nodes\": dial tcp 10.200.4.13:6443: connect: connection refused" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:46.422620 kubelet[2048]: E0517 00:54:46.422556 2048 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.200.4.13:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:46.953823 kubelet[2048]: W0517 00:54:46.953780 2048 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-34d8c498b2&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused May 17 00:54:46.954231 kubelet[2048]: E0517 00:54:46.953840 2048 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.200.4.13:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.7-n-34d8c498b2&limit=500&resourceVersion=0\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:47.408793 kubelet[2048]: E0517 00:54:47.408664 2048 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.4.13:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.7-n-34d8c498b2?timeout=10s\": dial tcp 10.200.4.13:6443: connect: connection refused" interval="3.2s" May 17 00:54:47.720650 kubelet[2048]: I0517 00:54:47.720614 2048 kubelet_node_status.go:72] "Attempting to 
register node" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:47.721097 kubelet[2048]: E0517 00:54:47.721060 2048 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://10.200.4.13:6443/api/v1/nodes\": dial tcp 10.200.4.13:6443: connect: connection refused" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:48.146293 kubelet[2048]: W0517 00:54:48.146181 2048 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.4.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused May 17 00:54:48.146293 kubelet[2048]: E0517 00:54:48.146229 2048 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.200.4.13:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:48.269713 kubelet[2048]: W0517 00:54:48.269668 2048 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused May 17 00:54:48.269951 kubelet[2048]: E0517 00:54:48.269725 2048 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.200.4.13:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:48.924818 env[1438]: time="2025-05-17T00:54:48.924767034Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" May 17 00:54:48.966260 kubelet[2048]: W0517 00:54:48.966219 2048 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.4.13:6443: connect: connection refused May 17 00:54:48.966437 kubelet[2048]: E0517 00:54:48.966285 2048 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.200.4.13:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.200.4.13:6443: connect: connection refused" logger="UnhandledError" May 17 00:54:49.466284 env[1438]: time="2025-05-17T00:54:49.466230260Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:49.470324 env[1438]: time="2025-05-17T00:54:49.470275547Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:49.472482 env[1438]: time="2025-05-17T00:54:49.472438794Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:49.475976 env[1438]: time="2025-05-17T00:54:49.475949470Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:49.478018 env[1438]: time="2025-05-17T00:54:49.477927512Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:49.483487 env[1438]: time="2025-05-17T00:54:49.483445031Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:49.486391 env[1438]: time="2025-05-17T00:54:49.486350894Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:49.505024 env[1438]: time="2025-05-17T00:54:49.504925195Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:54:49.562409 env[1438]: time="2025-05-17T00:54:49.557941538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:54:49.562409 env[1438]: time="2025-05-17T00:54:49.558010840Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:54:49.562409 env[1438]: time="2025-05-17T00:54:49.558031640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:54:49.562409 env[1438]: time="2025-05-17T00:54:49.558221244Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/7f3646ce2c15c4e96393d627a5d6ecff0f6cd63a5ea0218ff2864edc4a6db337 pid=2088 runtime=io.containerd.runc.v2 May 17 00:54:49.578070 env[1438]: time="2025-05-17T00:54:49.577999371Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:54:49.578342 env[1438]: time="2025-05-17T00:54:49.578287777Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:54:49.578526 env[1438]: time="2025-05-17T00:54:49.578499882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:54:49.578848 env[1438]: time="2025-05-17T00:54:49.578803588Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b650c955a7d1c2a58ab89835976e2d44d0b4a728381f65363e52696808f1fce5 pid=2112 runtime=io.containerd.runc.v2 May 17 00:54:49.595740 systemd[1]: Started cri-containerd-7f3646ce2c15c4e96393d627a5d6ecff0f6cd63a5ea0218ff2864edc4a6db337.scope. May 17 00:54:49.612456 systemd[1]: Started cri-containerd-b650c955a7d1c2a58ab89835976e2d44d0b4a728381f65363e52696808f1fce5.scope. May 17 00:54:49.624327 env[1438]: time="2025-05-17T00:54:49.624263469Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:54:49.624519 env[1438]: time="2025-05-17T00:54:49.624339870Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:54:49.624519 env[1438]: time="2025-05-17T00:54:49.624391471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:54:49.624627 env[1438]: time="2025-05-17T00:54:49.624571175Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/92e14ae63bb40b36894022687954f6545fef4d28e7b0ccefdb54d8b79f2a3b17 pid=2151 runtime=io.containerd.runc.v2 May 17 00:54:49.646951 systemd[1]: Started cri-containerd-92e14ae63bb40b36894022687954f6545fef4d28e7b0ccefdb54d8b79f2a3b17.scope. May 17 00:54:49.678440 env[1438]: time="2025-05-17T00:54:49.678394036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.7-n-34d8c498b2,Uid:3a89d00db651627f6e601abe692fa5a7,Namespace:kube-system,Attempt:0,} returns sandbox id \"b650c955a7d1c2a58ab89835976e2d44d0b4a728381f65363e52696808f1fce5\"" May 17 00:54:49.686300 env[1438]: time="2025-05-17T00:54:49.685444988Z" level=info msg="CreateContainer within sandbox \"b650c955a7d1c2a58ab89835976e2d44d0b4a728381f65363e52696808f1fce5\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 17 00:54:49.699421 env[1438]: time="2025-05-17T00:54:49.699347188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.7-n-34d8c498b2,Uid:c191314ae758f8b44221cf96358eb500,Namespace:kube-system,Attempt:0,} returns sandbox id \"7f3646ce2c15c4e96393d627a5d6ecff0f6cd63a5ea0218ff2864edc4a6db337\"" May 17 00:54:49.701946 env[1438]: time="2025-05-17T00:54:49.701914944Z" level=info msg="CreateContainer within sandbox \"7f3646ce2c15c4e96393d627a5d6ecff0f6cd63a5ea0218ff2864edc4a6db337\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 17 00:54:49.724326 env[1438]: time="2025-05-17T00:54:49.724250825Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.7-n-34d8c498b2,Uid:5dd6c97a1c568ddaf6a11b07de3c0487,Namespace:kube-system,Attempt:0,} returns sandbox id \"92e14ae63bb40b36894022687954f6545fef4d28e7b0ccefdb54d8b79f2a3b17\"" May 17 00:54:49.729353 env[1438]: time="2025-05-17T00:54:49.729322635Z" level=info msg="CreateContainer within sandbox \"92e14ae63bb40b36894022687954f6545fef4d28e7b0ccefdb54d8b79f2a3b17\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 17 00:54:49.733139 env[1438]: time="2025-05-17T00:54:49.733081116Z" level=info msg="CreateContainer within sandbox \"b650c955a7d1c2a58ab89835976e2d44d0b4a728381f65363e52696808f1fce5\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c3dd54303a420c631de84473106513ec4cb7bd9f5f438130d0c0d7ef7874369d\"" May 17 00:54:49.734025 env[1438]: time="2025-05-17T00:54:49.734004536Z" level=info msg="StartContainer for \"c3dd54303a420c631de84473106513ec4cb7bd9f5f438130d0c0d7ef7874369d\"" May 17 00:54:49.749475 systemd[1]: Started cri-containerd-c3dd54303a420c631de84473106513ec4cb7bd9f5f438130d0c0d7ef7874369d.scope. 
May 17 00:54:49.779876 env[1438]: time="2025-05-17T00:54:49.779831124Z" level=info msg="CreateContainer within sandbox \"7f3646ce2c15c4e96393d627a5d6ecff0f6cd63a5ea0218ff2864edc4a6db337\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"03138e22783248941c4f42bf5bc0fa7c5d5d4d54497ed4a41e5edb69326569d5\"" May 17 00:54:49.780253 env[1438]: time="2025-05-17T00:54:49.780220132Z" level=info msg="StartContainer for \"03138e22783248941c4f42bf5bc0fa7c5d5d4d54497ed4a41e5edb69326569d5\"" May 17 00:54:49.788658 env[1438]: time="2025-05-17T00:54:49.788622314Z" level=info msg="CreateContainer within sandbox \"92e14ae63bb40b36894022687954f6545fef4d28e7b0ccefdb54d8b79f2a3b17\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"ae8b56c89e4cbcdf41a311520fba6c25def2c29096a90de067a9614b65aaefde\"" May 17 00:54:49.788980 env[1438]: time="2025-05-17T00:54:49.788952321Z" level=info msg="StartContainer for \"ae8b56c89e4cbcdf41a311520fba6c25def2c29096a90de067a9614b65aaefde\"" May 17 00:54:49.806321 systemd[1]: Started cri-containerd-03138e22783248941c4f42bf5bc0fa7c5d5d4d54497ed4a41e5edb69326569d5.scope. May 17 00:54:49.834001 env[1438]: time="2025-05-17T00:54:49.833960892Z" level=info msg="StartContainer for \"c3dd54303a420c631de84473106513ec4cb7bd9f5f438130d0c0d7ef7874369d\" returns successfully" May 17 00:54:49.834005 systemd[1]: Started cri-containerd-ae8b56c89e4cbcdf41a311520fba6c25def2c29096a90de067a9614b65aaefde.scope. 
May 17 00:54:49.903233 env[1438]: time="2025-05-17T00:54:49.903182785Z" level=info msg="StartContainer for \"03138e22783248941c4f42bf5bc0fa7c5d5d4d54497ed4a41e5edb69326569d5\" returns successfully" May 17 00:54:49.977931 env[1438]: time="2025-05-17T00:54:49.977824295Z" level=info msg="StartContainer for \"ae8b56c89e4cbcdf41a311520fba6c25def2c29096a90de067a9614b65aaefde\" returns successfully" May 17 00:54:50.923279 kubelet[2048]: I0517 00:54:50.923238 2048 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:52.106996 kubelet[2048]: E0517 00:54:52.106941 2048 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.7-n-34d8c498b2\" not found" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:52.244684 kubelet[2048]: I0517 00:54:52.244650 2048 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:52.244955 kubelet[2048]: E0517 00:54:52.244934 2048 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"ci-3510.3.7-n-34d8c498b2\": node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:52.263723 kubelet[2048]: E0517 00:54:52.263669 2048 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:52.365044 kubelet[2048]: E0517 00:54:52.364826 2048 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:52.465613 kubelet[2048]: E0517 00:54:52.465524 2048 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:52.566093 kubelet[2048]: E0517 00:54:52.565983 2048 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:52.667152 kubelet[2048]: E0517 00:54:52.667041 2048 
kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:52.767846 kubelet[2048]: E0517 00:54:52.767757 2048 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:52.868248 kubelet[2048]: E0517 00:54:52.868166 2048 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:52.969275 kubelet[2048]: E0517 00:54:52.969233 2048 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:53.069545 kubelet[2048]: E0517 00:54:53.069500 2048 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:53.169895 kubelet[2048]: E0517 00:54:53.169855 2048 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"ci-3510.3.7-n-34d8c498b2\" not found" May 17 00:54:53.390069 kubelet[2048]: I0517 00:54:53.389843 2048 apiserver.go:52] "Watching apiserver" May 17 00:54:53.405614 kubelet[2048]: I0517 00:54:53.405578 2048 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:54:54.333104 systemd[1]: Reloading. 
May 17 00:54:54.429604 /usr/lib/systemd/system-generators/torcx-generator[2346]: time="2025-05-17T00:54:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.7 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.7 /var/lib/torcx/store]" May 17 00:54:54.430059 /usr/lib/systemd/system-generators/torcx-generator[2346]: time="2025-05-17T00:54:54Z" level=info msg="torcx already run" May 17 00:54:54.473753 kubelet[2048]: W0517 00:54:54.473679 2048 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:54:54.521116 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. May 17 00:54:54.521135 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. May 17 00:54:54.538863 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 17 00:54:54.693002 systemd[1]: Stopping kubelet.service... May 17 00:54:54.714066 systemd[1]: kubelet.service: Deactivated successfully. May 17 00:54:54.714293 systemd[1]: Stopped kubelet.service. May 17 00:54:54.717921 systemd[1]: Starting kubelet.service... May 17 00:54:54.882177 systemd[1]: Started kubelet.service. May 17 00:54:54.947153 kubelet[2412]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 17 00:54:54.947153 kubelet[2412]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 17 00:54:54.947153 kubelet[2412]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 17 00:54:54.947153 kubelet[2412]: I0517 00:54:54.946805 2412 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 17 00:54:54.954009 kubelet[2412]: I0517 00:54:54.953673 2412 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" May 17 00:54:54.954009 kubelet[2412]: I0517 00:54:54.953697 2412 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 17 00:54:54.954525 kubelet[2412]: I0517 00:54:54.954490 2412 server.go:934] "Client rotation is on, will bootstrap in background" May 17 00:54:54.957248 kubelet[2412]: I0517 00:54:54.957216 2412 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 17 00:54:54.959861 kubelet[2412]: I0517 00:54:54.959843 2412 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 17 00:54:54.970759 kubelet[2412]: E0517 00:54:54.970720 2412 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" May 17 00:54:54.970759 kubelet[2412]: I0517 00:54:54.970759 2412 server.go:1408] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." 
May 17 00:54:54.973646 kubelet[2412]: I0517 00:54:54.973622 2412 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 17 00:54:54.973738 kubelet[2412]: I0517 00:54:54.973726 2412 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" May 17 00:54:54.973871 kubelet[2412]: I0517 00:54:54.973837 2412 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 17 00:54:54.974029 kubelet[2412]: I0517 00:54:54.973870 2412 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-3510.3.7-n-34d8c498b2","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","Experimental
MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} May 17 00:54:54.974163 kubelet[2412]: I0517 00:54:54.974039 2412 topology_manager.go:138] "Creating topology manager with none policy" May 17 00:54:54.974163 kubelet[2412]: I0517 00:54:54.974051 2412 container_manager_linux.go:300] "Creating device plugin manager" May 17 00:54:54.974163 kubelet[2412]: I0517 00:54:54.974086 2412 state_mem.go:36] "Initialized new in-memory state store" May 17 00:54:54.974294 kubelet[2412]: I0517 00:54:54.974185 2412 kubelet.go:408] "Attempting to sync node with API server" May 17 00:54:54.974294 kubelet[2412]: I0517 00:54:54.974201 2412 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" May 17 00:54:54.974294 kubelet[2412]: I0517 00:54:54.974228 2412 kubelet.go:314] "Adding apiserver pod source" May 17 00:54:54.974294 kubelet[2412]: I0517 00:54:54.974240 2412 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 17 00:54:54.979442 kubelet[2412]: I0517 00:54:54.979412 2412 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" May 17 00:54:55.477136 kubelet[2412]: I0517 00:54:54.980044 2412 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 17 00:54:55.477136 kubelet[2412]: I0517 00:54:54.980537 2412 server.go:1274] "Started kubelet" May 17 00:54:55.477136 kubelet[2412]: I0517 00:54:54.986165 2412 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 17 00:54:55.477136 kubelet[2412]: I0517 00:54:54.987738 2412 server.go:449] "Adding debug handlers to kubelet server" May 17 00:54:55.477136 kubelet[2412]: I0517 00:54:54.991061 2412 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 17 00:54:55.477136 kubelet[2412]: E0517 
00:54:55.005161 2412 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 17 00:54:55.477136 kubelet[2412]: I0517 00:54:55.475490 2412 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 17 00:54:55.477615 kubelet[2412]: I0517 00:54:55.477599 2412 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 17 00:54:55.477977 kubelet[2412]: I0517 00:54:55.477956 2412 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" May 17 00:54:55.480907 kubelet[2412]: I0517 00:54:55.479667 2412 volume_manager.go:289] "Starting Kubelet Volume Manager" May 17 00:54:55.480907 kubelet[2412]: I0517 00:54:55.479909 2412 desired_state_of_world_populator.go:147] "Desired state populator starts to run" May 17 00:54:55.480907 kubelet[2412]: I0517 00:54:55.480165 2412 reconciler.go:26] "Reconciler: start to sync state" May 17 00:54:55.484595 kubelet[2412]: I0517 00:54:55.484574 2412 factory.go:221] Registration of the systemd container factory successfully May 17 00:54:55.484891 kubelet[2412]: I0517 00:54:55.484827 2412 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 17 00:54:55.487179 kubelet[2412]: I0517 00:54:55.486818 2412 factory.go:221] Registration of the containerd container factory successfully May 17 00:54:55.501769 kubelet[2412]: I0517 00:54:55.501743 2412 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 17 00:54:55.506445 kubelet[2412]: I0517 00:54:55.506426 2412 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 17 00:54:55.507439 kubelet[2412]: I0517 00:54:55.507421 2412 status_manager.go:217] "Starting to sync pod status with apiserver" May 17 00:54:55.507531 kubelet[2412]: I0517 00:54:55.507449 2412 kubelet.go:2321] "Starting kubelet main sync loop" May 17 00:54:55.507531 kubelet[2412]: E0517 00:54:55.507492 2412 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 17 00:54:55.529266 kubelet[2412]: I0517 00:54:55.529245 2412 cpu_manager.go:214] "Starting CPU manager" policy="none" May 17 00:54:55.529266 kubelet[2412]: I0517 00:54:55.529259 2412 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 17 00:54:55.529447 kubelet[2412]: I0517 00:54:55.529280 2412 state_mem.go:36] "Initialized new in-memory state store" May 17 00:54:55.529499 kubelet[2412]: I0517 00:54:55.529453 2412 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 17 00:54:55.529499 kubelet[2412]: I0517 00:54:55.529468 2412 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 17 00:54:55.529588 kubelet[2412]: I0517 00:54:55.529502 2412 policy_none.go:49] "None policy: Start" May 17 00:54:55.530096 kubelet[2412]: I0517 00:54:55.530076 2412 memory_manager.go:170] "Starting memorymanager" policy="None" May 17 00:54:55.530176 kubelet[2412]: I0517 00:54:55.530100 2412 state_mem.go:35] "Initializing new in-memory state store" May 17 00:54:55.530271 kubelet[2412]: I0517 00:54:55.530254 2412 state_mem.go:75] "Updated machine memory state" May 17 00:54:55.533912 kubelet[2412]: I0517 00:54:55.533885 2412 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 17 00:54:55.534071 kubelet[2412]: I0517 00:54:55.534052 2412 eviction_manager.go:189] "Eviction manager: starting control loop" May 17 00:54:55.534135 kubelet[2412]: I0517 00:54:55.534069 2412 container_log_manager.go:189] 
"Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 17 00:54:55.535515 kubelet[2412]: I0517 00:54:55.535042 2412 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 17 00:54:55.625931 kubelet[2412]: W0517 00:54:55.625884 2412 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:54:55.626114 kubelet[2412]: W0517 00:54:55.626038 2412 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:54:55.626451 kubelet[2412]: W0517 00:54:55.626431 2412 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 17 00:54:55.626642 kubelet[2412]: E0517 00:54:55.626608 2412 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.7-n-34d8c498b2\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.7-n-34d8c498b2" May 17 00:54:55.642088 kubelet[2412]: I0517 00:54:55.642067 2412 kubelet_node_status.go:72] "Attempting to register node" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:55.654680 kubelet[2412]: I0517 00:54:55.654652 2412 kubelet_node_status.go:111] "Node was previously registered" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:55.654779 kubelet[2412]: I0517 00:54:55.654733 2412 kubelet_node_status.go:75] "Successfully registered node" node="ci-3510.3.7-n-34d8c498b2" May 17 00:54:55.780766 kubelet[2412]: I0517 00:54:55.780655 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c191314ae758f8b44221cf96358eb500-k8s-certs\") pod \"kube-apiserver-ci-3510.3.7-n-34d8c498b2\" (UID: \"c191314ae758f8b44221cf96358eb500\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-34d8c498b2" May 
17 00:54:55.780766 kubelet[2412]: I0517 00:54:55.780689 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/3a89d00db651627f6e601abe692fa5a7-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.7-n-34d8c498b2\" (UID: \"3a89d00db651627f6e601abe692fa5a7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-34d8c498b2" May 17 00:54:55.780766 kubelet[2412]: I0517 00:54:55.780719 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3a89d00db651627f6e601abe692fa5a7-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-34d8c498b2\" (UID: \"3a89d00db651627f6e601abe692fa5a7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-34d8c498b2" May 17 00:54:55.780766 kubelet[2412]: I0517 00:54:55.780741 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3a89d00db651627f6e601abe692fa5a7-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.7-n-34d8c498b2\" (UID: \"3a89d00db651627f6e601abe692fa5a7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-34d8c498b2" May 17 00:54:55.781056 kubelet[2412]: I0517 00:54:55.780766 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3a89d00db651627f6e601abe692fa5a7-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.7-n-34d8c498b2\" (UID: \"3a89d00db651627f6e601abe692fa5a7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-34d8c498b2" May 17 00:54:55.781056 kubelet[2412]: I0517 00:54:55.780793 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/5dd6c97a1c568ddaf6a11b07de3c0487-kubeconfig\") pod \"kube-scheduler-ci-3510.3.7-n-34d8c498b2\" (UID: \"5dd6c97a1c568ddaf6a11b07de3c0487\") " pod="kube-system/kube-scheduler-ci-3510.3.7-n-34d8c498b2" May 17 00:54:55.781056 kubelet[2412]: I0517 00:54:55.780812 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c191314ae758f8b44221cf96358eb500-ca-certs\") pod \"kube-apiserver-ci-3510.3.7-n-34d8c498b2\" (UID: \"c191314ae758f8b44221cf96358eb500\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-34d8c498b2" May 17 00:54:55.781056 kubelet[2412]: I0517 00:54:55.780834 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c191314ae758f8b44221cf96358eb500-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.7-n-34d8c498b2\" (UID: \"c191314ae758f8b44221cf96358eb500\") " pod="kube-system/kube-apiserver-ci-3510.3.7-n-34d8c498b2" May 17 00:54:55.781056 kubelet[2412]: I0517 00:54:55.780857 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3a89d00db651627f6e601abe692fa5a7-ca-certs\") pod \"kube-controller-manager-ci-3510.3.7-n-34d8c498b2\" (UID: \"3a89d00db651627f6e601abe692fa5a7\") " pod="kube-system/kube-controller-manager-ci-3510.3.7-n-34d8c498b2" May 17 00:54:55.979227 kubelet[2412]: I0517 00:54:55.979190 2412 apiserver.go:52] "Watching apiserver" May 17 00:54:55.981126 kubelet[2412]: I0517 00:54:55.981093 2412 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" May 17 00:54:56.159432 kubelet[2412]: I0517 00:54:56.159296 2412 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.7-n-34d8c498b2" podStartSLOduration=1.1592763289999999 
podStartE2EDuration="1.159276329s" podCreationTimestamp="2025-05-17 00:54:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:54:56.157943506 +0000 UTC m=+1.269413628" watchObservedRunningTime="2025-05-17 00:54:56.159276329 +0000 UTC m=+1.270746451" May 17 00:54:56.183884 kubelet[2412]: I0517 00:54:56.183823 2412 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.7-n-34d8c498b2" podStartSLOduration=1.183801069 podStartE2EDuration="1.183801069s" podCreationTimestamp="2025-05-17 00:54:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:54:56.171800554 +0000 UTC m=+1.283270576" watchObservedRunningTime="2025-05-17 00:54:56.183801069 +0000 UTC m=+1.295271091" May 17 00:54:56.199576 kubelet[2412]: I0517 00:54:56.199529 2412 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.7-n-34d8c498b2" podStartSLOduration=2.19949775 podStartE2EDuration="2.19949775s" podCreationTimestamp="2025-05-17 00:54:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:54:56.184808387 +0000 UTC m=+1.296278409" watchObservedRunningTime="2025-05-17 00:54:56.19949775 +0000 UTC m=+1.310967772" May 17 00:54:56.758446 sudo[1725]: pam_unix(sudo:session): session closed for user root May 17 00:54:56.891052 sshd[1722]: pam_unix(sshd:session): session closed for user core May 17 00:54:56.894547 systemd[1]: sshd@4-10.200.4.13:22-10.200.16.10:39966.service: Deactivated successfully. May 17 00:54:56.895658 systemd[1]: session-7.scope: Deactivated successfully. May 17 00:54:56.895876 systemd[1]: session-7.scope: Consumed 3.145s CPU time. 
May 17 00:54:56.896530 systemd-logind[1424]: Session 7 logged out. Waiting for processes to exit. May 17 00:54:56.897668 systemd-logind[1424]: Removed session 7. May 17 00:54:59.213560 systemd[1]: Created slice kubepods-besteffort-poda51b5c9c_f954_430a_b133_bb1008677456.slice. May 17 00:54:59.226069 systemd[1]: Created slice kubepods-burstable-pod5799d9ee_0af9_43a7_b043_ea882789a3e3.slice. May 17 00:54:59.261151 kubelet[2412]: I0517 00:54:59.261102 2412 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 17 00:54:59.261967 env[1438]: time="2025-05-17T00:54:59.261924214Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 17 00:54:59.262322 kubelet[2412]: I0517 00:54:59.262164 2412 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 17 00:54:59.401983 kubelet[2412]: I0517 00:54:59.401929 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/5799d9ee-0af9-43a7-b043-ea882789a3e3-cni-plugin\") pod \"kube-flannel-ds-pmq4r\" (UID: \"5799d9ee-0af9-43a7-b043-ea882789a3e3\") " pod="kube-flannel/kube-flannel-ds-pmq4r" May 17 00:54:59.401983 kubelet[2412]: I0517 00:54:59.401978 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/5799d9ee-0af9-43a7-b043-ea882789a3e3-flannel-cfg\") pod \"kube-flannel-ds-pmq4r\" (UID: \"5799d9ee-0af9-43a7-b043-ea882789a3e3\") " pod="kube-flannel/kube-flannel-ds-pmq4r" May 17 00:54:59.402255 kubelet[2412]: I0517 00:54:59.402010 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a51b5c9c-f954-430a-b133-bb1008677456-xtables-lock\") pod \"kube-proxy-n8x58\" (UID: 
\"a51b5c9c-f954-430a-b133-bb1008677456\") " pod="kube-system/kube-proxy-n8x58" May 17 00:54:59.402255 kubelet[2412]: I0517 00:54:59.402039 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a51b5c9c-f954-430a-b133-bb1008677456-lib-modules\") pod \"kube-proxy-n8x58\" (UID: \"a51b5c9c-f954-430a-b133-bb1008677456\") " pod="kube-system/kube-proxy-n8x58" May 17 00:54:59.402255 kubelet[2412]: I0517 00:54:59.402063 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/5799d9ee-0af9-43a7-b043-ea882789a3e3-cni\") pod \"kube-flannel-ds-pmq4r\" (UID: \"5799d9ee-0af9-43a7-b043-ea882789a3e3\") " pod="kube-flannel/kube-flannel-ds-pmq4r" May 17 00:54:59.402255 kubelet[2412]: I0517 00:54:59.402093 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a51b5c9c-f954-430a-b133-bb1008677456-kube-proxy\") pod \"kube-proxy-n8x58\" (UID: \"a51b5c9c-f954-430a-b133-bb1008677456\") " pod="kube-system/kube-proxy-n8x58" May 17 00:54:59.402255 kubelet[2412]: I0517 00:54:59.402124 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkp5x\" (UniqueName: \"kubernetes.io/projected/a51b5c9c-f954-430a-b133-bb1008677456-kube-api-access-zkp5x\") pod \"kube-proxy-n8x58\" (UID: \"a51b5c9c-f954-430a-b133-bb1008677456\") " pod="kube-system/kube-proxy-n8x58" May 17 00:54:59.402521 kubelet[2412]: I0517 00:54:59.402154 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5799d9ee-0af9-43a7-b043-ea882789a3e3-run\") pod \"kube-flannel-ds-pmq4r\" (UID: \"5799d9ee-0af9-43a7-b043-ea882789a3e3\") " pod="kube-flannel/kube-flannel-ds-pmq4r" May 17 00:54:59.402521 
kubelet[2412]: I0517 00:54:59.402179 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5799d9ee-0af9-43a7-b043-ea882789a3e3-xtables-lock\") pod \"kube-flannel-ds-pmq4r\" (UID: \"5799d9ee-0af9-43a7-b043-ea882789a3e3\") " pod="kube-flannel/kube-flannel-ds-pmq4r" May 17 00:54:59.402521 kubelet[2412]: I0517 00:54:59.402212 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nv225\" (UniqueName: \"kubernetes.io/projected/5799d9ee-0af9-43a7-b043-ea882789a3e3-kube-api-access-nv225\") pod \"kube-flannel-ds-pmq4r\" (UID: \"5799d9ee-0af9-43a7-b043-ea882789a3e3\") " pod="kube-flannel/kube-flannel-ds-pmq4r" May 17 00:54:59.512311 kubelet[2412]: E0517 00:54:59.512172 2412 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:54:59.512311 kubelet[2412]: E0517 00:54:59.512209 2412 projected.go:194] Error preparing data for projected volume kube-api-access-zkp5x for pod kube-system/kube-proxy-n8x58: configmap "kube-root-ca.crt" not found May 17 00:54:59.512311 kubelet[2412]: E0517 00:54:59.512267 2412 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a51b5c9c-f954-430a-b133-bb1008677456-kube-api-access-zkp5x podName:a51b5c9c-f954-430a-b133-bb1008677456 nodeName:}" failed. No retries permitted until 2025-05-17 00:55:00.012248166 +0000 UTC m=+5.123718188 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-zkp5x" (UniqueName: "kubernetes.io/projected/a51b5c9c-f954-430a-b133-bb1008677456-kube-api-access-zkp5x") pod "kube-proxy-n8x58" (UID: "a51b5c9c-f954-430a-b133-bb1008677456") : configmap "kube-root-ca.crt" not found May 17 00:54:59.513683 kubelet[2412]: E0517 00:54:59.513654 2412 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:54:59.513787 kubelet[2412]: E0517 00:54:59.513694 2412 projected.go:194] Error preparing data for projected volume kube-api-access-nv225 for pod kube-flannel/kube-flannel-ds-pmq4r: configmap "kube-root-ca.crt" not found May 17 00:54:59.513787 kubelet[2412]: E0517 00:54:59.513741 2412 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5799d9ee-0af9-43a7-b043-ea882789a3e3-kube-api-access-nv225 podName:5799d9ee-0af9-43a7-b043-ea882789a3e3 nodeName:}" failed. No retries permitted until 2025-05-17 00:55:00.01372229 +0000 UTC m=+5.125192312 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nv225" (UniqueName: "kubernetes.io/projected/5799d9ee-0af9-43a7-b043-ea882789a3e3-kube-api-access-nv225") pod "kube-flannel-ds-pmq4r" (UID: "5799d9ee-0af9-43a7-b043-ea882789a3e3") : configmap "kube-root-ca.crt" not found May 17 00:55:00.106261 kubelet[2412]: E0517 00:55:00.106214 2412 projected.go:288] Couldn't get configMap kube-flannel/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:55:00.106261 kubelet[2412]: E0517 00:55:00.106254 2412 projected.go:194] Error preparing data for projected volume kube-api-access-nv225 for pod kube-flannel/kube-flannel-ds-pmq4r: configmap "kube-root-ca.crt" not found May 17 00:55:00.106549 kubelet[2412]: E0517 00:55:00.106332 2412 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5799d9ee-0af9-43a7-b043-ea882789a3e3-kube-api-access-nv225 podName:5799d9ee-0af9-43a7-b043-ea882789a3e3 nodeName:}" failed. No retries permitted until 2025-05-17 00:55:01.106311974 +0000 UTC m=+6.217782096 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-nv225" (UniqueName: "kubernetes.io/projected/5799d9ee-0af9-43a7-b043-ea882789a3e3-kube-api-access-nv225") pod "kube-flannel-ds-pmq4r" (UID: "5799d9ee-0af9-43a7-b043-ea882789a3e3") : configmap "kube-root-ca.crt" not found May 17 00:55:00.106549 kubelet[2412]: E0517 00:55:00.106214 2412 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found May 17 00:55:00.106549 kubelet[2412]: E0517 00:55:00.106382 2412 projected.go:194] Error preparing data for projected volume kube-api-access-zkp5x for pod kube-system/kube-proxy-n8x58: configmap "kube-root-ca.crt" not found May 17 00:55:00.106549 kubelet[2412]: E0517 00:55:00.106422 2412 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/a51b5c9c-f954-430a-b133-bb1008677456-kube-api-access-zkp5x podName:a51b5c9c-f954-430a-b133-bb1008677456 nodeName:}" failed. No retries permitted until 2025-05-17 00:55:01.106412076 +0000 UTC m=+6.217882098 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-zkp5x" (UniqueName: "kubernetes.io/projected/a51b5c9c-f954-430a-b133-bb1008677456-kube-api-access-zkp5x") pod "kube-proxy-n8x58" (UID: "a51b5c9c-f954-430a-b133-bb1008677456") : configmap "kube-root-ca.crt" not found May 17 00:55:01.111057 kubelet[2412]: I0517 00:55:01.110994 2412 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" May 17 00:55:01.324028 env[1438]: time="2025-05-17T00:55:01.323982033Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8x58,Uid:a51b5c9c-f954-430a-b133-bb1008677456,Namespace:kube-system,Attempt:0,}" May 17 00:55:01.329944 env[1438]: time="2025-05-17T00:55:01.329908126Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pmq4r,Uid:5799d9ee-0af9-43a7-b043-ea882789a3e3,Namespace:kube-flannel,Attempt:0,}" May 17 00:55:01.388026 env[1438]: time="2025-05-17T00:55:01.387754538Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:55:01.388026 env[1438]: time="2025-05-17T00:55:01.387796939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:55:01.388026 env[1438]: time="2025-05-17T00:55:01.387811839Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:55:01.388470 env[1438]: time="2025-05-17T00:55:01.387992842Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f7f43cacc0e4a0d9f0191c7523368c7882b3f4e3dfc9a802830551ab6ec4ad12 pid=2474 runtime=io.containerd.runc.v2 May 17 00:55:01.397580 env[1438]: time="2025-05-17T00:55:01.397396790Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:55:01.397580 env[1438]: time="2025-05-17T00:55:01.397440891Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:55:01.397580 env[1438]: time="2025-05-17T00:55:01.397464491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:55:01.397969 env[1438]: time="2025-05-17T00:55:01.397891198Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9136bafd249962ce376669a612dc8dad15b8dc84c15172c9c0ad53ee02517372 pid=2496 runtime=io.containerd.runc.v2 May 17 00:55:01.407723 systemd[1]: Started cri-containerd-f7f43cacc0e4a0d9f0191c7523368c7882b3f4e3dfc9a802830551ab6ec4ad12.scope. May 17 00:55:01.421348 systemd[1]: Started cri-containerd-9136bafd249962ce376669a612dc8dad15b8dc84c15172c9c0ad53ee02517372.scope. May 17 00:55:01.451945 env[1438]: time="2025-05-17T00:55:01.451885349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-n8x58,Uid:a51b5c9c-f954-430a-b133-bb1008677456,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7f43cacc0e4a0d9f0191c7523368c7882b3f4e3dfc9a802830551ab6ec4ad12\"" May 17 00:55:01.456224 env[1438]: time="2025-05-17T00:55:01.456190317Z" level=info msg="CreateContainer within sandbox \"f7f43cacc0e4a0d9f0191c7523368c7882b3f4e3dfc9a802830551ab6ec4ad12\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 17 00:55:01.474866 env[1438]: time="2025-05-17T00:55:01.474825011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-pmq4r,Uid:5799d9ee-0af9-43a7-b043-ea882789a3e3,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"9136bafd249962ce376669a612dc8dad15b8dc84c15172c9c0ad53ee02517372\"" May 17 00:55:01.477679 env[1438]: time="2025-05-17T00:55:01.477639855Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 17 00:55:01.496852 env[1438]: time="2025-05-17T00:55:01.496826358Z" level=info msg="CreateContainer within sandbox 
\"f7f43cacc0e4a0d9f0191c7523368c7882b3f4e3dfc9a802830551ab6ec4ad12\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"2368abe4687cfce27c2808a0fca4283dad966ca123352872595ee5cdb1df973f\"" May 17 00:55:01.498446 env[1438]: time="2025-05-17T00:55:01.498376582Z" level=info msg="StartContainer for \"2368abe4687cfce27c2808a0fca4283dad966ca123352872595ee5cdb1df973f\"" May 17 00:55:01.516538 systemd[1]: Started cri-containerd-2368abe4687cfce27c2808a0fca4283dad966ca123352872595ee5cdb1df973f.scope. May 17 00:55:01.555960 env[1438]: time="2025-05-17T00:55:01.555919389Z" level=info msg="StartContainer for \"2368abe4687cfce27c2808a0fca4283dad966ca123352872595ee5cdb1df973f\" returns successfully" May 17 00:55:03.582027 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2223194674.mount: Deactivated successfully. May 17 00:55:03.676470 env[1438]: time="2025-05-17T00:55:03.676411207Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:55:03.700395 env[1438]: time="2025-05-17T00:55:03.700327766Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:55:03.703601 env[1438]: time="2025-05-17T00:55:03.703565814Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:55:03.708412 env[1438]: time="2025-05-17T00:55:03.708345586Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:55:03.708841 env[1438]: 
time="2025-05-17T00:55:03.708810393Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" May 17 00:55:03.711557 env[1438]: time="2025-05-17T00:55:03.711132228Z" level=info msg="CreateContainer within sandbox \"9136bafd249962ce376669a612dc8dad15b8dc84c15172c9c0ad53ee02517372\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 17 00:55:03.734336 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2747911463.mount: Deactivated successfully. May 17 00:55:03.752621 env[1438]: time="2025-05-17T00:55:03.752531948Z" level=info msg="CreateContainer within sandbox \"9136bafd249962ce376669a612dc8dad15b8dc84c15172c9c0ad53ee02517372\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"3833a4ffe0c4a4c9acc94191b0932d8b9f1136ec1ec347b12665952c07d03341\"" May 17 00:55:03.753358 env[1438]: time="2025-05-17T00:55:03.753308160Z" level=info msg="StartContainer for \"3833a4ffe0c4a4c9acc94191b0932d8b9f1136ec1ec347b12665952c07d03341\"" May 17 00:55:03.769780 systemd[1]: Started cri-containerd-3833a4ffe0c4a4c9acc94191b0932d8b9f1136ec1ec347b12665952c07d03341.scope. May 17 00:55:03.799948 systemd[1]: cri-containerd-3833a4ffe0c4a4c9acc94191b0932d8b9f1136ec1ec347b12665952c07d03341.scope: Deactivated successfully. 
May 17 00:55:03.801475 env[1438]: time="2025-05-17T00:55:03.801423482Z" level=info msg="StartContainer for \"3833a4ffe0c4a4c9acc94191b0932d8b9f1136ec1ec347b12665952c07d03341\" returns successfully" May 17 00:55:03.930804 env[1438]: time="2025-05-17T00:55:03.930746321Z" level=info msg="shim disconnected" id=3833a4ffe0c4a4c9acc94191b0932d8b9f1136ec1ec347b12665952c07d03341 May 17 00:55:03.930804 env[1438]: time="2025-05-17T00:55:03.930798021Z" level=warning msg="cleaning up after shim disconnected" id=3833a4ffe0c4a4c9acc94191b0932d8b9f1136ec1ec347b12665952c07d03341 namespace=k8s.io May 17 00:55:03.931092 env[1438]: time="2025-05-17T00:55:03.930810922Z" level=info msg="cleaning up dead shim" May 17 00:55:03.938483 env[1438]: time="2025-05-17T00:55:03.938445736Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:55:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2761 runtime=io.containerd.runc.v2\n" May 17 00:55:04.493805 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2010383182.mount: Deactivated successfully. May 17 00:55:04.547710 env[1438]: time="2025-05-17T00:55:04.547657170Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 17 00:55:04.563705 kubelet[2412]: I0517 00:55:04.562609 2412 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-n8x58" podStartSLOduration=5.562562188 podStartE2EDuration="5.562562188s" podCreationTimestamp="2025-05-17 00:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:55:02.555802135 +0000 UTC m=+7.667272257" watchObservedRunningTime="2025-05-17 00:55:04.562562188 +0000 UTC m=+9.674032310" May 17 00:55:06.662152 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4206014120.mount: Deactivated successfully. 
May 17 00:55:07.707061 env[1438]: time="2025-05-17T00:55:07.706976699Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:55:07.716635 env[1438]: time="2025-05-17T00:55:07.716484528Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:55:07.722348 env[1438]: time="2025-05-17T00:55:07.722263107Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:55:07.728504 env[1438]: time="2025-05-17T00:55:07.728394490Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" May 17 00:55:07.729315 env[1438]: time="2025-05-17T00:55:07.729284802Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" May 17 00:55:07.733313 env[1438]: time="2025-05-17T00:55:07.733233956Z" level=info msg="CreateContainer within sandbox \"9136bafd249962ce376669a612dc8dad15b8dc84c15172c9c0ad53ee02517372\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 17 00:55:07.770552 env[1438]: time="2025-05-17T00:55:07.770517363Z" level=info msg="CreateContainer within sandbox \"9136bafd249962ce376669a612dc8dad15b8dc84c15172c9c0ad53ee02517372\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"7b1772b0dffd5565b09fcbca119b148fa647cc5782762d87232c804790e59257\"" May 17 00:55:07.771639 env[1438]: time="2025-05-17T00:55:07.770932868Z" level=info msg="StartContainer for 
\"7b1772b0dffd5565b09fcbca119b148fa647cc5782762d87232c804790e59257\"" May 17 00:55:07.797108 systemd[1]: Started cri-containerd-7b1772b0dffd5565b09fcbca119b148fa647cc5782762d87232c804790e59257.scope. May 17 00:55:07.821723 systemd[1]: cri-containerd-7b1772b0dffd5565b09fcbca119b148fa647cc5782762d87232c804790e59257.scope: Deactivated successfully. May 17 00:55:07.825788 env[1438]: time="2025-05-17T00:55:07.825746814Z" level=info msg="StartContainer for \"7b1772b0dffd5565b09fcbca119b148fa647cc5782762d87232c804790e59257\" returns successfully" May 17 00:55:07.834999 kubelet[2412]: I0517 00:55:07.834966 2412 kubelet_node_status.go:488] "Fast updating node status as it just became ready" May 17 00:55:07.849219 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b1772b0dffd5565b09fcbca119b148fa647cc5782762d87232c804790e59257-rootfs.mount: Deactivated successfully. May 17 00:55:07.882445 systemd[1]: Created slice kubepods-burstable-podfea2abda_c81b_4643_a941_2d1eb0db1a8d.slice. May 17 00:55:07.893787 systemd[1]: Created slice kubepods-burstable-podacad8ee1_57d2_4df9_8e6f_eead6781fee8.slice. 
May 17 00:55:07.964078 kubelet[2412]: I0517 00:55:07.963960 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/fea2abda-c81b-4643-a941-2d1eb0db1a8d-config-volume\") pod \"coredns-7c65d6cfc9-hdc97\" (UID: \"fea2abda-c81b-4643-a941-2d1eb0db1a8d\") " pod="kube-system/coredns-7c65d6cfc9-hdc97" May 17 00:55:07.964315 kubelet[2412]: I0517 00:55:07.964293 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f56mn\" (UniqueName: \"kubernetes.io/projected/acad8ee1-57d2-4df9-8e6f-eead6781fee8-kube-api-access-f56mn\") pod \"coredns-7c65d6cfc9-nbbbc\" (UID: \"acad8ee1-57d2-4df9-8e6f-eead6781fee8\") " pod="kube-system/coredns-7c65d6cfc9-nbbbc" May 17 00:55:07.964463 kubelet[2412]: I0517 00:55:07.964446 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/acad8ee1-57d2-4df9-8e6f-eead6781fee8-config-volume\") pod \"coredns-7c65d6cfc9-nbbbc\" (UID: \"acad8ee1-57d2-4df9-8e6f-eead6781fee8\") " pod="kube-system/coredns-7c65d6cfc9-nbbbc" May 17 00:55:07.964591 kubelet[2412]: I0517 00:55:07.964577 2412 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tf7jh\" (UniqueName: \"kubernetes.io/projected/fea2abda-c81b-4643-a941-2d1eb0db1a8d-kube-api-access-tf7jh\") pod \"coredns-7c65d6cfc9-hdc97\" (UID: \"fea2abda-c81b-4643-a941-2d1eb0db1a8d\") " pod="kube-system/coredns-7c65d6cfc9-hdc97" May 17 00:55:08.228347 env[1438]: time="2025-05-17T00:55:08.227907409Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nbbbc,Uid:acad8ee1-57d2-4df9-8e6f-eead6781fee8,Namespace:kube-system,Attempt:0,}" May 17 00:55:08.228347 env[1438]: time="2025-05-17T00:55:08.228047510Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-hdc97,Uid:fea2abda-c81b-4643-a941-2d1eb0db1a8d,Namespace:kube-system,Attempt:0,}" May 17 00:55:08.380735 env[1438]: time="2025-05-17T00:55:08.380666936Z" level=info msg="shim disconnected" id=7b1772b0dffd5565b09fcbca119b148fa647cc5782762d87232c804790e59257 May 17 00:55:08.380735 env[1438]: time="2025-05-17T00:55:08.380736237Z" level=warning msg="cleaning up after shim disconnected" id=7b1772b0dffd5565b09fcbca119b148fa647cc5782762d87232c804790e59257 namespace=k8s.io May 17 00:55:08.380990 env[1438]: time="2025-05-17T00:55:08.380747937Z" level=info msg="cleaning up dead shim" May 17 00:55:08.389472 env[1438]: time="2025-05-17T00:55:08.389435952Z" level=warning msg="cleanup warnings time=\"2025-05-17T00:55:08Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2821 runtime=io.containerd.runc.v2\n" May 17 00:55:08.444588 env[1438]: time="2025-05-17T00:55:08.444516683Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nbbbc,Uid:acad8ee1-57d2-4df9-8e6f-eead6781fee8,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"799aadccda727545b8d90fea8814fe9412cd63287dcdece20466732cbb44c330\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 17 00:55:08.445031 kubelet[2412]: E0517 00:55:08.444989 2412 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799aadccda727545b8d90fea8814fe9412cd63287dcdece20466732cbb44c330\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 17 00:55:08.445141 kubelet[2412]: E0517 00:55:08.445056 2412 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"799aadccda727545b8d90fea8814fe9412cd63287dcdece20466732cbb44c330\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-nbbbc" May 17 00:55:08.445141 kubelet[2412]: E0517 00:55:08.445083 2412 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"799aadccda727545b8d90fea8814fe9412cd63287dcdece20466732cbb44c330\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-nbbbc" May 17 00:55:08.445234 kubelet[2412]: E0517 00:55:08.445137 2412 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-nbbbc_kube-system(acad8ee1-57d2-4df9-8e6f-eead6781fee8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-nbbbc_kube-system(acad8ee1-57d2-4df9-8e6f-eead6781fee8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"799aadccda727545b8d90fea8814fe9412cd63287dcdece20466732cbb44c330\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7c65d6cfc9-nbbbc" podUID="acad8ee1-57d2-4df9-8e6f-eead6781fee8" May 17 00:55:08.459993 env[1438]: time="2025-05-17T00:55:08.459953488Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hdc97,Uid:fea2abda-c81b-4643-a941-2d1eb0db1a8d,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"381aa5ca02e9390e1894ee3cb6c63e08958d903c5b65432be85f567deee3b620\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 17 00:55:08.460493 kubelet[2412]: E0517 00:55:08.460198 2412 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"381aa5ca02e9390e1894ee3cb6c63e08958d903c5b65432be85f567deee3b620\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 17 00:55:08.460493 kubelet[2412]: E0517 00:55:08.460249 2412 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"381aa5ca02e9390e1894ee3cb6c63e08958d903c5b65432be85f567deee3b620\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-hdc97" May 17 00:55:08.460493 kubelet[2412]: E0517 00:55:08.460268 2412 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"381aa5ca02e9390e1894ee3cb6c63e08958d903c5b65432be85f567deee3b620\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7c65d6cfc9-hdc97" May 17 00:55:08.460493 kubelet[2412]: E0517 00:55:08.460308 2412 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-hdc97_kube-system(fea2abda-c81b-4643-a941-2d1eb0db1a8d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-hdc97_kube-system(fea2abda-c81b-4643-a941-2d1eb0db1a8d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"381aa5ca02e9390e1894ee3cb6c63e08958d903c5b65432be85f567deee3b620\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7c65d6cfc9-hdc97" podUID="fea2abda-c81b-4643-a941-2d1eb0db1a8d" May 17 00:55:08.608011 env[1438]: time="2025-05-17T00:55:08.602155275Z" level=info 
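Editor's note: every `CreatePodSandbox` failure above has the same root cause — the flannel CNI plugin reads the node's subnet lease from `/run/flannel/subnet.env`, and that file does not exist until the kube-flannel daemon (started just below) has obtained a lease. As a minimal sketch, the file's contents here are an assumption reconstructed from the lease visible later in this log (node subnet `192.168.0.0/24`, cluster route `192.168.0.0/17`, MTU 1450, `ipMasq` false); the parser mimics how the plugin reads it.

```python
# Hypothetical reconstruction of /run/flannel/subnet.env, whose absence
# causes the "loadFlannelSubnetEnv failed" errors in the log above.
# Values are inferred from the netconf containerd logs once flannel is up.
SUBNET_ENV = """\
FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=false
"""

def parse_subnet_env(text: str) -> dict:
    """Parse KEY=VALUE lines the way flannel's CNI plugin consumes them."""
    return dict(line.split("=", 1) for line in text.splitlines() if line)

env = parse_subnet_env(SUBNET_ENV)
# The per-node /24 handed to the delegated bridge plugin:
print(env["FLANNEL_SUBNET"])  # → 192.168.0.1/24
```

Once flanneld writes this file (after `flannel.1: Link UP` below), the same sandboxes are retried and succeed.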
msg="CreateContainer within sandbox \"9136bafd249962ce376669a612dc8dad15b8dc84c15172c9c0ad53ee02517372\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 17 00:55:08.636305 env[1438]: time="2025-05-17T00:55:08.636222027Z" level=info msg="CreateContainer within sandbox \"9136bafd249962ce376669a612dc8dad15b8dc84c15172c9c0ad53ee02517372\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"a878c72c91f2fb30efa220e2de033895c624102f8864b1fe2ef74cca91e81867\"" May 17 00:55:08.637803 env[1438]: time="2025-05-17T00:55:08.636933236Z" level=info msg="StartContainer for \"a878c72c91f2fb30efa220e2de033895c624102f8864b1fe2ef74cca91e81867\"" May 17 00:55:08.653957 systemd[1]: Started cri-containerd-a878c72c91f2fb30efa220e2de033895c624102f8864b1fe2ef74cca91e81867.scope. May 17 00:55:08.698546 env[1438]: time="2025-05-17T00:55:08.698486653Z" level=info msg="StartContainer for \"a878c72c91f2fb30efa220e2de033895c624102f8864b1fe2ef74cca91e81867\" returns successfully" May 17 00:55:09.867112 systemd-networkd[1586]: flannel.1: Link UP May 17 00:55:09.867122 systemd-networkd[1586]: flannel.1: Gained carrier May 17 00:55:11.232528 systemd-networkd[1586]: flannel.1: Gained IPv6LL May 17 00:55:23.509487 env[1438]: time="2025-05-17T00:55:23.509424545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nbbbc,Uid:acad8ee1-57d2-4df9-8e6f-eead6781fee8,Namespace:kube-system,Attempt:0,}" May 17 00:55:23.510103 env[1438]: time="2025-05-17T00:55:23.510052051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hdc97,Uid:fea2abda-c81b-4643-a941-2d1eb0db1a8d,Namespace:kube-system,Attempt:0,}" May 17 00:55:23.582127 systemd-networkd[1586]: cni0: Link UP May 17 00:55:23.582136 systemd-networkd[1586]: cni0: Gained carrier May 17 00:55:23.586263 systemd-networkd[1586]: cni0: Lost carrier May 17 00:55:23.620762 systemd-networkd[1586]: veth2eafb034: Link UP May 17 00:55:23.628157 kernel: cni0: port 
1(veth2eafb034) entered blocking state May 17 00:55:23.628245 kernel: cni0: port 1(veth2eafb034) entered disabled state May 17 00:55:23.632630 kernel: device veth2eafb034 entered promiscuous mode May 17 00:55:23.641347 kernel: cni0: port 1(veth2eafb034) entered blocking state May 17 00:55:23.641422 kernel: cni0: port 1(veth2eafb034) entered forwarding state May 17 00:55:23.641447 kernel: cni0: port 1(veth2eafb034) entered disabled state May 17 00:55:23.642408 systemd-networkd[1586]: veth3c95da95: Link UP May 17 00:55:23.649420 kernel: cni0: port 2(veth3c95da95) entered blocking state May 17 00:55:23.649488 kernel: cni0: port 2(veth3c95da95) entered disabled state May 17 00:55:23.654936 kernel: device veth3c95da95 entered promiscuous mode May 17 00:55:23.654990 kernel: cni0: port 2(veth3c95da95) entered blocking state May 17 00:55:23.655020 kernel: cni0: port 2(veth3c95da95) entered forwarding state May 17 00:55:23.660511 kernel: cni0: port 2(veth3c95da95) entered disabled state May 17 00:55:23.677423 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth2eafb034: link becomes ready May 17 00:55:23.677486 kernel: cni0: port 1(veth2eafb034) entered blocking state May 17 00:55:23.677511 kernel: cni0: port 1(veth2eafb034) entered forwarding state May 17 00:55:23.678152 systemd-networkd[1586]: veth2eafb034: Gained carrier May 17 00:55:23.679051 systemd-networkd[1586]: cni0: Gained carrier May 17 00:55:23.684465 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth3c95da95: link becomes ready May 17 00:55:23.684945 env[1438]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), 
"name":"cbr0", "type":"bridge"} May 17 00:55:23.684945 env[1438]: delegateAdd: netconf sent to delegate plugin: May 17 00:55:23.691081 kernel: cni0: port 2(veth3c95da95) entered blocking state May 17 00:55:23.691157 kernel: cni0: port 2(veth3c95da95) entered forwarding state May 17 00:55:23.691259 systemd-networkd[1586]: veth3c95da95: Gained carrier May 17 00:55:23.708111 env[1438]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-17T00:55:23.708055027Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:55:23.708263 env[1438]: time="2025-05-17T00:55:23.708089027Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:55:23.708263 env[1438]: time="2025-05-17T00:55:23.708102227Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:55:23.708263 env[1438]: time="2025-05-17T00:55:23.708233728Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a22d5ae6056cff342767cc215563ffde795971968a302ac1920fc144d50cada8 pid=3127 runtime=io.containerd.runc.v2 May 17 00:55:23.726511 systemd[1]: Started cri-containerd-a22d5ae6056cff342767cc215563ffde795971968a302ac1920fc144d50cada8.scope. 
May 17 00:55:23.735857 env[1438]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"} May 17 00:55:23.735857 env[1438]: delegateAdd: netconf sent to delegate plugin: May 17 00:55:23.756527 env[1438]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-17T00:55:23.756470785Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 May 17 00:55:23.756650 env[1438]: time="2025-05-17T00:55:23.756545086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 May 17 00:55:23.756650 env[1438]: time="2025-05-17T00:55:23.756575586Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 May 17 00:55:23.756776 env[1438]: time="2025-05-17T00:55:23.756740688Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/11750c387f122c6fac91048777fe15d77bd98ae3a72b51bb6d1016c92c690308 pid=3172 runtime=io.containerd.runc.v2 May 17 00:55:23.771674 systemd[1]: Started cri-containerd-11750c387f122c6fac91048777fe15d77bd98ae3a72b51bb6d1016c92c690308.scope. 
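Editor's note: the `delegateAdd: netconf sent to delegate plugin` dumps above show flannel handing the bridge plugin a `cbr0` config with host-local IPAM. The JSON below is copied verbatim from the log; the consistency check (the node's /24 must sit inside the /17 routed over `flannel.1`) is an illustrative assumption, not part of the plugin.

```python
import ipaddress
import json

# Delegate netconf exactly as containerd logged it above.
NETCONF = (
    '{"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,'
    '"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],'
    '"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},'
    '"isDefaultGateway":true,"isGateway":true,"mtu":1450,'
    '"name":"cbr0","type":"bridge"}'
)

conf = json.loads(NETCONF)
node_subnet = ipaddress.ip_network(conf["ipam"]["ranges"][0][0]["subnet"])
cluster_route = ipaddress.ip_network(conf["ipam"]["routes"][0]["dst"])

# Sanity check: pods get addresses from this node's /24, and all other
# flannel subnets are reachable via the /17 route installed in each pod.
assert node_subnet.subnet_of(cluster_route)
print(conf["type"], conf["name"], conf["mtu"])  # → bridge cbr0 1450
```

The MTU of 1450 leaves room for the VXLAN overhead of the `flannel.1` device on a standard 1500-byte link.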
May 17 00:55:23.794268 env[1438]: time="2025-05-17T00:55:23.794216143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-hdc97,Uid:fea2abda-c81b-4643-a941-2d1eb0db1a8d,Namespace:kube-system,Attempt:0,} returns sandbox id \"a22d5ae6056cff342767cc215563ffde795971968a302ac1920fc144d50cada8\"" May 17 00:55:23.799583 env[1438]: time="2025-05-17T00:55:23.799549193Z" level=info msg="CreateContainer within sandbox \"a22d5ae6056cff342767cc215563ffde795971968a302ac1920fc144d50cada8\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:55:23.827183 env[1438]: time="2025-05-17T00:55:23.827143755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-nbbbc,Uid:acad8ee1-57d2-4df9-8e6f-eead6781fee8,Namespace:kube-system,Attempt:0,} returns sandbox id \"11750c387f122c6fac91048777fe15d77bd98ae3a72b51bb6d1016c92c690308\"" May 17 00:55:23.831210 env[1438]: time="2025-05-17T00:55:23.829639078Z" level=info msg="CreateContainer within sandbox \"11750c387f122c6fac91048777fe15d77bd98ae3a72b51bb6d1016c92c690308\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 17 00:55:23.850123 env[1438]: time="2025-05-17T00:55:23.850018771Z" level=info msg="CreateContainer within sandbox \"a22d5ae6056cff342767cc215563ffde795971968a302ac1920fc144d50cada8\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d47a3af56a5f27aaa55adee615ba41e0d45307aae93e00e68e060657da68d051\"" May 17 00:55:23.852299 env[1438]: time="2025-05-17T00:55:23.850950980Z" level=info msg="StartContainer for \"d47a3af56a5f27aaa55adee615ba41e0d45307aae93e00e68e060657da68d051\"" May 17 00:55:23.869690 systemd[1]: Started cri-containerd-d47a3af56a5f27aaa55adee615ba41e0d45307aae93e00e68e060657da68d051.scope. 
May 17 00:55:23.883528 env[1438]: time="2025-05-17T00:55:23.883484188Z" level=info msg="CreateContainer within sandbox \"11750c387f122c6fac91048777fe15d77bd98ae3a72b51bb6d1016c92c690308\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"87a94d994867680f11934a89611d67b9aea27330a49bbd571fbd82b1be02d4a0\"" May 17 00:55:23.884322 env[1438]: time="2025-05-17T00:55:23.884282696Z" level=info msg="StartContainer for \"87a94d994867680f11934a89611d67b9aea27330a49bbd571fbd82b1be02d4a0\"" May 17 00:55:23.906486 env[1438]: time="2025-05-17T00:55:23.906419806Z" level=info msg="StartContainer for \"d47a3af56a5f27aaa55adee615ba41e0d45307aae93e00e68e060657da68d051\" returns successfully" May 17 00:55:23.919682 systemd[1]: Started cri-containerd-87a94d994867680f11934a89611d67b9aea27330a49bbd571fbd82b1be02d4a0.scope. May 17 00:55:23.960534 env[1438]: time="2025-05-17T00:55:23.960491018Z" level=info msg="StartContainer for \"87a94d994867680f11934a89611d67b9aea27330a49bbd571fbd82b1be02d4a0\" returns successfully" May 17 00:55:24.653394 kubelet[2412]: I0517 00:55:24.653053 2412 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-pmq4r" podStartSLOduration=19.398184657 podStartE2EDuration="25.653034351s" podCreationTimestamp="2025-05-17 00:54:59 +0000 UTC" firstStartedPulling="2025-05-17 00:55:01.475906328 +0000 UTC m=+6.587376450" lastFinishedPulling="2025-05-17 00:55:07.730756122 +0000 UTC m=+12.842226144" observedRunningTime="2025-05-17 00:55:09.615195726 +0000 UTC m=+14.726665848" watchObservedRunningTime="2025-05-17 00:55:24.653034351 +0000 UTC m=+29.764504373" May 17 00:55:24.671756 kubelet[2412]: I0517 00:55:24.671702 2412 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-hdc97" podStartSLOduration=24.671684124 podStartE2EDuration="24.671684124s" podCreationTimestamp="2025-05-17 00:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:55:24.654312863 +0000 UTC m=+29.765782985" watchObservedRunningTime="2025-05-17 00:55:24.671684124 +0000 UTC m=+29.783154146" May 17 00:55:24.690673 kubelet[2412]: I0517 00:55:24.690618 2412 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-nbbbc" podStartSLOduration=24.6905865 podStartE2EDuration="24.6905865s" podCreationTimestamp="2025-05-17 00:55:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-17 00:55:24.690286997 +0000 UTC m=+29.801757119" watchObservedRunningTime="2025-05-17 00:55:24.6905865 +0000 UTC m=+29.802056622" May 17 00:55:25.440937 systemd-networkd[1586]: cni0: Gained IPv6LL May 17 00:55:25.441402 systemd-networkd[1586]: veth2eafb034: Gained IPv6LL May 17 00:55:25.568582 systemd-networkd[1586]: veth3c95da95: Gained IPv6LL May 17 00:57:01.143977 systemd[1]: Started sshd@5-10.200.4.13:22-10.200.16.10:38698.service. May 17 00:57:01.749855 sshd[3731]: Accepted publickey for core from 10.200.16.10 port 38698 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:01.751566 sshd[3731]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:01.757307 systemd[1]: Started session-8.scope. May 17 00:57:01.757874 systemd-logind[1424]: New session 8 of user core. May 17 00:57:02.294869 sshd[3731]: pam_unix(sshd:session): session closed for user core May 17 00:57:02.298356 systemd[1]: sshd@5-10.200.4.13:22-10.200.16.10:38698.service: Deactivated successfully. May 17 00:57:02.299495 systemd[1]: session-8.scope: Deactivated successfully. May 17 00:57:02.300404 systemd-logind[1424]: Session 8 logged out. Waiting for processes to exit. May 17 00:57:02.301411 systemd-logind[1424]: Removed session 8. 
May 17 00:57:07.397578 systemd[1]: Started sshd@6-10.200.4.13:22-10.200.16.10:38702.service. May 17 00:57:07.998854 sshd[3773]: Accepted publickey for core from 10.200.16.10 port 38702 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:08.000284 sshd[3773]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:08.005431 systemd[1]: Started session-9.scope. May 17 00:57:08.005878 systemd-logind[1424]: New session 9 of user core. May 17 00:57:08.480680 sshd[3773]: pam_unix(sshd:session): session closed for user core May 17 00:57:08.484007 systemd[1]: sshd@6-10.200.4.13:22-10.200.16.10:38702.service: Deactivated successfully. May 17 00:57:08.484997 systemd[1]: session-9.scope: Deactivated successfully. May 17 00:57:08.485664 systemd-logind[1424]: Session 9 logged out. Waiting for processes to exit. May 17 00:57:08.486466 systemd-logind[1424]: Removed session 9. May 17 00:57:13.582707 systemd[1]: Started sshd@7-10.200.4.13:22-10.200.16.10:44912.service. May 17 00:57:14.184523 sshd[3807]: Accepted publickey for core from 10.200.16.10 port 44912 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:14.185921 sshd[3807]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:14.190834 systemd-logind[1424]: New session 10 of user core. May 17 00:57:14.191325 systemd[1]: Started session-10.scope. May 17 00:57:14.680304 sshd[3807]: pam_unix(sshd:session): session closed for user core May 17 00:57:14.683772 systemd[1]: sshd@7-10.200.4.13:22-10.200.16.10:44912.service: Deactivated successfully. May 17 00:57:14.684874 systemd[1]: session-10.scope: Deactivated successfully. May 17 00:57:14.685727 systemd-logind[1424]: Session 10 logged out. Waiting for processes to exit. May 17 00:57:14.686585 systemd-logind[1424]: Removed session 10. May 17 00:57:14.780286 systemd[1]: Started sshd@8-10.200.4.13:22-10.200.16.10:44928.service. 
May 17 00:57:15.379359 sshd[3819]: Accepted publickey for core from 10.200.16.10 port 44928 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:15.381193 sshd[3819]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:15.386909 systemd-logind[1424]: New session 11 of user core. May 17 00:57:15.387886 systemd[1]: Started session-11.scope. May 17 00:57:15.915985 sshd[3819]: pam_unix(sshd:session): session closed for user core May 17 00:57:15.919053 systemd[1]: sshd@8-10.200.4.13:22-10.200.16.10:44928.service: Deactivated successfully. May 17 00:57:15.920034 systemd[1]: session-11.scope: Deactivated successfully. May 17 00:57:15.920692 systemd-logind[1424]: Session 11 logged out. Waiting for processes to exit. May 17 00:57:15.921494 systemd-logind[1424]: Removed session 11. May 17 00:57:16.017874 systemd[1]: Started sshd@9-10.200.4.13:22-10.200.16.10:44930.service. May 17 00:57:16.625465 sshd[3849]: Accepted publickey for core from 10.200.16.10 port 44930 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:16.627224 sshd[3849]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:16.633072 systemd[1]: Started session-12.scope. May 17 00:57:16.633530 systemd-logind[1424]: New session 12 of user core. May 17 00:57:17.123803 sshd[3849]: pam_unix(sshd:session): session closed for user core May 17 00:57:17.127872 systemd[1]: sshd@9-10.200.4.13:22-10.200.16.10:44930.service: Deactivated successfully. May 17 00:57:17.128933 systemd[1]: session-12.scope: Deactivated successfully. May 17 00:57:17.130435 systemd-logind[1424]: Session 12 logged out. Waiting for processes to exit. May 17 00:57:17.131439 systemd-logind[1424]: Removed session 12. May 17 00:57:22.226039 systemd[1]: Started sshd@10-10.200.4.13:22-10.200.16.10:52262.service. 
May 17 00:57:22.824104 sshd[3883]: Accepted publickey for core from 10.200.16.10 port 52262 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:22.825532 sshd[3883]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:22.830442 systemd[1]: Started session-13.scope. May 17 00:57:22.830885 systemd-logind[1424]: New session 13 of user core. May 17 00:57:23.322486 sshd[3883]: pam_unix(sshd:session): session closed for user core May 17 00:57:23.325889 systemd[1]: sshd@10-10.200.4.13:22-10.200.16.10:52262.service: Deactivated successfully. May 17 00:57:23.326984 systemd[1]: session-13.scope: Deactivated successfully. May 17 00:57:23.327704 systemd-logind[1424]: Session 13 logged out. Waiting for processes to exit. May 17 00:57:23.328576 systemd-logind[1424]: Removed session 13. May 17 00:57:28.423702 systemd[1]: Started sshd@11-10.200.4.13:22-10.200.16.10:52270.service. May 17 00:57:29.026143 sshd[3916]: Accepted publickey for core from 10.200.16.10 port 52270 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:29.027826 sshd[3916]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:29.033346 systemd[1]: Started session-14.scope. May 17 00:57:29.033797 systemd-logind[1424]: New session 14 of user core. May 17 00:57:29.507263 sshd[3916]: pam_unix(sshd:session): session closed for user core May 17 00:57:29.512838 systemd[1]: sshd@11-10.200.4.13:22-10.200.16.10:52270.service: Deactivated successfully. May 17 00:57:29.514427 systemd-logind[1424]: Session 14 logged out. Waiting for processes to exit. May 17 00:57:29.514446 systemd[1]: session-14.scope: Deactivated successfully. May 17 00:57:29.515637 systemd-logind[1424]: Removed session 14. May 17 00:57:34.608723 systemd[1]: Started sshd@12-10.200.4.13:22-10.200.16.10:59316.service. 
May 17 00:57:35.208506 sshd[3951]: Accepted publickey for core from 10.200.16.10 port 59316 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:35.210598 sshd[3951]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:35.216525 systemd[1]: Started session-15.scope. May 17 00:57:35.217277 systemd-logind[1424]: New session 15 of user core. May 17 00:57:35.706446 sshd[3951]: pam_unix(sshd:session): session closed for user core May 17 00:57:35.709632 systemd[1]: sshd@12-10.200.4.13:22-10.200.16.10:59316.service: Deactivated successfully. May 17 00:57:35.710567 systemd[1]: session-15.scope: Deactivated successfully. May 17 00:57:35.711234 systemd-logind[1424]: Session 15 logged out. Waiting for processes to exit. May 17 00:57:35.712071 systemd-logind[1424]: Removed session 15. May 17 00:57:40.809665 systemd[1]: Started sshd@13-10.200.4.13:22-10.200.16.10:55736.service. May 17 00:57:41.412228 sshd[4004]: Accepted publickey for core from 10.200.16.10 port 55736 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:41.413919 sshd[4004]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:41.418878 systemd[1]: Started session-16.scope. May 17 00:57:41.419505 systemd-logind[1424]: New session 16 of user core. May 17 00:57:41.892542 sshd[4004]: pam_unix(sshd:session): session closed for user core May 17 00:57:41.895628 systemd[1]: sshd@13-10.200.4.13:22-10.200.16.10:55736.service: Deactivated successfully. May 17 00:57:41.896571 systemd[1]: session-16.scope: Deactivated successfully. May 17 00:57:41.897383 systemd-logind[1424]: Session 16 logged out. Waiting for processes to exit. May 17 00:57:41.898197 systemd-logind[1424]: Removed session 16. May 17 00:57:41.993425 systemd[1]: Started sshd@14-10.200.4.13:22-10.200.16.10:55738.service. 
May 17 00:57:42.593236 sshd[4015]: Accepted publickey for core from 10.200.16.10 port 55738 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:42.594862 sshd[4015]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:42.599864 systemd[1]: Started session-17.scope. May 17 00:57:42.600685 systemd-logind[1424]: New session 17 of user core. May 17 00:57:43.121173 sshd[4015]: pam_unix(sshd:session): session closed for user core May 17 00:57:43.124605 systemd[1]: sshd@14-10.200.4.13:22-10.200.16.10:55738.service: Deactivated successfully. May 17 00:57:43.125656 systemd[1]: session-17.scope: Deactivated successfully. May 17 00:57:43.126630 systemd-logind[1424]: Session 17 logged out. Waiting for processes to exit. May 17 00:57:43.127758 systemd-logind[1424]: Removed session 17. May 17 00:57:43.225777 systemd[1]: Started sshd@15-10.200.4.13:22-10.200.16.10:55740.service. May 17 00:57:43.831199 sshd[4025]: Accepted publickey for core from 10.200.16.10 port 55740 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:43.832702 sshd[4025]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:43.837432 systemd-logind[1424]: New session 18 of user core. May 17 00:57:43.837667 systemd[1]: Started session-18.scope. May 17 00:57:45.632588 sshd[4025]: pam_unix(sshd:session): session closed for user core May 17 00:57:45.635614 systemd[1]: sshd@15-10.200.4.13:22-10.200.16.10:55740.service: Deactivated successfully. May 17 00:57:45.636990 systemd[1]: session-18.scope: Deactivated successfully. May 17 00:57:45.637050 systemd-logind[1424]: Session 18 logged out. Waiting for processes to exit. May 17 00:57:45.638285 systemd-logind[1424]: Removed session 18. May 17 00:57:45.734280 systemd[1]: Started sshd@16-10.200.4.13:22-10.200.16.10:55754.service. 
May 17 00:57:46.339226 sshd[4063]: Accepted publickey for core from 10.200.16.10 port 55754 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:46.340911 sshd[4063]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:46.346883 systemd[1]: Started session-19.scope. May 17 00:57:46.347637 systemd-logind[1424]: New session 19 of user core. May 17 00:57:46.937396 sshd[4063]: pam_unix(sshd:session): session closed for user core May 17 00:57:46.940792 systemd[1]: sshd@16-10.200.4.13:22-10.200.16.10:55754.service: Deactivated successfully. May 17 00:57:46.942163 systemd[1]: session-19.scope: Deactivated successfully. May 17 00:57:46.942203 systemd-logind[1424]: Session 19 logged out. Waiting for processes to exit. May 17 00:57:46.943469 systemd-logind[1424]: Removed session 19. May 17 00:57:47.038848 systemd[1]: Started sshd@17-10.200.4.13:22-10.200.16.10:55766.service. May 17 00:57:47.637923 sshd[4073]: Accepted publickey for core from 10.200.16.10 port 55766 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:47.639680 sshd[4073]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:47.644642 systemd[1]: Started session-20.scope. May 17 00:57:47.645120 systemd-logind[1424]: New session 20 of user core. May 17 00:57:48.132892 sshd[4073]: pam_unix(sshd:session): session closed for user core May 17 00:57:48.135604 systemd[1]: sshd@17-10.200.4.13:22-10.200.16.10:55766.service: Deactivated successfully. May 17 00:57:48.137014 systemd[1]: session-20.scope: Deactivated successfully. May 17 00:57:48.137062 systemd-logind[1424]: Session 20 logged out. Waiting for processes to exit. May 17 00:57:48.138278 systemd-logind[1424]: Removed session 20. May 17 00:57:53.242036 systemd[1]: Started sshd@18-10.200.4.13:22-10.200.16.10:54204.service. 
May 17 00:57:53.845896 sshd[4109]: Accepted publickey for core from 10.200.16.10 port 54204 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:57:53.847514 sshd[4109]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:57:53.852685 systemd[1]: Started session-21.scope. May 17 00:57:53.853141 systemd-logind[1424]: New session 21 of user core. May 17 00:57:54.326243 sshd[4109]: pam_unix(sshd:session): session closed for user core May 17 00:57:54.329341 systemd[1]: sshd@18-10.200.4.13:22-10.200.16.10:54204.service: Deactivated successfully. May 17 00:57:54.330198 systemd[1]: session-21.scope: Deactivated successfully. May 17 00:57:54.330877 systemd-logind[1424]: Session 21 logged out. Waiting for processes to exit. May 17 00:57:54.331699 systemd-logind[1424]: Removed session 21. May 17 00:57:59.426096 systemd[1]: Started sshd@19-10.200.4.13:22-10.200.16.10:48688.service. May 17 00:58:00.026837 sshd[4144]: Accepted publickey for core from 10.200.16.10 port 48688 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:58:00.028590 sshd[4144]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:58:00.034817 systemd-logind[1424]: New session 22 of user core. May 17 00:58:00.035566 systemd[1]: Started session-22.scope. May 17 00:58:00.504201 sshd[4144]: pam_unix(sshd:session): session closed for user core May 17 00:58:00.506945 systemd[1]: sshd@19-10.200.4.13:22-10.200.16.10:48688.service: Deactivated successfully. May 17 00:58:00.507924 systemd[1]: session-22.scope: Deactivated successfully. May 17 00:58:00.508680 systemd-logind[1424]: Session 22 logged out. Waiting for processes to exit. May 17 00:58:00.509554 systemd-logind[1424]: Removed session 22. May 17 00:58:05.608141 systemd[1]: Started sshd@20-10.200.4.13:22-10.200.16.10:48698.service. 
May 17 00:58:06.215931 sshd[4193]: Accepted publickey for core from 10.200.16.10 port 48698 ssh2: RSA SHA256:CX0PhS7HkvRYFXA6Rah+UZ6VVlhBI486MhBFeLvlfpc May 17 00:58:06.217489 sshd[4193]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) May 17 00:58:06.222584 systemd[1]: Started session-23.scope. May 17 00:58:06.223217 systemd-logind[1424]: New session 23 of user core. May 17 00:58:06.702198 sshd[4193]: pam_unix(sshd:session): session closed for user core May 17 00:58:06.705724 systemd[1]: sshd@20-10.200.4.13:22-10.200.16.10:48698.service: Deactivated successfully. May 17 00:58:06.706765 systemd[1]: session-23.scope: Deactivated successfully. May 17 00:58:06.707616 systemd-logind[1424]: Session 23 logged out. Waiting for processes to exit. May 17 00:58:06.708552 systemd-logind[1424]: Removed session 23.