Feb 9 19:00:17.043051 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:00:17.043077 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:00:17.043087 kernel: BIOS-provided physical RAM map:
Feb 9 19:00:17.043094 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:00:17.043100 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 9 19:00:17.043105 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 9 19:00:17.043117 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 9 19:00:17.043123 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 9 19:00:17.043131 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 9 19:00:17.043138 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 9 19:00:17.043144 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 9 19:00:17.043152 kernel: printk: bootconsole [earlyser0] enabled
Feb 9 19:00:17.043159 kernel: NX (Execute Disable) protection: active
Feb 9 19:00:17.043165 kernel: efi: EFI v2.70 by Microsoft
Feb 9 19:00:17.043177 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 9 19:00:17.043183 kernel: random: crng init done
Feb 9 19:00:17.043191 kernel: SMBIOS 3.1.0 present.
Feb 9 19:00:17.043199 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 19:00:17.043205 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 9 19:00:17.043215 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 9 19:00:17.043221 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 9 19:00:17.043228 kernel: Hyper-V: Nested features: 0x1e0101
Feb 9 19:00:17.043239 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 9 19:00:17.043246 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 9 19:00:17.043255 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 9 19:00:17.043261 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 9 19:00:17.043269 kernel: tsc: Detected 2593.906 MHz processor
Feb 9 19:00:17.043278 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:00:17.043285 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:00:17.043294 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 9 19:00:17.043301 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:00:17.043310 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 9 19:00:17.043321 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 9 19:00:17.043328 kernel: Using GB pages for direct mapping
Feb 9 19:00:17.043337 kernel: Secure boot disabled
Feb 9 19:00:17.043345 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:00:17.043352 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 9 19:00:17.043361 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.043368 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.043377 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 19:00:17.043390 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 9 19:00:17.043400 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.043407 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.043415 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.043423 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.043431 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.043442 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.043449 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:00:17.043459 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 9 19:00:17.043466 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 9 19:00:17.043475 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 9 19:00:17.043483 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 9 19:00:17.043490 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 9 19:00:17.043500 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 9 19:00:17.043509 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 9 19:00:17.043519 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 9 19:00:17.043526 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 9 19:00:17.043535 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 9 19:00:17.043543 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:00:17.043550 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 9 19:00:17.043560 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 9 19:00:17.043567 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 9 19:00:17.043575 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 9 19:00:17.043585 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 9 19:00:17.043594 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 9 19:00:17.043602 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 9 19:00:17.043609 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 9 19:00:17.043619 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 9 19:00:17.043626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 9 19:00:17.043635 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 9 19:00:17.043643 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 9 19:00:17.043651 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 9 19:00:17.043662 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 9 19:00:17.043672 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 9 19:00:17.043680 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 9 19:00:17.043689 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 9 19:00:17.043699 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 9 19:00:17.043707 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 9 19:00:17.043715 kernel: Zone ranges:
Feb 9 19:00:17.043724 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:00:17.043731 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 19:00:17.043743 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:00:17.043750 kernel: Movable zone start for each node
Feb 9 19:00:17.043759 kernel: Early memory node ranges
Feb 9 19:00:17.043767 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 19:00:17.043774 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 9 19:00:17.043783 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 9 19:00:17.043790 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:00:17.043798 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 9 19:00:17.043807 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:00:17.043819 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 19:00:17.043828 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 9 19:00:17.043838 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 9 19:00:17.043848 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 9 19:00:17.043859 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:00:17.043867 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:00:17.043878 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:00:17.043886 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 9 19:00:17.043910 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:00:17.043920 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 9 19:00:17.043929 kernel: Booting paravirtualized kernel on Hyper-V
Feb 9 19:00:17.043938 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:00:17.043949 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:00:17.043957 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:00:17.043964 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:00:17.043973 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:00:17.043982 kernel: Hyper-V: PV spinlocks enabled
Feb 9 19:00:17.043991 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:00:17.044003 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 9 19:00:17.044012 kernel: Policy zone: Normal
Feb 9 19:00:17.044022 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:00:17.044033 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:00:17.044042 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 9 19:00:17.044052 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:00:17.044062 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:00:17.044071 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 9 19:00:17.044083 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:00:17.044095 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:00:17.044118 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:00:17.044131 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:00:17.044143 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:00:17.044155 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:00:17.044164 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:00:17.044174 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:00:17.044185 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:00:17.044193 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:00:17.044203 kernel: Using NULL legacy PIC
Feb 9 19:00:17.044214 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 9 19:00:17.044223 kernel: Console: colour dummy device 80x25
Feb 9 19:00:17.044233 kernel: printk: console [tty1] enabled
Feb 9 19:00:17.044241 kernel: printk: console [ttyS0] enabled
Feb 9 19:00:17.044251 kernel: printk: bootconsole [earlyser0] disabled
Feb 9 19:00:17.044262 kernel: ACPI: Core revision 20210730
Feb 9 19:00:17.044270 kernel: Failed to register legacy timer interrupt
Feb 9 19:00:17.044278 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:00:17.044288 kernel: Hyper-V: Using IPI hypercalls
Feb 9 19:00:17.044299 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Feb 9 19:00:17.044306 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 19:00:17.044315 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 19:00:17.044324 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:00:17.044334 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:00:17.044341 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:00:17.044359 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:00:17.044367 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 9 19:00:17.044377 kernel: RETBleed: Vulnerable
Feb 9 19:00:17.044384 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:00:17.044396 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:00:17.044404 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:00:17.044411 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 19:00:17.044418 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:00:17.044425 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:00:17.044432 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:00:17.044444 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 9 19:00:17.044451 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 9 19:00:17.044458 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 9 19:00:17.044465 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:00:17.044474 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 9 19:00:17.044482 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 9 19:00:17.044492 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 9 19:00:17.044500 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 9 19:00:17.044508 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:00:17.044518 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:00:17.044528 kernel: LSM: Security Framework initializing
Feb 9 19:00:17.044535 kernel: SELinux: Initializing.
Feb 9 19:00:17.044546 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:00:17.044555 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:00:17.044564 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 9 19:00:17.044573 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 9 19:00:17.044580 kernel: signal: max sigframe size: 3632
Feb 9 19:00:17.044591 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:00:17.044600 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:00:17.044609 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:00:17.044617 kernel: x86: Booting SMP configuration:
Feb 9 19:00:17.044626 kernel: .... node #0, CPUs: #1
Feb 9 19:00:17.044639 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 9 19:00:17.044647 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 19:00:17.044658 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:00:17.044666 kernel: smpboot: Max logical packages: 1
Feb 9 19:00:17.044676 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 9 19:00:17.044683 kernel: devtmpfs: initialized
Feb 9 19:00:17.044694 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:00:17.044702 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 9 19:00:17.044714 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:00:17.044722 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:00:17.044731 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:00:17.044741 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:00:17.044749 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:00:17.044757 kernel: audit: type=2000 audit(1707505216.023:1): state=initialized audit_enabled=0 res=1
Feb 9 19:00:17.044766 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:00:17.044777 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:00:17.044784 kernel: cpuidle: using governor menu
Feb 9 19:00:17.044796 kernel: ACPI: bus type PCI registered
Feb 9 19:00:17.044805 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:00:17.044814 kernel: dca service started, version 1.12.1
Feb 9 19:00:17.044821 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:00:17.044831 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:00:17.044842 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:00:17.044850 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:00:17.044858 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:00:17.044867 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:00:17.044879 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:00:17.044886 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:00:17.044904 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:00:17.044914 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:00:17.044921 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:00:17.044932 kernel: ACPI: Interpreter enabled
Feb 9 19:00:17.044940 kernel: ACPI: PM: (supports S0 S5)
Feb 9 19:00:17.044949 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:00:17.044957 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:00:17.044969 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 9 19:00:17.044979 kernel: iommu: Default domain type: Translated
Feb 9 19:00:17.044987 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:00:17.044996 kernel: vgaarb: loaded
Feb 9 19:00:17.045004 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:00:17.045014 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 9 19:00:17.045022 kernel: PTP clock support registered
Feb 9 19:00:17.045031 kernel: Registered efivars operations
Feb 9 19:00:17.045039 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:00:17.045049 kernel: PCI: System does not support PCI
Feb 9 19:00:17.045059 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 9 19:00:17.045069 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:00:17.045078 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:00:17.045086 kernel: pnp: PnP ACPI init
Feb 9 19:00:17.045095 kernel: pnp: PnP ACPI: found 3 devices
Feb 9 19:00:17.045104 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:00:17.045115 kernel: NET: Registered PF_INET protocol family
Feb 9 19:00:17.045122 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:00:17.045134 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 9 19:00:17.045143 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:00:17.045152 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:00:17.045160 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 19:00:17.045170 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 9 19:00:17.045179 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:00:17.045187 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:00:17.045195 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:00:17.045205 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:00:17.045217 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:00:17.045225 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 19:00:17.045234 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 9 19:00:17.045243 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 19:00:17.045252 kernel: Initialise system trusted keyrings
Feb 9 19:00:17.045260 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 9 19:00:17.045270 kernel: Key type asymmetric registered
Feb 9 19:00:17.045279 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:00:17.045288 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:00:17.045298 kernel: io scheduler mq-deadline registered
Feb 9 19:00:17.045307 kernel: io scheduler kyber registered
Feb 9 19:00:17.045318 kernel: io scheduler bfq registered
Feb 9 19:00:17.045325 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:00:17.045335 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:00:17.045343 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:00:17.045353 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 19:00:17.045361 kernel: i8042: PNP: No PS/2 controller found.
Feb 9 19:00:17.045506 kernel: rtc_cmos 00:02: registered as rtc0
Feb 9 19:00:17.045602 kernel: rtc_cmos 00:02: setting system clock to 2024-02-09T19:00:16 UTC (1707505216)
Feb 9 19:00:17.045686 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 9 19:00:17.045697 kernel: fail to initialize ptp_kvm
Feb 9 19:00:17.045705 kernel: intel_pstate: CPU model not supported
Feb 9 19:00:17.045712 kernel: efifb: probing for efifb
Feb 9 19:00:17.045721 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 19:00:17.045730 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 19:00:17.045737 kernel: efifb: scrolling: redraw
Feb 9 19:00:17.045746 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 19:00:17.045754 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:00:17.045761 kernel: fb0: EFI VGA frame buffer device
Feb 9 19:00:17.045772 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:00:17.045782 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:00:17.045790 kernel: Segment Routing with IPv6
Feb 9 19:00:17.045798 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:00:17.045807 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:00:17.045818 kernel: Key type dns_resolver registered
Feb 9 19:00:17.045828 kernel: IPI shorthand broadcast: enabled
Feb 9 19:00:17.045838 kernel: sched_clock: Marking stable (718712700, 23675300)->(921967500, -179579500)
Feb 9 19:00:17.045847 kernel: registered taskstats version 1
Feb 9 19:00:17.045856 kernel: Loading compiled-in X.509 certificates
Feb 9 19:00:17.045864 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:00:17.045874 kernel: Key type .fscrypt registered
Feb 9 19:00:17.045883 kernel: Key type fscrypt-provisioning registered
Feb 9 19:00:17.050929 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:00:17.050947 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:00:17.050958 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:00:17.050966 kernel: ima: No architecture policies found
Feb 9 19:00:17.050975 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:00:17.050984 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:00:17.050992 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:00:17.050999 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:00:17.051010 kernel: Run /init as init process
Feb 9 19:00:17.051017 kernel: with arguments:
Feb 9 19:00:17.051025 kernel: /init
Feb 9 19:00:17.051037 kernel: with environment:
Feb 9 19:00:17.051047 kernel: HOME=/
Feb 9 19:00:17.051055 kernel: TERM=linux
Feb 9 19:00:17.051063 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:00:17.051075 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:00:17.051086 systemd[1]: Detected virtualization microsoft.
Feb 9 19:00:17.051097 systemd[1]: Detected architecture x86-64.
Feb 9 19:00:17.051108 systemd[1]: Running in initrd.
Feb 9 19:00:17.051118 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:00:17.051126 systemd[1]: Hostname set to .
Feb 9 19:00:17.051137 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:00:17.051147 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:00:17.051154 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:00:17.051162 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:00:17.051174 systemd[1]: Reached target paths.target.
Feb 9 19:00:17.051182 systemd[1]: Reached target slices.target.
Feb 9 19:00:17.051194 systemd[1]: Reached target swap.target.
Feb 9 19:00:17.051202 systemd[1]: Reached target timers.target.
Feb 9 19:00:17.051212 systemd[1]: Listening on iscsid.socket.
Feb 9 19:00:17.051222 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:00:17.051233 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:00:17.051241 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:00:17.051252 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:00:17.051262 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:00:17.051270 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:00:17.051279 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:00:17.051289 systemd[1]: Reached target sockets.target.
Feb 9 19:00:17.051297 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:00:17.051305 systemd[1]: Finished network-cleanup.service.
Feb 9 19:00:17.051312 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:00:17.051323 systemd[1]: Starting systemd-journald.service...
Feb 9 19:00:17.051332 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:00:17.051344 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:00:17.051352 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:00:17.051360 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:00:17.051370 kernel: audit: type=1130 audit(1707505217.043:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.051384 systemd-journald[183]: Journal started
Feb 9 19:00:17.051443 systemd-journald[183]: Runtime Journal (/run/log/journal/f0d7c07d72cf41d399f2831f23dfc77b) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:00:17.043000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.029967 systemd-modules-load[184]: Inserted module 'overlay'
Feb 9 19:00:17.061927 systemd[1]: Started systemd-journald.service.
Feb 9 19:00:17.070358 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:00:17.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.083700 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:00:17.095260 kernel: audit: type=1130 audit(1707505217.069:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.095290 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:00:17.097090 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:00:17.099753 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:00:17.111906 kernel: Bridge firewalling registered
Feb 9 19:00:17.115475 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 9 19:00:17.072000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.122676 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:00:17.152996 kernel: audit: type=1130 audit(1707505217.072:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.153030 kernel: audit: type=1130 audit(1707505217.085:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.085000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.134315 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:00:17.161169 systemd-resolved[185]: Positive Trust Anchors:
Feb 9 19:00:17.200470 kernel: audit: type=1130 audit(1707505217.134:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.200505 kernel: audit: type=1130 audit(1707505217.138:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.134000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.051187 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:00:17.161234 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:00:17.217308 kernel: SCSI subsystem initialized
Feb 9 19:00:17.164671 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 9 19:00:17.178384 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:00:17.192629 systemd[1]: Started systemd-resolved.service.
Feb 9 19:00:17.194487 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:00:17.194000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.236308 kernel: audit: type=1130 audit(1707505217.194:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:17.236348 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:00:17.242080 dracut-cmdline[202]: dracut-dracut-053 Feb 9 19:00:17.245965 dracut-cmdline[202]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:00:17.259951 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:00:17.259978 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:00:17.269356 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 9 19:00:17.271036 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:00:17.275000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.276491 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:00:17.289533 kernel: audit: type=1130 audit(1707505217.275:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.297878 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:00:17.302000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.313913 kernel: audit: type=1130 audit(1707505217.302:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:17.328912 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:00:17.341916 kernel: iscsi: registered transport (tcp) Feb 9 19:00:17.366306 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:00:17.366378 kernel: QLogic iSCSI HBA Driver Feb 9 19:00:17.395496 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:00:17.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.399849 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:00:17.449918 kernel: raid6: avx512x4 gen() 18399 MB/s Feb 9 19:00:17.469905 kernel: raid6: avx512x4 xor() 7779 MB/s Feb 9 19:00:17.489903 kernel: raid6: avx512x2 gen() 18297 MB/s Feb 9 19:00:17.509907 kernel: raid6: avx512x2 xor() 28850 MB/s Feb 9 19:00:17.529902 kernel: raid6: avx512x1 gen() 18234 MB/s Feb 9 19:00:17.549902 kernel: raid6: avx512x1 xor() 25742 MB/s Feb 9 19:00:17.569904 kernel: raid6: avx2x4 gen() 18099 MB/s Feb 9 19:00:17.589904 kernel: raid6: avx2x4 xor() 7724 MB/s Feb 9 19:00:17.609907 kernel: raid6: avx2x2 gen() 18005 MB/s Feb 9 19:00:17.629905 kernel: raid6: avx2x2 xor() 21517 MB/s Feb 9 19:00:17.649905 kernel: raid6: avx2x1 gen() 13936 MB/s Feb 9 19:00:17.668904 kernel: raid6: avx2x1 xor() 18795 MB/s Feb 9 19:00:17.688907 kernel: raid6: sse2x4 gen() 11424 MB/s Feb 9 19:00:17.708901 kernel: raid6: sse2x4 xor() 7231 MB/s Feb 9 19:00:17.727903 kernel: raid6: sse2x2 gen() 12777 MB/s Feb 9 19:00:17.747906 kernel: raid6: sse2x2 xor() 7528 MB/s Feb 9 19:00:17.767904 kernel: raid6: sse2x1 gen() 11393 MB/s Feb 9 19:00:17.790711 kernel: raid6: sse2x1 xor() 5864 MB/s Feb 9 19:00:17.790741 kernel: raid6: using algorithm avx512x4 gen() 18399 MB/s Feb 9 19:00:17.790754 kernel: raid6: .... 
xor() 7779 MB/s, rmw enabled Feb 9 19:00:17.793980 kernel: raid6: using avx512x2 recovery algorithm Feb 9 19:00:17.812920 kernel: xor: automatically using best checksumming function avx Feb 9 19:00:17.908918 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:00:17.917109 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:00:17.918000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.919000 audit: BPF prog-id=7 op=LOAD Feb 9 19:00:17.919000 audit: BPF prog-id=8 op=LOAD Feb 9 19:00:17.921173 systemd[1]: Starting systemd-udevd.service... Feb 9 19:00:17.935538 systemd-udevd[387]: Using default interface naming scheme 'v252'. Feb 9 19:00:17.942039 systemd[1]: Started systemd-udevd.service. Feb 9 19:00:17.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.949929 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:00:17.964640 dracut-pre-trigger[399]: rd.md=0: removing MD RAID activation Feb 9 19:00:17.992260 systemd[1]: Finished dracut-pre-trigger.service. Feb 9 19:00:17.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:17.995710 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:00:18.031425 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:00:18.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:18.075914 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:00:18.095914 kernel: hv_vmbus: Vmbus version:5.2 Feb 9 19:00:18.105913 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 9 19:00:18.110752 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 9 19:00:18.115916 kernel: hv_vmbus: registering driver hv_netvsc Feb 9 19:00:18.134914 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 19:00:18.142910 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:00:18.142964 kernel: AES CTR mode by8 optimization enabled Feb 9 19:00:18.144917 kernel: hv_vmbus: registering driver hid_hyperv Feb 9 19:00:18.145906 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 9 19:00:18.145944 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 9 19:00:18.176927 kernel: hv_vmbus: registering driver hv_storvsc Feb 9 19:00:18.186010 kernel: scsi host1: storvsc_host_t Feb 9 19:00:18.186221 kernel: scsi host0: storvsc_host_t Feb 9 19:00:18.193912 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 9 19:00:18.193985 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 9 19:00:18.223375 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 9 19:00:18.223652 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:00:18.224912 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 9 19:00:18.237539 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 9 19:00:18.237770 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 19:00:18.246504 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 19:00:18.246746 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 9 19:00:18.246925 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 9 
19:00:18.255918 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:18.260913 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 19:00:18.275911 kernel: hv_netvsc 000d3ad8-fa42-000d-3ad8-fa42000d3ad8 eth0: VF slot 1 added Feb 9 19:00:18.288913 kernel: hv_vmbus: registering driver hv_pci Feb 9 19:00:18.293905 kernel: hv_pci 9031be1b-915f-48e8-b829-55698422d823: PCI VMBus probing: Using version 0x10004 Feb 9 19:00:18.304836 kernel: hv_pci 9031be1b-915f-48e8-b829-55698422d823: PCI host bridge to bus 915f:00 Feb 9 19:00:18.305045 kernel: pci_bus 915f:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 9 19:00:18.305182 kernel: pci_bus 915f:00: No busn resource found for root bus, will use [bus 00-ff] Feb 9 19:00:18.314162 kernel: pci 915f:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 9 19:00:18.322359 kernel: pci 915f:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 9 19:00:18.338358 kernel: pci 915f:00:02.0: enabling Extended Tags Feb 9 19:00:18.351019 kernel: pci 915f:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 915f:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 9 19:00:18.359390 kernel: pci_bus 915f:00: busn_res: [bus 00-ff] end is updated to 00 Feb 9 19:00:18.359585 kernel: pci 915f:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 9 19:00:18.450923 kernel: mlx5_core 915f:00:02.0: firmware version: 14.30.1350 Feb 9 19:00:18.610915 kernel: mlx5_core 915f:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 19:00:18.665741 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:00:18.734911 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (440) Feb 9 19:00:18.749161 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 9 19:00:18.764494 kernel: mlx5_core 915f:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 9 19:00:18.764689 kernel: mlx5_core 915f:00:02.0: mlx5e_tc_post_act_init:40:(pid 190): firmware level support is missing Feb 9 19:00:18.776313 kernel: hv_netvsc 000d3ad8-fa42-000d-3ad8-fa42000d3ad8 eth0: VF registering: eth1 Feb 9 19:00:18.776496 kernel: mlx5_core 915f:00:02.0 eth1: joined to eth0 Feb 9 19:00:18.788912 kernel: mlx5_core 915f:00:02.0 enP37215s1: renamed from eth1 Feb 9 19:00:18.920163 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:00:18.961189 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:00:18.966958 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:00:18.973802 systemd[1]: Starting disk-uuid.service... Feb 9 19:00:18.985911 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:18.992911 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:20.004916 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:20.005573 disk-uuid[559]: The operation has completed successfully. Feb 9 19:00:20.078111 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:00:20.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.078217 systemd[1]: Finished disk-uuid.service. Feb 9 19:00:20.093608 systemd[1]: Starting verity-setup.service... Feb 9 19:00:20.133966 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:00:20.373185 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:00:20.378820 systemd[1]: Mounting sysusr-usr.mount... 
Feb 9 19:00:20.382396 systemd[1]: Finished verity-setup.service. Feb 9 19:00:20.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.457930 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:00:20.457347 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:00:20.460612 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:00:20.464532 systemd[1]: Starting ignition-setup.service... Feb 9 19:00:20.469553 systemd[1]: Starting parse-ip-for-networkd.service... Feb 9 19:00:20.484135 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:20.484181 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:20.484192 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:20.540846 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:00:20.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.544000 audit: BPF prog-id=9 op=LOAD Feb 9 19:00:20.546314 systemd[1]: Starting systemd-networkd.service... Feb 9 19:00:20.569390 systemd-networkd[800]: lo: Link UP Feb 9 19:00:20.569401 systemd-networkd[800]: lo: Gained carrier Feb 9 19:00:20.572093 systemd-networkd[800]: Enumeration completed Feb 9 19:00:20.573693 systemd[1]: Started systemd-networkd.service. Feb 9 19:00:20.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.574050 systemd-networkd[800]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 9 19:00:20.577031 systemd[1]: Reached target network.target. Feb 9 19:00:20.592000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.581365 systemd[1]: Starting iscsiuio.service... Feb 9 19:00:20.588738 systemd[1]: Started iscsiuio.service. Feb 9 19:00:20.602119 iscsid[809]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:00:20.602119 iscsid[809]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 9 19:00:20.602119 iscsid[809]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:00:20.602119 iscsid[809]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:00:20.602119 iscsid[809]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:00:20.602119 iscsid[809]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:00:20.602119 iscsid[809]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:00:20.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.594701 systemd[1]: Starting iscsid.service... Feb 9 19:00:20.604003 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:00:20.604458 systemd[1]: Started iscsid.service. Feb 9 19:00:20.615400 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:00:20.644351 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:00:20.648000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.648281 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:00:20.651857 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:00:20.655430 systemd[1]: Reached target remote-fs.target. Feb 9 19:00:20.659775 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:00:20.668908 kernel: mlx5_core 915f:00:02.0 enP37215s1: Link up Feb 9 19:00:20.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.672531 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:00:20.679685 systemd[1]: Finished ignition-setup.service. Feb 9 19:00:20.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.682553 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 9 19:00:20.737922 kernel: hv_netvsc 000d3ad8-fa42-000d-3ad8-fa42000d3ad8 eth0: Data path switched to VF: enP37215s1 Feb 9 19:00:20.742911 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:00:20.743084 systemd-networkd[800]: enP37215s1: Link UP Feb 9 19:00:20.743230 systemd-networkd[800]: eth0: Link UP Feb 9 19:00:20.743438 systemd-networkd[800]: eth0: Gained carrier Feb 9 19:00:20.748598 systemd-networkd[800]: enP37215s1: Gained carrier Feb 9 19:00:20.777006 systemd-networkd[800]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:00:22.115128 systemd-networkd[800]: eth0: Gained IPv6LL Feb 9 19:00:23.870376 ignition[824]: Ignition 2.14.0 Feb 9 19:00:23.870393 ignition[824]: Stage: fetch-offline Feb 9 19:00:23.870490 ignition[824]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:23.870539 ignition[824]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:23.946951 ignition[824]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:23.947151 ignition[824]: parsed url from cmdline: "" Feb 9 19:00:23.947155 ignition[824]: no config URL provided Feb 9 19:00:23.947164 ignition[824]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:00:23.976348 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 9 19:00:23.976382 kernel: audit: type=1130 audit(1707505223.958:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:23.953719 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:00:23.947172 ignition[824]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:00:23.959190 systemd[1]: Starting ignition-fetch.service... Feb 9 19:00:23.947178 ignition[824]: failed to fetch config: resource requires networking Feb 9 19:00:23.948472 ignition[824]: Ignition finished successfully Feb 9 19:00:23.968452 ignition[830]: Ignition 2.14.0 Feb 9 19:00:23.968460 ignition[830]: Stage: fetch Feb 9 19:00:23.968570 ignition[830]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:23.968599 ignition[830]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:23.974903 ignition[830]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:23.975099 ignition[830]: parsed url from cmdline: "" Feb 9 19:00:23.975104 ignition[830]: no config URL provided Feb 9 19:00:23.975111 ignition[830]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:00:23.975124 ignition[830]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:00:23.975155 ignition[830]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 19:00:24.108105 ignition[830]: GET result: OK Feb 9 19:00:24.108231 ignition[830]: config has been read from IMDS userdata Feb 9 19:00:24.108291 ignition[830]: parsing config with SHA512: 7ea6956a2074c7261f8547db9d7794eb1b3328e6fe01b513231eb0b2eb49a8d304d21a2d7b435272f4b6b7ecd966190c9255aa99ef8b5b15a77a56058d5bf4b6 Feb 9 19:00:24.136202 unknown[830]: fetched base config from "system" Feb 9 19:00:24.136935 unknown[830]: fetched base config from "system" Feb 9 19:00:24.136944 unknown[830]: fetched user config from "azure" Feb 9 19:00:24.144000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch 
comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:24.139352 ignition[830]: fetch: fetch complete Feb 9 19:00:24.159187 kernel: audit: type=1130 audit(1707505224.144:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:24.140857 systemd[1]: Finished ignition-fetch.service. Feb 9 19:00:24.139359 ignition[830]: fetch: fetch passed Feb 9 19:00:24.145956 systemd[1]: Starting ignition-kargs.service... Feb 9 19:00:24.139454 ignition[830]: Ignition finished successfully Feb 9 19:00:24.165759 ignition[836]: Ignition 2.14.0 Feb 9 19:00:24.165767 ignition[836]: Stage: kargs Feb 9 19:00:24.165899 ignition[836]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:24.165929 ignition[836]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:24.169275 ignition[836]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:24.197471 kernel: audit: type=1130 audit(1707505224.178:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:24.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:24.173720 ignition[836]: kargs: kargs passed Feb 9 19:00:24.176654 systemd[1]: Finished ignition-kargs.service. Feb 9 19:00:24.203000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:24.173791 ignition[836]: Ignition finished successfully Feb 9 19:00:24.220164 kernel: audit: type=1130 audit(1707505224.203:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:24.179604 systemd[1]: Starting ignition-disks.service... Feb 9 19:00:24.193812 ignition[842]: Ignition 2.14.0 Feb 9 19:00:24.201737 systemd[1]: Finished ignition-disks.service. Feb 9 19:00:24.193820 ignition[842]: Stage: disks Feb 9 19:00:24.203634 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:00:24.193971 ignition[842]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:24.216371 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:00:24.194004 ignition[842]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:24.220158 systemd[1]: Reached target local-fs.target. Feb 9 19:00:24.197958 ignition[842]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:24.223571 systemd[1]: Reached target sysinit.target. Feb 9 19:00:24.200476 ignition[842]: disks: disks passed Feb 9 19:00:24.226468 systemd[1]: Reached target basic.target. Feb 9 19:00:24.200535 ignition[842]: Ignition finished successfully Feb 9 19:00:24.246411 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:00:24.360310 systemd-fsck[850]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 9 19:00:24.369471 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:00:24.371000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:24.384056 kernel: audit: type=1130 audit(1707505224.371:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:24.384441 systemd[1]: Mounting sysroot.mount... Feb 9 19:00:24.402918 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:00:24.403437 systemd[1]: Mounted sysroot.mount. Feb 9 19:00:24.405319 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:00:24.447829 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:00:24.450767 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 19:00:24.453780 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:00:24.453816 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:00:24.460301 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:00:24.494493 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:00:24.497942 systemd[1]: Starting initrd-setup-root.service... Feb 9 19:00:24.512915 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (861) Feb 9 19:00:24.512964 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:24.519284 initrd-setup-root[866]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:00:24.526678 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:24.526707 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:24.529196 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 19:00:24.548114 initrd-setup-root[892]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:00:24.552173 initrd-setup-root[900]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:00:24.556817 initrd-setup-root[908]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:00:25.060072 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:00:25.078363 kernel: audit: type=1130 audit(1707505225.061:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:25.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:25.063301 systemd[1]: Starting ignition-mount.service... Feb 9 19:00:25.075139 systemd[1]: Starting sysroot-boot.service... Feb 9 19:00:25.083183 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:00:25.083293 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:00:25.105605 systemd[1]: Finished sysroot-boot.service. Feb 9 19:00:25.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:25.122562 ignition[927]: INFO : Ignition 2.14.0 Feb 9 19:00:25.122562 ignition[927]: INFO : Stage: mount Feb 9 19:00:25.122562 ignition[927]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:25.122562 ignition[927]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:25.133631 kernel: audit: type=1130 audit(1707505225.111:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:25.137150 ignition[927]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:25.142487 ignition[927]: INFO : mount: mount passed Feb 9 19:00:25.144253 ignition[927]: INFO : Ignition finished successfully Feb 9 19:00:25.146941 systemd[1]: Finished ignition-mount.service. Feb 9 19:00:25.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:25.159907 kernel: audit: type=1130 audit(1707505225.147:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:25.910907 coreos-metadata[860]: Feb 09 19:00:25.910 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 19:00:25.930168 coreos-metadata[860]: Feb 09 19:00:25.930 INFO Fetch successful Feb 9 19:00:25.963712 coreos-metadata[860]: Feb 09 19:00:25.963 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 19:00:25.977703 coreos-metadata[860]: Feb 09 19:00:25.977 INFO Fetch successful Feb 9 19:00:25.996129 coreos-metadata[860]: Feb 09 19:00:25.996 INFO wrote hostname ci-3510.3.2-a-075ad2fc80 to /sysroot/etc/hostname Feb 9 19:00:26.002214 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 19:00:26.004000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:26.016448 systemd[1]: Starting ignition-files.service... Feb 9 19:00:26.020062 kernel: audit: type=1130 audit(1707505226.004:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:26.027511 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:00:26.046451 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (939) Feb 9 19:00:26.046504 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:26.046519 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:26.053157 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:26.058435 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 9 19:00:26.072001 ignition[958]: INFO : Ignition 2.14.0
Feb 9 19:00:26.072001 ignition[958]: INFO : Stage: files
Feb 9 19:00:26.075819 ignition[958]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:26.075819 ignition[958]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:26.084029 ignition[958]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:26.094230 ignition[958]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:00:26.097300 ignition[958]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:00:26.097300 ignition[958]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:00:26.148639 ignition[958]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:00:26.152384 ignition[958]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:00:26.174170 unknown[958]: wrote ssh authorized keys file for user: core
Feb 9 19:00:26.177077 ignition[958]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:00:26.180390 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:00:26.184644 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:00:26.944927 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 19:00:27.340485 ignition[958]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 9 19:00:27.347975 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:00:27.347975 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:00:27.347975 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 9 19:00:32.729756 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:00:32.838120 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 9 19:00:32.844158 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:00:32.844158 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 9 19:00:33.345291 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:00:33.521355 ignition[958]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 9 19:00:33.528783 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:00:33.528783 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:00:33.537032 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubectl: attempt #1
Feb 9 19:00:33.738883 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 19:00:34.030262 ignition[958]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 33cf3f6e37bcee4dff7ce14ab933c605d07353d4e31446dd2b52c3f05e0b150b60e531f6069f112d8a76331322a72b593537531e62104cfc7c70cb03d46f76b3
Feb 9 19:00:34.036955 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 9 19:00:34.036955 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:00:34.036955 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:00:34.167153 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 19:00:34.447297 ignition[958]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 9 19:00:34.455092 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:00:34.455092 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:00:34.455092 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:00:34.582755 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 9 19:00:35.094748 ignition[958]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 19:00:35.104862 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:00:35.174446 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (961)
Feb 9 19:00:35.125902 systemd[1]: mnt-oem3969267844.mount: Deactivated successfully.
Feb 9 19:00:35.174000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.185548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3969267844"
Feb 9 19:00:35.185548 ignition[958]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3969267844": device or resource busy
Feb 9 19:00:35.185548 ignition[958]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3969267844", trying btrfs: device or resource busy
Feb 9 19:00:35.185548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3969267844"
Feb 9 19:00:35.185548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3969267844"
Feb 9 19:00:35.185548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem3969267844"
Feb 9 19:00:35.185548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem3969267844"
Feb 9 19:00:35.185548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 19:00:35.185548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:00:35.185548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:00:35.185548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3614768164"
Feb 9 19:00:35.185548 ignition[958]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3614768164": device or resource busy
Feb 9 19:00:35.185548 ignition[958]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3614768164", trying btrfs: device or resource busy
Feb 9 19:00:35.185548 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3614768164"
Feb 9 19:00:35.294987 kernel: audit: type=1130 audit(1707505235.174:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.295024 kernel: audit: type=1130 audit(1707505235.235:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.295042 kernel: audit: type=1131 audit(1707505235.235:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.295061 kernel: audit: type=1130 audit(1707505235.265:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.235000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.146943 systemd[1]: mnt-oem3614768164.mount: Deactivated successfully.
Feb 9 19:00:35.297398 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3614768164"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem3614768164"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem3614768164"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(17): [started] processing unit "waagent.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(17): [finished] processing unit "waagent.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(18): [started] processing unit "nvidia.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(18): [finished] processing unit "nvidia.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(19): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(19): op(1a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(19): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(1b): [started] processing unit "prepare-critools.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(1b): [finished] processing unit "prepare-critools.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(1d): [started] processing unit "prepare-helm.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:00:35.297398 ignition[958]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 9 19:00:35.355376 kernel: audit: type=1130 audit(1707505235.309:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.355412 kernel: audit: type=1131 audit(1707505235.311:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.172995 systemd[1]: Finished ignition-files.service.
Feb 9 19:00:35.355841 ignition[958]: INFO : files: op(1d): [finished] processing unit "prepare-helm.service"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: op(1f): [started] setting preset to enabled for "waagent.service"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: op(1f): [finished] setting preset to enabled for "waagent.service"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: op(20): [started] setting preset to enabled for "nvidia.service"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: op(20): [finished] setting preset to enabled for "nvidia.service"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: op(21): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: op(22): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: op(23): [started] setting preset to enabled for "prepare-helm.service"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-helm.service"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:00:35.355841 ignition[958]: INFO : files: files passed
Feb 9 19:00:35.355841 ignition[958]: INFO : Ignition finished successfully
Feb 9 19:00:35.218907 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:00:35.437303 initrd-setup-root-after-ignition[983]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:00:35.220986 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:00:35.222034 systemd[1]: Starting ignition-quench.service...
Feb 9 19:00:35.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.227728 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:00:35.462789 kernel: audit: type=1130 audit(1707505235.447:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.227816 systemd[1]: Finished ignition-quench.service.
Feb 9 19:00:35.262842 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:00:35.268065 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:00:35.284802 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:00:35.306007 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:00:35.306111 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:00:35.311430 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:00:35.336967 systemd[1]: Reached target initrd.target.
Feb 9 19:00:35.338758 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:00:35.339699 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:00:35.490000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.446445 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:00:35.505410 kernel: audit: type=1131 audit(1707505235.490:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.466022 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:00:35.474316 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:00:35.478403 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:00:35.482023 systemd[1]: Stopped target timers.target.
Feb 9 19:00:35.485540 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:00:35.485693 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:00:35.500408 systemd[1]: Stopped target initrd.target.
Feb 9 19:00:35.505569 systemd[1]: Stopped target basic.target.
Feb 9 19:00:35.509109 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:00:35.512507 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:00:35.516391 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:00:35.542000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.519811 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:00:35.556957 kernel: audit: type=1131 audit(1707505235.542:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.523815 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:00:35.527263 systemd[1]: Stopped target sysinit.target.
Feb 9 19:00:35.560000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.530337 systemd[1]: Stopped target local-fs.target.
Feb 9 19:00:35.576043 kernel: audit: type=1131 audit(1707505235.560:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.532093 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:00:35.575000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.535397 systemd[1]: Stopped target swap.target.
Feb 9 19:00:35.579000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.538977 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:00:35.539134 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:00:35.594000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.597000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.552724 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:00:35.605000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.557032 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:00:35.608593 ignition[996]: INFO : Ignition 2.14.0
Feb 9 19:00:35.608593 ignition[996]: INFO : Stage: umount
Feb 9 19:00:35.608593 ignition[996]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:00:35.608593 ignition[996]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:00:35.557191 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:00:35.622000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.622000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.626000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.626218 ignition[996]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:00:35.626218 ignition[996]: INFO : umount: umount passed
Feb 9 19:00:35.626218 ignition[996]: INFO : Ignition finished successfully
Feb 9 19:00:35.632000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.635000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.570723 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:00:35.570901 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:00:35.643000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.576093 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:00:35.576252 systemd[1]: Stopped ignition-files.service.
Feb 9 19:00:35.579695 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 9 19:00:35.579849 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 9 19:00:35.584264 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:00:35.587591 systemd[1]: Stopping iscsiuio.service...
Feb 9 19:00:35.590321 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:00:35.592933 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:00:35.593112 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:00:35.595422 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:00:35.595563 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:00:35.599700 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:00:35.599818 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:00:35.615264 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:00:35.615377 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:00:35.622885 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:00:35.623001 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:00:35.626961 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:00:35.627016 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:00:35.632281 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:00:35.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.632332 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:00:35.636094 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:00:35.636146 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:00:35.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.638066 systemd[1]: Stopped target network.target.
Feb 9 19:00:35.639682 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:00:35.639737 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:00:35.646034 systemd[1]: Stopped target paths.target.
Feb 9 19:00:35.714000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.647852 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:00:35.656130 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:00:35.678328 systemd[1]: Stopped target slices.target.
Feb 9 19:00:35.681688 systemd[1]: Stopped target sockets.target.
Feb 9 19:00:35.683360 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:00:35.683402 systemd[1]: Closed iscsid.socket.
Feb 9 19:00:35.686830 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:00:35.686878 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:00:35.689930 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:00:35.689988 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:00:35.694054 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:00:35.697138 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:00:35.699960 systemd-networkd[800]: eth0: DHCPv6 lease lost
Feb 9 19:00:35.700432 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:00:35.701332 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:00:35.701443 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:00:35.711236 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:00:35.711342 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:00:35.733502 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:00:35.744021 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:00:35.752000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.753404 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:00:35.755000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:00:35.755000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:00:35.753458 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:00:35.760000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.757251 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:00:35.757312 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:00:35.761703 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:00:35.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.766212 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:00:35.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.766278 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:00:35.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.770057 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:00:35.770112 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:00:35.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.774350 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:00:35.774401 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:00:35.778384 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:00:35.782291 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:00:35.782423 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:00:35.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.788600 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:00:35.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.806000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.788650 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:00:35.793203 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:00:35.812000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.793241 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:00:35.796979 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:00:35.797034 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:00:35.800540 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:00:35.800590 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:00:35.802346 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:00:35.802386 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:00:35.807422 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:00:35.835000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.835000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.810201 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 19:00:35.810267 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 19:00:35.812608 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:00:35.812659 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:00:35.814674 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:00:35.814726 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:00:35.822887 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:00:35.831765 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:00:35.862907 kernel: hv_netvsc 000d3ad8-fa42-000d-3ad8-fa42000d3ad8 eth0: Data path switched from VF: enP37215s1
Feb 9 19:00:35.885000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:35.883836 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:00:35.883980 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:00:35.886252 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:00:35.895283 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:00:35.906520 systemd[1]: Switching root.
Feb 9 19:00:35.929806 iscsid[809]: iscsid shutting down.
Feb 9 19:00:35.931825 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 9 19:00:35.931914 systemd-journald[183]: Journal stopped
Feb 9 19:00:50.153404 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:00:50.153450 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:00:50.153470 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 19:00:50.153486 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 19:00:50.153502 kernel: SELinux: policy capability open_perms=1
Feb 9 19:00:50.153518 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 19:00:50.153536 kernel: SELinux: policy capability always_check_network=0
Feb 9 19:00:50.153554 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 19:00:50.153571 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 19:00:50.153587 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 19:00:50.153604 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 19:00:50.153622 systemd[1]: Successfully loaded SELinux policy in 300.290ms.
Feb 9 19:00:50.153645 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.276ms.
Feb 9 19:00:50.153664 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:00:50.153689 systemd[1]: Detected virtualization microsoft.
Feb 9 19:00:50.153705 systemd[1]: Detected architecture x86-64.
Feb 9 19:00:50.153723 systemd[1]: Detected first boot.
Feb 9 19:00:50.153745 systemd[1]: Hostname set to .
Feb 9 19:00:50.153762 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:00:50.153783 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:00:50.153798 kernel: kauditd_printk_skb: 41 callbacks suppressed
Feb 9 19:00:50.153816 kernel: audit: type=1400 audit(1707505240.497:89): avc: denied { associate } for pid=1029 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:00:50.153835 kernel: audit: type=1300 audit(1707505240.497:89): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1012 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:50.153852 kernel: audit: type=1327 audit(1707505240.497:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:00:50.153872 kernel: audit: type=1400 audit(1707505240.506:90): avc: denied { associate } for pid=1029 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 19:00:50.153908 kernel: audit: type=1300 audit(1707505240.506:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1012 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:50.153925 kernel: audit: type=1307 audit(1707505240.506:90): cwd="/"
Feb 9 19:00:50.153942 kernel: audit: type=1302 audit(1707505240.506:90): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:00:50.153959 kernel: audit: type=1302 audit(1707505240.506:90): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:00:50.153977 kernel: audit: type=1327 audit(1707505240.506:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:00:50.153996 systemd[1]: Populated /etc with preset unit settings.
Feb 9 19:00:50.154010 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:00:50.154024 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:00:50.154039 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:00:50.154053 kernel: audit: type=1334 audit(1707505249.619:91): prog-id=12 op=LOAD
Feb 9 19:00:50.154068 kernel: audit: type=1334 audit(1707505249.619:92): prog-id=3 op=UNLOAD
Feb 9 19:00:50.154083 kernel: audit: type=1334 audit(1707505249.624:93): prog-id=13 op=LOAD
Feb 9 19:00:50.154095 kernel: audit: type=1334 audit(1707505249.629:94): prog-id=14 op=LOAD
Feb 9 19:00:50.154113 kernel: audit: type=1334 audit(1707505249.629:95): prog-id=4 op=UNLOAD
Feb 9 19:00:50.154129 kernel: audit: type=1334 audit(1707505249.629:96): prog-id=5 op=UNLOAD
Feb 9 19:00:50.154147 kernel: audit: type=1334 audit(1707505249.638:97): prog-id=15 op=LOAD
Feb 9 19:00:50.154163 kernel: audit: type=1334 audit(1707505249.638:98): prog-id=12 op=UNLOAD
Feb 9 19:00:50.154177 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 19:00:50.154191 kernel: audit: type=1334 audit(1707505249.643:99): prog-id=16 op=LOAD
Feb 9 19:00:50.154205 kernel: audit: type=1334 audit(1707505249.647:100): prog-id=17 op=LOAD
Feb 9 19:00:50.154220 systemd[1]: Stopped iscsid.service.
Feb 9 19:00:50.154239 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 19:00:50.154254 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 19:00:50.154266 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 19:00:50.154278 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 19:00:50.154290 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 19:00:50.154302 systemd[1]: Created slice system-getty.slice.
Feb 9 19:00:50.154314 systemd[1]: Created slice system-modprobe.slice.
Feb 9 19:00:50.154326 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 19:00:50.154338 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 19:00:50.154353 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 19:00:50.154364 systemd[1]: Created slice user.slice.
Feb 9 19:00:50.154377 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:00:50.154390 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 19:00:50.154400 systemd[1]: Set up automount boot.automount.
Feb 9 19:00:50.154410 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 19:00:50.154419 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 19:00:50.154429 systemd[1]: Stopped target initrd-fs.target.
Feb 9 19:00:50.154443 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 19:00:50.154453 systemd[1]: Reached target integritysetup.target.
Feb 9 19:00:50.154465 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:00:50.154477 systemd[1]: Reached target remote-fs.target.
Feb 9 19:00:50.154487 systemd[1]: Reached target slices.target.
Feb 9 19:00:50.154500 systemd[1]: Reached target swap.target.
Feb 9 19:00:50.154513 systemd[1]: Reached target torcx.target.
Feb 9 19:00:50.154522 systemd[1]: Reached target veritysetup.target.
Feb 9 19:00:50.154537 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:00:50.154550 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:00:50.154561 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:00:50.154573 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:00:50.154585 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:00:50.154601 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:00:50.154614 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:00:50.154625 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:00:50.154637 systemd[1]: Mounting media.mount...
Feb 9 19:00:50.154650 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:00:50.154661 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 19:00:50.154673 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 19:00:50.154685 systemd[1]: Mounting tmp.mount...
Feb 9 19:00:50.154697 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 19:00:50.154711 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 19:00:50.154722 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:00:50.154735 systemd[1]: Starting modprobe@configfs.service...
Feb 9 19:00:50.154745 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 19:00:50.154757 systemd[1]: Starting modprobe@drm.service...
Feb 9 19:00:50.154767 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 19:00:50.154779 systemd[1]: Starting modprobe@fuse.service...
Feb 9 19:00:50.154791 systemd[1]: Starting modprobe@loop.service...
Feb 9 19:00:50.154803 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 19:00:50.154818 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 19:00:50.154830 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 19:00:50.154842 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 19:00:50.154854 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 19:00:50.154864 systemd[1]: Stopped systemd-journald.service.
Feb 9 19:00:50.154877 systemd[1]: Starting systemd-journald.service...
Feb 9 19:00:50.154900 kernel: loop: module loaded
Feb 9 19:00:50.154913 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:00:50.154928 systemd[1]: Starting systemd-network-generator.service...
Feb 9 19:00:50.159052 kernel: fuse: init (API version 7.34)
Feb 9 19:00:50.159098 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 19:00:50.159135 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:00:50.159151 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 19:00:50.159166 systemd[1]: Stopped verity-setup.service.
Feb 9 19:00:50.159181 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:00:50.159194 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:00:50.159209 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:00:50.159228 systemd[1]: Mounted media.mount.
Feb 9 19:00:50.159239 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:00:50.159253 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:00:50.159264 systemd[1]: Mounted tmp.mount.
Feb 9 19:00:50.159276 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 19:00:50.159287 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:00:50.159300 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:00:50.159322 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:00:50.159333 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:00:50.159344 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:00:50.159357 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:00:50.159372 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:00:50.159385 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:00:50.159400 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:00:50.159413 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:00:50.159427 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:00:50.159447 systemd-journald[1126]: Journal started
Feb 9 19:00:50.159563 systemd-journald[1126]: Runtime Journal (/run/log/journal/3445b41db4184aa0b022b67c3791b437) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:00:38.268000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 19:00:38.994000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:00:39.036000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:00:39.036000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:00:39.036000 audit: BPF prog-id=10 op=LOAD
Feb 9 19:00:39.036000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 19:00:39.036000 audit: BPF prog-id=11 op=LOAD
Feb 9 19:00:39.036000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 19:00:40.497000 audit[1029]: AVC avc: denied { associate } for pid=1029 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:00:40.497000 audit[1029]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1012 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:40.497000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:00:40.506000 audit[1029]: AVC avc: denied { associate } for pid=1029 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 19:00:40.506000 audit[1029]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1012 pid=1029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:40.506000 audit: CWD cwd="/"
Feb 9 19:00:40.506000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:00:40.506000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:00:40.506000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:00:49.619000 audit: BPF prog-id=12 op=LOAD
Feb 9 19:00:49.619000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 19:00:49.624000 audit: BPF prog-id=13 op=LOAD
Feb 9 19:00:49.629000 audit: BPF prog-id=14 op=LOAD
Feb 9 19:00:49.629000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 19:00:49.629000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:00:49.638000 audit: BPF prog-id=15 op=LOAD
Feb 9 19:00:49.638000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 19:00:49.643000 audit: BPF prog-id=16 op=LOAD
Feb 9 19:00:49.647000 audit: BPF prog-id=17 op=LOAD
Feb 9 19:00:49.647000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 19:00:49.647000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 19:00:49.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:49.674000 audit: BPF prog-id=15 op=UNLOAD
Feb 9 19:00:49.676000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:49.685000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:49.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:49.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:49.988000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:49.996000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:49.996000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:49.997000 audit: BPF prog-id=18 op=LOAD
Feb 9 19:00:49.997000 audit: BPF prog-id=19 op=LOAD
Feb 9 19:00:49.997000 audit: BPF prog-id=20 op=LOAD
Feb 9 19:00:49.997000 audit: BPF prog-id=16 op=UNLOAD
Feb 9 19:00:49.997000 audit: BPF prog-id=17 op=UNLOAD
Feb 9 19:00:50.064000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.106000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.111000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.122000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.131000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.139000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.148000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.148000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.149000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:00:50.149000 audit[1126]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7fffe9052e50 a2=4000 a3=7fffe9052eec items=0 ppid=1 pid=1126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:00:50.149000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:00:50.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:49.619192 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 19:00:40.480015 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:00:49.654275 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 19:00:40.480618 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:00:40.480641 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:00:40.480679 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 19:00:40.480690 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 19:00:40.480743 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 19:00:40.480759 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 19:00:40.481337 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 19:00:40.481401 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:00:40.481418 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:00:40.481860 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 19:00:40.481914 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 19:00:40.481936 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 19:00:40.481954 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 19:00:40.481973 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 19:00:40.481989 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:40Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 19:00:48.454790 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:48Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:00:48.455036 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:48Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:00:48.455152 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:48Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:00:48.455335 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:48Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:00:48.455384 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:48Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 19:00:48.455438 /usr/lib/systemd/system-generators/torcx-generator[1029]: time="2024-02-09T19:00:48Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 19:00:50.167495 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:00:50.167549 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:00:50.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.168000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.173998 systemd[1]: Started systemd-journald.service.
Feb 9 19:00:50.173000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.175243 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:00:50.177765 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:00:50.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.178000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:00:50.180430 systemd[1]: Reached target network-pre.target.
Feb 9 19:00:50.183503 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:00:50.186789 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:00:50.188545 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:00:50.207163 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:00:50.210736 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:00:50.212765 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:00:50.214057 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:00:50.215813 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:00:50.217139 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:00:50.222862 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:00:50.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:50.225261 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:00:50.227308 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:00:50.232159 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:00:50.248941 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:00:50.251346 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:00:50.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:50.263216 systemd-journald[1126]: Time spent on flushing to /var/log/journal/3445b41db4184aa0b022b67c3791b437 is 37.704ms for 1198 entries. Feb 9 19:00:50.263216 systemd-journald[1126]: System Journal (/var/log/journal/3445b41db4184aa0b022b67c3791b437) is 8.0M, max 2.6G, 2.6G free. Feb 9 19:00:50.354612 systemd-journald[1126]: Received client request to flush runtime journal. Feb 9 19:00:50.296000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:50.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:50.295017 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:00:50.355104 udevadm[1153]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 9 19:00:50.298759 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:00:50.331246 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:00:50.355786 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:00:50.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:50.978285 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:00:50.979000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:50.982197 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:00:52.098317 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:00:52.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:52.223562 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:00:52.225000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:52.225000 audit: BPF prog-id=21 op=LOAD Feb 9 19:00:52.225000 audit: BPF prog-id=22 op=LOAD Feb 9 19:00:52.225000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:00:52.225000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:00:52.227378 systemd[1]: Starting systemd-udevd.service... Feb 9 19:00:52.245985 systemd-udevd[1157]: Using default interface naming scheme 'v252'. Feb 9 19:00:52.574559 systemd[1]: Started systemd-udevd.service. Feb 9 19:00:52.575000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:52.577000 audit: BPF prog-id=23 op=LOAD Feb 9 19:00:52.578945 systemd[1]: Starting systemd-networkd.service... Feb 9 19:00:52.622369 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:00:52.692989 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 19:00:52.701919 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:00:52.710976 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 19:00:52.711095 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 19:00:52.718818 kernel: Console: switching to colour dummy device 80x25 Feb 9 19:00:52.724887 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 19:00:52.725061 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 19:00:52.725103 kernel: hv_vmbus: registering driver hv_utils Feb 9 19:00:52.725141 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 19:00:52.725173 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 19:00:52.720000 audit[1171]: AVC avc: denied { confidentiality } for pid=1171 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:00:52.725926 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 19:00:52.885021 
kernel: hv_vmbus: registering driver hv_balloon Feb 9 19:00:52.885884 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 19:00:52.720000 audit[1171]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55794d7014a0 a1=f884 a2=7f43ae2fcbc5 a3=5 items=12 ppid=1157 pid=1171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:52.720000 audit: CWD cwd="/" Feb 9 19:00:52.720000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:52.720000 audit: PATH item=1 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:52.720000 audit: PATH item=2 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:52.720000 audit: PATH item=3 name=(null) inode=14208 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:52.720000 audit: PATH item=4 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:52.720000 audit: PATH item=5 name=(null) inode=14209 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:52.720000 audit: PATH item=6 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 
cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:52.720000 audit: PATH item=7 name=(null) inode=14210 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:52.720000 audit: PATH item=8 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:52.720000 audit: PATH item=9 name=(null) inode=14211 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:52.720000 audit: PATH item=10 name=(null) inode=14207 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:52.720000 audit: PATH item=11 name=(null) inode=14212 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:00:52.720000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:00:52.907000 audit: BPF prog-id=24 op=LOAD Feb 9 19:00:52.907000 audit: BPF prog-id=25 op=LOAD Feb 9 19:00:52.907000 audit: BPF prog-id=26 op=LOAD Feb 9 19:00:52.909060 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:00:52.950576 systemd[1]: Started systemd-userdbd.service. Feb 9 19:00:52.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:53.102790 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1159) Feb 9 19:00:53.169863 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 9 19:00:53.172494 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 9 19:00:53.220212 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:00:53.222000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:53.223965 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:00:53.321313 systemd-networkd[1166]: lo: Link UP Feb 9 19:00:53.321325 systemd-networkd[1166]: lo: Gained carrier Feb 9 19:00:53.321928 systemd-networkd[1166]: Enumeration completed Feb 9 19:00:53.322063 systemd[1]: Started systemd-networkd.service. Feb 9 19:00:53.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:53.325631 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:00:53.349287 systemd-networkd[1166]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:00:53.402782 kernel: mlx5_core 915f:00:02.0 enP37215s1: Link up Feb 9 19:00:53.438779 kernel: hv_netvsc 000d3ad8-fa42-000d-3ad8-fa42000d3ad8 eth0: Data path switched to VF: enP37215s1 Feb 9 19:00:53.441180 systemd-networkd[1166]: enP37215s1: Link UP Feb 9 19:00:53.441482 systemd-networkd[1166]: eth0: Link UP Feb 9 19:00:53.441569 systemd-networkd[1166]: eth0: Gained carrier Feb 9 19:00:53.447555 systemd-networkd[1166]: enP37215s1: Gained carrier Feb 9 19:00:53.468901 systemd-networkd[1166]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:00:53.591892 lvm[1233]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:00:53.617895 systemd[1]: Finished lvm2-activation-early.service. 
Feb 9 19:00:53.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:53.620094 systemd[1]: Reached target cryptsetup.target. Feb 9 19:00:53.623224 systemd[1]: Starting lvm2-activation.service... Feb 9 19:00:53.627902 lvm[1235]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:00:53.648972 systemd[1]: Finished lvm2-activation.service. Feb 9 19:00:53.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:53.651326 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:00:53.653495 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:00:53.653533 systemd[1]: Reached target local-fs.target. Feb 9 19:00:53.655656 systemd[1]: Reached target machines.target. Feb 9 19:00:53.658819 systemd[1]: Starting ldconfig.service... Feb 9 19:00:53.660859 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:00:53.660977 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:00:53.662317 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:00:53.665478 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:00:53.668981 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:00:53.671020 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. 
Feb 9 19:00:53.671117 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:00:53.672555 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:00:53.699985 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:00:53.897179 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1237 (bootctl) Feb 9 19:00:53.898858 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:00:53.960327 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:00:54.027093 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:00:54.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:54.070112 systemd-tmpfiles[1240]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:00:55.224370 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:00:55.225095 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:00:55.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.229954 kernel: kauditd_printk_skb: 76 callbacks suppressed Feb 9 19:00:55.230019 kernel: audit: type=1130 audit(1707505255.226:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:55.353909 systemd-networkd[1166]: eth0: Gained IPv6LL Feb 9 19:00:55.357728 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:00:55.357000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.370781 kernel: audit: type=1130 audit(1707505255.357:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.625517 systemd-fsck[1245]: fsck.fat 4.2 (2021-01-31) Feb 9 19:00:55.625517 systemd-fsck[1245]: /dev/sda1: 789 files, 115339/258078 clusters Feb 9 19:00:55.627767 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:00:55.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.632866 systemd[1]: Mounting boot.mount... Feb 9 19:00:55.642025 kernel: audit: type=1130 audit(1707505255.629:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.649186 systemd[1]: Mounted boot.mount. Feb 9 19:00:55.663673 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:00:55.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:55.676772 kernel: audit: type=1130 audit(1707505255.665:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.847693 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 9 19:00:55.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.851631 systemd[1]: Starting audit-rules.service... Feb 9 19:00:55.869762 kernel: audit: type=1130 audit(1707505255.849:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.871259 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:00:55.875422 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:00:55.894885 kernel: audit: type=1334 audit(1707505255.878:165): prog-id=27 op=LOAD Feb 9 19:00:55.878000 audit: BPF prog-id=27 op=LOAD Feb 9 19:00:55.890570 systemd[1]: Starting systemd-resolved.service... Feb 9 19:00:55.895537 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:00:55.893000 audit: BPF prog-id=28 op=LOAD Feb 9 19:00:55.903908 kernel: audit: type=1334 audit(1707505255.893:166): prog-id=28 op=LOAD Feb 9 19:00:55.902852 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:00:55.927000 audit[1262]: SYSTEM_BOOT pid=1262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.932238 systemd[1]: Finished systemd-update-utmp.service. 
Feb 9 19:00:55.948035 kernel: audit: type=1127 audit(1707505255.927:167): pid=1262 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.962919 kernel: audit: type=1130 audit(1707505255.947:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.950305 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:00:55.965450 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:00:55.964000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:55.983829 kernel: audit: type=1130 audit(1707505255.964:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:56.008274 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:00:56.010000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:56.011144 systemd[1]: Reached target time-set.target. 
Feb 9 19:00:56.020048 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:00:56.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:56.087835 systemd-resolved[1255]: Positive Trust Anchors: Feb 9 19:00:56.087853 systemd-resolved[1255]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:00:56.087892 systemd-resolved[1255]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:00:56.181991 augenrules[1272]: No rules Feb 9 19:00:56.181000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:00:56.181000 audit[1272]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdc3f2e9f0 a2=420 a3=0 items=0 ppid=1251 pid=1272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:00:56.181000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:00:56.183394 systemd[1]: Finished audit-rules.service. Feb 9 19:00:56.230810 systemd-resolved[1255]: Using system hostname 'ci-3510.3.2-a-075ad2fc80'. Feb 9 19:00:56.232603 systemd[1]: Started systemd-resolved.service. Feb 9 19:00:56.234774 systemd[1]: Reached target network.target. 
Feb 9 19:00:56.236674 systemd[1]: Reached target network-online.target. Feb 9 19:00:56.238601 systemd[1]: Reached target nss-lookup.target. Feb 9 19:01:02.226461 ldconfig[1236]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:01:02.236850 systemd[1]: Finished ldconfig.service. Feb 9 19:01:02.240318 systemd[1]: Starting systemd-update-done.service... Feb 9 19:01:02.248437 systemd[1]: Finished systemd-update-done.service. Feb 9 19:01:02.250539 systemd[1]: Reached target sysinit.target. Feb 9 19:01:02.252421 systemd[1]: Started motdgen.path. Feb 9 19:01:02.253989 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:01:02.256729 systemd[1]: Started logrotate.timer. Feb 9 19:01:02.258354 systemd[1]: Started mdadm.timer. Feb 9 19:01:02.259980 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:01:02.261819 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:01:02.261860 systemd[1]: Reached target paths.target. Feb 9 19:01:02.263637 systemd[1]: Reached target timers.target. Feb 9 19:01:02.265589 systemd[1]: Listening on dbus.socket. Feb 9 19:01:02.268194 systemd[1]: Starting docker.socket... Feb 9 19:01:02.290743 systemd[1]: Listening on sshd.socket. Feb 9 19:01:02.292809 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:01:02.293334 systemd[1]: Listening on docker.socket. Feb 9 19:01:02.295123 systemd[1]: Reached target sockets.target. Feb 9 19:01:02.296957 systemd[1]: Reached target basic.target. Feb 9 19:01:02.298646 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. 
Feb 9 19:01:02.298679 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:01:02.299842 systemd[1]: Starting containerd.service... Feb 9 19:01:02.303272 systemd[1]: Starting dbus.service... Feb 9 19:01:02.305873 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:01:02.308697 systemd[1]: Starting extend-filesystems.service... Feb 9 19:01:02.311146 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:01:02.312693 systemd[1]: Starting motdgen.service... Feb 9 19:01:02.315867 systemd[1]: Started nvidia.service. Feb 9 19:01:02.319516 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:01:02.323503 systemd[1]: Starting prepare-critools.service... Feb 9 19:01:02.326279 systemd[1]: Starting prepare-helm.service... Feb 9 19:01:02.329105 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:01:02.332338 systemd[1]: Starting sshd-keygen.service... Feb 9 19:01:02.339864 systemd[1]: Starting systemd-logind.service... Feb 9 19:01:02.342179 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:01:02.342259 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:01:02.342811 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:01:02.343675 systemd[1]: Starting update-engine.service... Feb 9 19:01:02.347302 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:01:02.359053 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:01:02.359291 systemd[1]: Finished ssh-key-proc-cmdline.service. 
Feb 9 19:01:02.423092 jq[1282]: false Feb 9 19:01:02.423564 jq[1299]: true Feb 9 19:01:02.424894 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:01:02.425117 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:01:02.441462 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:01:02.441646 systemd[1]: Finished motdgen.service. Feb 9 19:01:02.443256 jq[1307]: true Feb 9 19:01:02.459297 systemd-logind[1295]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:01:02.459935 systemd-logind[1295]: New seat seat0. Feb 9 19:01:02.487075 extend-filesystems[1283]: Found sda Feb 9 19:01:02.487075 extend-filesystems[1283]: Found sda1 Feb 9 19:01:02.487075 extend-filesystems[1283]: Found sda2 Feb 9 19:01:02.487075 extend-filesystems[1283]: Found sda3 Feb 9 19:01:02.487075 extend-filesystems[1283]: Found usr Feb 9 19:01:02.487075 extend-filesystems[1283]: Found sda4 Feb 9 19:01:02.487075 extend-filesystems[1283]: Found sda6 Feb 9 19:01:02.487075 extend-filesystems[1283]: Found sda7 Feb 9 19:01:02.487075 extend-filesystems[1283]: Found sda9 Feb 9 19:01:02.487075 extend-filesystems[1283]: Checking size of /dev/sda9 Feb 9 19:01:02.525638 env[1309]: time="2024-02-09T19:01:02.520603200Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:01:02.576149 env[1309]: time="2024-02-09T19:01:02.576087900Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:01:02.576481 env[1309]: time="2024-02-09T19:01:02.576454200Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:01:02.586271 extend-filesystems[1283]: Old size kept for /dev/sda9 Feb 9 19:01:02.588920 extend-filesystems[1283]: Found sr0 Feb 9 19:01:02.594718 systemd[1]: extend-filesystems.service: Deactivated successfully. 
Feb 9 19:01:02.594936 systemd[1]: Finished extend-filesystems.service. Feb 9 19:01:02.596492 env[1309]: time="2024-02-09T19:01:02.596453400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:01:02.596870 env[1309]: time="2024-02-09T19:01:02.596845600Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:01:02.597494 env[1309]: time="2024-02-09T19:01:02.597465600Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:01:02.597596 env[1309]: time="2024-02-09T19:01:02.597581100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:01:02.597671 env[1309]: time="2024-02-09T19:01:02.597656600Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:01:02.597732 env[1309]: time="2024-02-09T19:01:02.597720100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:01:02.597916 env[1309]: time="2024-02-09T19:01:02.597900800Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:01:02.598243 env[1309]: time="2024-02-09T19:01:02.598224300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:01:02.598496 env[1309]: time="2024-02-09T19:01:02.598474400Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:01:02.598571 env[1309]: time="2024-02-09T19:01:02.598557100Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:01:02.598690 env[1309]: time="2024-02-09T19:01:02.598674800Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:01:02.598776 env[1309]: time="2024-02-09T19:01:02.598742600Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613126000Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613198100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613215900Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613258100Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613278000Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613296500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613314400Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." 
type=io.containerd.service.v1 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613332700Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613350300Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613369100Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613386900Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613406100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613518400Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:01:02.614807 env[1309]: time="2024-02-09T19:01:02.613604500Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.613971300Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614006300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614026900Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614081200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." 
type=io.containerd.grpc.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614098900Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614123700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614142200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614160200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614177200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614193500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614210300Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614241300Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614381200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614399200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:01:02.615376 env[1309]: time="2024-02-09T19:01:02.614415600Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." 
type=io.containerd.grpc.v1 Feb 9 19:01:02.615902 env[1309]: time="2024-02-09T19:01:02.614431600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:01:02.615902 env[1309]: time="2024-02-09T19:01:02.614451900Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:01:02.615902 env[1309]: time="2024-02-09T19:01:02.614467500Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:01:02.615902 env[1309]: time="2024-02-09T19:01:02.614492100Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:01:02.615902 env[1309]: time="2024-02-09T19:01:02.614537200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 9 19:01:02.617155 env[1309]: time="2024-02-09T19:01:02.616485300Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin 
NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:01:02.617155 env[1309]: time="2024-02-09T19:01:02.616584800Z" level=info msg="Connect containerd service" Feb 9 19:01:02.617155 env[1309]: time="2024-02-09T19:01:02.616630400Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:01:02.699018 env[1309]: time="2024-02-09T19:01:02.628135200Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:01:02.699018 env[1309]: time="2024-02-09T19:01:02.628445800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:01:02.699018 env[1309]: time="2024-02-09T19:01:02.628489800Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Feb 9 19:01:02.699018 env[1309]: time="2024-02-09T19:01:02.628547500Z" level=info msg="containerd successfully booted in 0.109690s" Feb 9 19:01:02.699018 env[1309]: time="2024-02-09T19:01:02.635470100Z" level=info msg="Start subscribing containerd event" Feb 9 19:01:02.699018 env[1309]: time="2024-02-09T19:01:02.640368200Z" level=info msg="Start recovering state" Feb 9 19:01:02.699018 env[1309]: time="2024-02-09T19:01:02.640473800Z" level=info msg="Start event monitor" Feb 9 19:01:02.699018 env[1309]: time="2024-02-09T19:01:02.640494200Z" level=info msg="Start snapshots syncer" Feb 9 19:01:02.699018 env[1309]: time="2024-02-09T19:01:02.640512400Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:01:02.699018 env[1309]: time="2024-02-09T19:01:02.640527300Z" level=info msg="Start streaming server" Feb 9 19:01:02.699345 bash[1330]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:01:02.699473 tar[1304]: linux-amd64/helm Feb 9 19:01:02.699718 tar[1301]: ./ Feb 9 19:01:02.699718 tar[1301]: ./loopback Feb 9 19:01:02.628633 systemd[1]: Started containerd.service. Feb 9 19:01:02.700062 tar[1303]: crictl Feb 9 19:01:02.643006 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:01:02.723568 systemd[1]: nvidia.service: Deactivated successfully. Feb 9 19:01:02.739190 dbus-daemon[1281]: [system] SELinux support is enabled Feb 9 19:01:02.739364 systemd[1]: Started dbus.service. Feb 9 19:01:02.743785 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:01:02.743821 systemd[1]: Reached target system-config.target. Feb 9 19:01:02.745742 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:01:02.745780 systemd[1]: Reached target user-config.target. 
Feb 9 19:01:02.749602 systemd[1]: Started systemd-logind.service. Feb 9 19:01:02.750914 dbus-daemon[1281]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:01:02.804931 tar[1301]: ./bandwidth Feb 9 19:01:02.940058 tar[1301]: ./ptp Feb 9 19:01:03.054991 tar[1301]: ./vlan Feb 9 19:01:03.142405 tar[1301]: ./host-device Feb 9 19:01:03.228935 tar[1301]: ./tuning Feb 9 19:01:03.309169 tar[1301]: ./vrf Feb 9 19:01:03.344481 update_engine[1298]: I0209 19:01:03.344161 1298 main.cc:92] Flatcar Update Engine starting Feb 9 19:01:03.364537 sshd_keygen[1305]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:01:03.390297 tar[1301]: ./sbr Feb 9 19:01:03.392342 systemd[1]: Started update-engine.service. Feb 9 19:01:03.397036 systemd[1]: Started locksmithd.service. Feb 9 19:01:03.400687 update_engine[1298]: I0209 19:01:03.393548 1298 update_check_scheduler.cc:74] Next update check in 7m27s Feb 9 19:01:03.433567 systemd[1]: Finished sshd-keygen.service. Feb 9 19:01:03.437786 systemd[1]: Starting issuegen.service... Feb 9 19:01:03.441275 systemd[1]: Started waagent.service. Feb 9 19:01:03.448922 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:01:03.449133 systemd[1]: Finished issuegen.service. Feb 9 19:01:03.453278 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:01:03.466681 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:01:03.470984 systemd[1]: Started getty@tty1.service. Feb 9 19:01:03.475205 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:01:03.479394 systemd[1]: Reached target getty.target. Feb 9 19:01:03.504473 tar[1301]: ./tap Feb 9 19:01:03.607556 tar[1301]: ./dhcp Feb 9 19:01:03.755004 systemd[1]: Finished prepare-critools.service. Feb 9 19:01:03.762373 tar[1304]: linux-amd64/LICENSE Feb 9 19:01:03.762786 tar[1304]: linux-amd64/README.md Feb 9 19:01:03.768519 systemd[1]: Finished prepare-helm.service. 
Feb 9 19:01:03.799803 tar[1301]: ./static Feb 9 19:01:03.824366 tar[1301]: ./firewall Feb 9 19:01:03.862195 tar[1301]: ./macvlan Feb 9 19:01:03.895736 tar[1301]: ./dummy Feb 9 19:01:03.929222 tar[1301]: ./bridge Feb 9 19:01:03.966856 tar[1301]: ./ipvlan Feb 9 19:01:04.000816 tar[1301]: ./portmap Feb 9 19:01:04.032594 tar[1301]: ./host-local Feb 9 19:01:04.105158 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:01:04.108163 systemd[1]: Reached target multi-user.target. Feb 9 19:01:04.112206 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:01:04.120558 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:01:04.120708 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:01:04.123111 systemd[1]: Startup finished in 1.044s (firmware) + 28.484s (loader) + 900ms (kernel) + 20.958s (initrd) + 26.284s (userspace) = 1min 17.673s. Feb 9 19:01:04.461717 login[1402]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:01:04.463354 login[1403]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:01:04.487445 systemd[1]: Created slice user-500.slice. Feb 9 19:01:04.488897 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:01:04.492055 systemd-logind[1295]: New session 1 of user core. Feb 9 19:01:04.498723 systemd-logind[1295]: New session 2 of user core. Feb 9 19:01:04.502489 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:01:04.504202 systemd[1]: Starting user@500.service... Feb 9 19:01:04.522912 (systemd)[1414]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:04.656881 systemd[1414]: Queued start job for default target default.target. Feb 9 19:01:04.658053 systemd[1414]: Reached target paths.target. Feb 9 19:01:04.658088 systemd[1414]: Reached target sockets.target. Feb 9 19:01:04.658104 systemd[1414]: Reached target timers.target. 
Feb 9 19:01:04.658120 systemd[1414]: Reached target basic.target. Feb 9 19:01:04.658246 systemd[1]: Started user@500.service. Feb 9 19:01:04.659189 systemd[1]: Started session-1.scope. Feb 9 19:01:04.659790 systemd[1]: Started session-2.scope. Feb 9 19:01:04.661526 systemd[1414]: Reached target default.target. Feb 9 19:01:04.661582 systemd[1414]: Startup finished in 131ms. Feb 9 19:01:05.147233 locksmithd[1393]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:01:10.677556 waagent[1397]: 2024-02-09T19:01:10.677414Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 19:01:10.681823 waagent[1397]: 2024-02-09T19:01:10.681717Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 19:01:10.683989 waagent[1397]: 2024-02-09T19:01:10.683911Z INFO Daemon Daemon Python: 3.9.16 Feb 9 19:01:10.686187 waagent[1397]: 2024-02-09T19:01:10.686105Z INFO Daemon Daemon Run daemon Feb 9 19:01:10.688586 waagent[1397]: 2024-02-09T19:01:10.688519Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 19:01:10.701109 waagent[1397]: 2024-02-09T19:01:10.700984Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 19:01:10.707569 waagent[1397]: 2024-02-09T19:01:10.707445Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:01:10.734524 waagent[1397]: 2024-02-09T19:01:10.708987Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:01:10.734524 waagent[1397]: 2024-02-09T19:01:10.709703Z INFO Daemon Daemon Using waagent for provisioning Feb 9 19:01:10.734524 waagent[1397]: 2024-02-09T19:01:10.711069Z INFO Daemon Daemon Activate resource disk Feb 9 19:01:10.734524 waagent[1397]: 2024-02-09T19:01:10.711697Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 19:01:10.734524 waagent[1397]: 2024-02-09T19:01:10.719440Z INFO Daemon Daemon Found device: None Feb 9 19:01:10.734524 waagent[1397]: 2024-02-09T19:01:10.720559Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 19:01:10.734524 waagent[1397]: 2024-02-09T19:01:10.721347Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 19:01:10.734524 waagent[1397]: 2024-02-09T19:01:10.722985Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:01:10.734524 waagent[1397]: 2024-02-09T19:01:10.723832Z INFO Daemon Daemon Running default provisioning handler Feb 9 19:01:10.737189 waagent[1397]: 2024-02-09T19:01:10.737055Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 19:01:10.743707 waagent[1397]: 2024-02-09T19:01:10.743589Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:01:10.751435 waagent[1397]: 2024-02-09T19:01:10.745098Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:01:10.751435 waagent[1397]: 2024-02-09T19:01:10.745745Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 19:01:10.851999 waagent[1397]: 2024-02-09T19:01:10.851828Z INFO Daemon Daemon Successfully mounted dvd Feb 9 19:01:10.974251 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 19:01:10.996165 waagent[1397]: 2024-02-09T19:01:10.996025Z INFO Daemon Daemon Detect protocol endpoint Feb 9 19:01:11.009413 waagent[1397]: 2024-02-09T19:01:10.997432Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:01:11.009413 waagent[1397]: 2024-02-09T19:01:10.998328Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 9 19:01:11.009413 waagent[1397]: 2024-02-09T19:01:10.999048Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 19:01:11.009413 waagent[1397]: 2024-02-09T19:01:11.000023Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 19:01:11.009413 waagent[1397]: 2024-02-09T19:01:11.000610Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 19:01:11.126786 waagent[1397]: 2024-02-09T19:01:11.126690Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 19:01:11.134010 waagent[1397]: 2024-02-09T19:01:11.128489Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 19:01:11.134010 waagent[1397]: 2024-02-09T19:01:11.129131Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 19:01:11.661427 waagent[1397]: 2024-02-09T19:01:11.661272Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 19:01:11.670831 waagent[1397]: 2024-02-09T19:01:11.670735Z INFO Daemon Daemon Forcing an update of the goal state.. 
Feb 9 19:01:11.675387 waagent[1397]: 2024-02-09T19:01:11.671878Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 19:01:11.756008 waagent[1397]: 2024-02-09T19:01:11.755875Z INFO Daemon Daemon Found private key matching thumbprint 72599646ED232C05D754C75EB4D54D781DD81FA4 Feb 9 19:01:11.760674 waagent[1397]: 2024-02-09T19:01:11.760587Z INFO Daemon Daemon Certificate with thumbprint 86346F16651F093284B179BD93DDDF21B9279004 has no matching private key. Feb 9 19:01:11.765620 waagent[1397]: 2024-02-09T19:01:11.765544Z INFO Daemon Daemon Fetch goal state completed Feb 9 19:01:11.788774 waagent[1397]: 2024-02-09T19:01:11.788680Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 29860eae-de68-43c1-b370-838a219ae073 New eTag: 6186687817853407147] Feb 9 19:01:11.794029 waagent[1397]: 2024-02-09T19:01:11.793959Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:01:11.805483 waagent[1397]: 2024-02-09T19:01:11.805412Z INFO Daemon Daemon Starting provisioning Feb 9 19:01:11.807978 waagent[1397]: 2024-02-09T19:01:11.807907Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 19:01:11.810283 waagent[1397]: 2024-02-09T19:01:11.810222Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-075ad2fc80] Feb 9 19:01:11.832574 waagent[1397]: 2024-02-09T19:01:11.832420Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-075ad2fc80] Feb 9 19:01:11.835966 waagent[1397]: 2024-02-09T19:01:11.835872Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 19:01:11.838958 waagent[1397]: 2024-02-09T19:01:11.838897Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 19:01:11.852820 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 19:01:11.853074 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 19:01:11.853149 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 19:01:11.853504 systemd[1]: Stopping systemd-networkd.service... 
Feb 9 19:01:11.858799 systemd-networkd[1166]: eth0: DHCPv6 lease lost Feb 9 19:01:11.859207 systemd-timesyncd[1261]: Network configuration changed, trying to establish connection. Feb 9 19:01:11.860683 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:01:11.860889 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:01:11.863410 systemd[1]: Starting systemd-networkd.service... Feb 9 19:01:11.894469 systemd-networkd[1456]: enP37215s1: Link UP Feb 9 19:01:11.894480 systemd-networkd[1456]: enP37215s1: Gained carrier Feb 9 19:01:11.896212 systemd-networkd[1456]: eth0: Link UP Feb 9 19:01:11.896222 systemd-networkd[1456]: eth0: Gained carrier Feb 9 19:01:11.896663 systemd-networkd[1456]: lo: Link UP Feb 9 19:01:11.896672 systemd-networkd[1456]: lo: Gained carrier Feb 9 19:01:11.897010 systemd-networkd[1456]: eth0: Gained IPv6LL Feb 9 19:01:11.897290 systemd-networkd[1456]: Enumeration completed Feb 9 19:01:11.897415 systemd[1]: Started systemd-networkd.service. Feb 9 19:01:11.899615 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:01:11.901888 systemd-networkd[1456]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:01:11.904313 waagent[1397]: 2024-02-09T19:01:11.904126Z INFO Daemon Daemon Create user account if not exists Feb 9 19:01:11.908257 waagent[1397]: 2024-02-09T19:01:11.908169Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 19:01:11.909491 waagent[1397]: 2024-02-09T19:01:11.909423Z INFO Daemon Daemon Configure sudoer Feb 9 19:01:11.910746 waagent[1397]: 2024-02-09T19:01:11.910684Z INFO Daemon Daemon Configure sshd Feb 9 19:01:11.911963 waagent[1397]: 2024-02-09T19:01:11.911880Z INFO Daemon Daemon Deploy ssh public key. Feb 9 19:01:11.939890 systemd-networkd[1456]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:01:11.940272 systemd-timesyncd[1261]: Network configuration changed, trying to establish connection. 
Feb 9 19:01:11.941953 systemd-timesyncd[1261]: Network configuration changed, trying to establish connection. Feb 9 19:01:11.942602 systemd-timesyncd[1261]: Network configuration changed, trying to establish connection. Feb 9 19:01:11.944368 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:01:13.168669 waagent[1397]: 2024-02-09T19:01:13.168567Z INFO Daemon Daemon Provisioning complete Feb 9 19:01:13.187256 waagent[1397]: 2024-02-09T19:01:13.187162Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 19:01:13.193582 waagent[1397]: 2024-02-09T19:01:13.188468Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 19:01:13.193582 waagent[1397]: 2024-02-09T19:01:13.190104Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 19:01:13.463009 waagent[1465]: 2024-02-09T19:01:13.462832Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 19:01:13.463734 waagent[1465]: 2024-02-09T19:01:13.463664Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:01:13.463910 waagent[1465]: 2024-02-09T19:01:13.463849Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:01:13.474894 waagent[1465]: 2024-02-09T19:01:13.474809Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 19:01:13.475081 waagent[1465]: 2024-02-09T19:01:13.475023Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 19:01:13.537969 waagent[1465]: 2024-02-09T19:01:13.537821Z INFO ExtHandler ExtHandler Found private key matching thumbprint 72599646ED232C05D754C75EB4D54D781DD81FA4 Feb 9 19:01:13.538211 waagent[1465]: 2024-02-09T19:01:13.538146Z INFO ExtHandler ExtHandler Certificate with thumbprint 86346F16651F093284B179BD93DDDF21B9279004 has no matching private key. 
Feb 9 19:01:13.538445 waagent[1465]: 2024-02-09T19:01:13.538395Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 19:01:13.552204 waagent[1465]: 2024-02-09T19:01:13.552136Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: de3f64fa-9678-42b2-a84a-11def4f9375b New eTag: 6186687817853407147] Feb 9 19:01:13.552851 waagent[1465]: 2024-02-09T19:01:13.552788Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:01:13.624464 waagent[1465]: 2024-02-09T19:01:13.624311Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:01:13.647186 waagent[1465]: 2024-02-09T19:01:13.647088Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1465 Feb 9 19:01:13.650672 waagent[1465]: 2024-02-09T19:01:13.650583Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:01:13.651947 waagent[1465]: 2024-02-09T19:01:13.651881Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:01:13.739185 waagent[1465]: 2024-02-09T19:01:13.739060Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:01:13.739547 waagent[1465]: 2024-02-09T19:01:13.739476Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:01:13.747658 waagent[1465]: 2024-02-09T19:01:13.747593Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 9 19:01:13.748185 waagent[1465]: 2024-02-09T19:01:13.748117Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:01:13.749269 waagent[1465]: 2024-02-09T19:01:13.749204Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 19:01:13.750558 waagent[1465]: 2024-02-09T19:01:13.750497Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:01:13.751007 waagent[1465]: 2024-02-09T19:01:13.750949Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:01:13.751161 waagent[1465]: 2024-02-09T19:01:13.751112Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:01:13.751686 waagent[1465]: 2024-02-09T19:01:13.751629Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 19:01:13.751989 waagent[1465]: 2024-02-09T19:01:13.751932Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:01:13.751989 waagent[1465]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:01:13.751989 waagent[1465]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:01:13.751989 waagent[1465]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:01:13.751989 waagent[1465]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:01:13.751989 waagent[1465]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:01:13.751989 waagent[1465]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:01:13.754845 waagent[1465]: 2024-02-09T19:01:13.754738Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:01:13.755006 waagent[1465]: 2024-02-09T19:01:13.754908Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:01:13.755310 waagent[1465]: 2024-02-09T19:01:13.755254Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:01:13.755738 waagent[1465]: 2024-02-09T19:01:13.755682Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:01:13.755918 waagent[1465]: 2024-02-09T19:01:13.755868Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:01:13.756048 waagent[1465]: 2024-02-09T19:01:13.756003Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:01:13.756918 waagent[1465]: 2024-02-09T19:01:13.756858Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:01:13.756999 waagent[1465]: 2024-02-09T19:01:13.756941Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:01:13.757822 waagent[1465]: 2024-02-09T19:01:13.757745Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:01:13.758056 waagent[1465]: 2024-02-09T19:01:13.758004Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
Feb 9 19:01:13.759013 waagent[1465]: 2024-02-09T19:01:13.758968Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:01:13.767091 waagent[1465]: 2024-02-09T19:01:13.767036Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 19:01:13.768452 waagent[1465]: 2024-02-09T19:01:13.768402Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:01:13.769421 waagent[1465]: 2024-02-09T19:01:13.769365Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 19:01:13.785148 waagent[1465]: 2024-02-09T19:01:13.785083Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1456' Feb 9 19:01:13.818975 waagent[1465]: 2024-02-09T19:01:13.818884Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
Feb 9 19:01:13.878705 waagent[1465]: 2024-02-09T19:01:13.878553Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:01:13.878705 waagent[1465]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:01:13.878705 waagent[1465]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:01:13.878705 waagent[1465]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d8:fa:42 brd ff:ff:ff:ff:ff:ff Feb 9 19:01:13.878705 waagent[1465]: 3: enP37215s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d8:fa:42 brd ff:ff:ff:ff:ff:ff\ altname enP37215p0s2 Feb 9 19:01:13.878705 waagent[1465]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:01:13.878705 waagent[1465]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:01:13.878705 waagent[1465]: 2: eth0 inet 10.200.8.10/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:01:13.878705 waagent[1465]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:01:13.878705 waagent[1465]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:01:13.878705 waagent[1465]: 2: eth0 inet6 fe80::20d:3aff:fed8:fa42/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:01:14.175022 waagent[1465]: 2024-02-09T19:01:14.174943Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 19:01:15.194232 waagent[1397]: 2024-02-09T19:01:15.194055Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 19:01:15.200592 waagent[1397]: 2024-02-09T19:01:15.200512Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 19:01:16.198060 waagent[1503]: 
2024-02-09T19:01:16.197939Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 19:01:16.198830 waagent[1503]: 2024-02-09T19:01:16.198743Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 19:01:16.198992 waagent[1503]: 2024-02-09T19:01:16.198937Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 19:01:16.208774 waagent[1503]: 2024-02-09T19:01:16.208642Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:01:16.209218 waagent[1503]: 2024-02-09T19:01:16.209152Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:01:16.209392 waagent[1503]: 2024-02-09T19:01:16.209339Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:01:16.220975 waagent[1503]: 2024-02-09T19:01:16.220891Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 19:01:16.232777 waagent[1503]: 2024-02-09T19:01:16.232687Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 19:01:16.233816 waagent[1503]: 2024-02-09T19:01:16.233735Z INFO ExtHandler Feb 9 19:01:16.233984 waagent[1503]: 2024-02-09T19:01:16.233929Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: d647ad27-13af-4e63-98ba-60272286c9b3 eTag: 6186687817853407147 source: Fabric] Feb 9 19:01:16.234679 waagent[1503]: 2024-02-09T19:01:16.234621Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 9 19:01:16.235773 waagent[1503]: 2024-02-09T19:01:16.235706Z INFO ExtHandler Feb 9 19:01:16.235923 waagent[1503]: 2024-02-09T19:01:16.235871Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 19:01:16.242501 waagent[1503]: 2024-02-09T19:01:16.242447Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 19:01:16.242973 waagent[1503]: 2024-02-09T19:01:16.242924Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:01:16.262873 waagent[1503]: 2024-02-09T19:01:16.262799Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 19:01:16.330154 waagent[1503]: 2024-02-09T19:01:16.330013Z INFO ExtHandler Downloaded certificate {'thumbprint': '86346F16651F093284B179BD93DDDF21B9279004', 'hasPrivateKey': False} Feb 9 19:01:16.331196 waagent[1503]: 2024-02-09T19:01:16.331127Z INFO ExtHandler Downloaded certificate {'thumbprint': '72599646ED232C05D754C75EB4D54D781DD81FA4', 'hasPrivateKey': True} Feb 9 19:01:16.332201 waagent[1503]: 2024-02-09T19:01:16.332142Z INFO ExtHandler Fetch goal state completed Feb 9 19:01:16.356029 waagent[1503]: 2024-02-09T19:01:16.355943Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1503 Feb 9 19:01:16.359303 waagent[1503]: 2024-02-09T19:01:16.359231Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:01:16.360742 waagent[1503]: 2024-02-09T19:01:16.360683Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:01:16.365791 waagent[1503]: 2024-02-09T19:01:16.365723Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:01:16.366163 waagent[1503]: 2024-02-09T19:01:16.366106Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:01:16.374298 
waagent[1503]: 2024-02-09T19:01:16.374239Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 19:01:16.374816 waagent[1503]: 2024-02-09T19:01:16.374741Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:01:16.381307 waagent[1503]: 2024-02-09T19:01:16.381199Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 19:01:16.386029 waagent[1503]: 2024-02-09T19:01:16.385968Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 19:01:16.387464 waagent[1503]: 2024-02-09T19:01:16.387403Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:01:16.388343 waagent[1503]: 2024-02-09T19:01:16.388283Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:01:16.388567 waagent[1503]: 2024-02-09T19:01:16.388508Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:01:16.389085 waagent[1503]: 2024-02-09T19:01:16.389026Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:01:16.389235 waagent[1503]: 2024-02-09T19:01:16.389161Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:01:16.389298 waagent[1503]: 2024-02-09T19:01:16.389252Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:01:16.389556 waagent[1503]: 2024-02-09T19:01:16.389504Z INFO ExtHandler ExtHandler Start Extension Telemetry service. 
Feb 9 19:01:16.389648 waagent[1503]: 2024-02-09T19:01:16.389596Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:01:16.390381 waagent[1503]: 2024-02-09T19:01:16.390325Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 19:01:16.390792 waagent[1503]: 2024-02-09T19:01:16.390715Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:01:16.391963 waagent[1503]: 2024-02-09T19:01:16.391905Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:01:16.392126 waagent[1503]: 2024-02-09T19:01:16.392078Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:01:16.392373 waagent[1503]: 2024-02-09T19:01:16.392320Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 19:01:16.392575 waagent[1503]: 2024-02-09T19:01:16.392503Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:01:16.392575 waagent[1503]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:01:16.392575 waagent[1503]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:01:16.392575 waagent[1503]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:01:16.392575 waagent[1503]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:01:16.392575 waagent[1503]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:01:16.392575 waagent[1503]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:01:16.393310 waagent[1503]: 2024-02-09T19:01:16.393256Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:01:16.398590 waagent[1503]: 2024-02-09T19:01:16.398530Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:01:16.413378 waagent[1503]: 2024-02-09T19:01:16.413299Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 19:01:16.414577 
waagent[1503]: 2024-02-09T19:01:16.414505Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:01:16.414577 waagent[1503]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:01:16.414577 waagent[1503]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:01:16.414577 waagent[1503]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d8:fa:42 brd ff:ff:ff:ff:ff:ff Feb 9 19:01:16.414577 waagent[1503]: 3: enP37215s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d8:fa:42 brd ff:ff:ff:ff:ff:ff\ altname enP37215p0s2 Feb 9 19:01:16.414577 waagent[1503]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:01:16.414577 waagent[1503]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:01:16.414577 waagent[1503]: 2: eth0 inet 10.200.8.10/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:01:16.414577 waagent[1503]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:01:16.414577 waagent[1503]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:01:16.414577 waagent[1503]: 2: eth0 inet6 fe80::20d:3aff:fed8:fa42/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:01:16.420678 waagent[1503]: 2024-02-09T19:01:16.420567Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 19:01:16.519152 waagent[1503]: 2024-02-09T19:01:16.519073Z INFO ExtHandler ExtHandler Feb 9 19:01:16.519594 waagent[1503]: 2024-02-09T19:01:16.519515Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 53cb06c5-64ba-4f4a-9a6e-832fd329584f correlation 53e2dcf8-5581-4dff-828a-902077be165c created: 2024-02-09T18:59:35.827698Z] Feb 9 19:01:16.523353 waagent[1503]: 2024-02-09T19:01:16.523268Z INFO ExtHandler ExtHandler No 
extension handlers found, not processing anything. Feb 9 19:01:16.525737 waagent[1503]: 2024-02-09T19:01:16.525677Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 6 ms] Feb 9 19:01:16.538735 waagent[1503]: 2024-02-09T19:01:16.538629Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Feb 9 19:01:16.538735 waagent[1503]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:01:16.538735 waagent[1503]: pkts bytes target prot opt in out source destination Feb 9 19:01:16.538735 waagent[1503]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:01:16.538735 waagent[1503]: pkts bytes target prot opt in out source destination Feb 9 19:01:16.538735 waagent[1503]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:01:16.538735 waagent[1503]: pkts bytes target prot opt in out source destination Feb 9 19:01:16.538735 waagent[1503]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:01:16.538735 waagent[1503]: 8 1794 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:01:16.538735 waagent[1503]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:01:16.547865 waagent[1503]: 2024-02-09T19:01:16.547741Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 19:01:16.547865 waagent[1503]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:01:16.547865 waagent[1503]: pkts bytes target prot opt in out source destination Feb 9 19:01:16.547865 waagent[1503]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:01:16.547865 waagent[1503]: pkts bytes target prot opt in out source destination Feb 9 19:01:16.547865 waagent[1503]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:01:16.547865 waagent[1503]: pkts bytes target prot opt in out source destination Feb 9 19:01:16.547865 waagent[1503]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:01:16.547865 waagent[1503]: 13 4439 ACCEPT tcp -- * * 0.0.0.0/0 
168.63.129.16 owner UID match 0 Feb 9 19:01:16.547865 waagent[1503]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:01:16.548436 waagent[1503]: 2024-02-09T19:01:16.548381Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 19:01:16.554117 waagent[1503]: 2024-02-09T19:01:16.554051Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 19:01:16.563690 waagent[1503]: 2024-02-09T19:01:16.563611Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 88D5806A-5CC9-4688-883C-81CCD63D2595;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 19:01:41.030583 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 9 19:01:42.049538 systemd-timesyncd[1261]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Feb 9 19:01:42.049618 systemd-timesyncd[1261]: Initial clock synchronization to Fri 2024-02-09 19:01:42.049663 UTC. Feb 9 19:01:43.136788 systemd[1]: Created slice system-sshd.slice. Feb 9 19:01:43.138857 systemd[1]: Started sshd@0-10.200.8.10:22-10.200.12.6:41450.service. Feb 9 19:01:44.028005 sshd[1548]: Accepted publickey for core from 10.200.12.6 port 41450 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:01:44.029674 sshd[1548]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:44.033813 systemd-logind[1295]: New session 3 of user core. Feb 9 19:01:44.035044 systemd[1]: Started session-3.scope. Feb 9 19:01:44.560340 systemd[1]: Started sshd@1-10.200.8.10:22-10.200.12.6:41456.service. Feb 9 19:01:45.177210 sshd[1553]: Accepted publickey for core from 10.200.12.6 port 41456 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:01:45.178886 sshd[1553]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:45.184382 systemd[1]: Started session-4.scope. 
Feb 9 19:01:45.185039 systemd-logind[1295]: New session 4 of user core. Feb 9 19:01:45.613204 sshd[1553]: pam_unix(sshd:session): session closed for user core Feb 9 19:01:45.616527 systemd[1]: sshd@1-10.200.8.10:22-10.200.12.6:41456.service: Deactivated successfully. Feb 9 19:01:45.617541 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:01:45.618285 systemd-logind[1295]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:01:45.619198 systemd-logind[1295]: Removed session 4. Feb 9 19:01:45.717576 systemd[1]: Started sshd@2-10.200.8.10:22-10.200.12.6:41464.service. Feb 9 19:01:46.338240 sshd[1559]: Accepted publickey for core from 10.200.12.6 port 41464 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:01:46.362987 sshd[1559]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:46.367845 systemd-logind[1295]: New session 5 of user core. Feb 9 19:01:46.367999 systemd[1]: Started session-5.scope. Feb 9 19:01:46.772280 sshd[1559]: pam_unix(sshd:session): session closed for user core Feb 9 19:01:46.775550 systemd[1]: sshd@2-10.200.8.10:22-10.200.12.6:41464.service: Deactivated successfully. Feb 9 19:01:46.776545 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:01:46.777333 systemd-logind[1295]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:01:46.778259 systemd-logind[1295]: Removed session 5. Feb 9 19:01:46.876161 systemd[1]: Started sshd@3-10.200.8.10:22-10.200.12.6:41474.service. Feb 9 19:01:47.497715 sshd[1565]: Accepted publickey for core from 10.200.12.6 port 41474 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:01:47.499123 sshd[1565]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:47.504726 systemd[1]: Started session-6.scope. Feb 9 19:01:47.505369 systemd-logind[1295]: New session 6 of user core. 
Feb 9 19:01:47.934708 sshd[1565]: pam_unix(sshd:session): session closed for user core Feb 9 19:01:47.937934 systemd[1]: sshd@3-10.200.8.10:22-10.200.12.6:41474.service: Deactivated successfully. Feb 9 19:01:47.939300 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:01:47.939542 systemd-logind[1295]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:01:47.940690 systemd-logind[1295]: Removed session 6. Feb 9 19:01:48.039075 systemd[1]: Started sshd@4-10.200.8.10:22-10.200.12.6:43880.service. Feb 9 19:01:48.656982 sshd[1571]: Accepted publickey for core from 10.200.12.6 port 43880 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:01:48.658613 sshd[1571]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:01:48.663324 systemd[1]: Started session-7.scope. Feb 9 19:01:48.663946 systemd-logind[1295]: New session 7 of user core. Feb 9 19:01:48.980804 update_engine[1298]: I0209 19:01:48.980156 1298 update_attempter.cc:509] Updating boot flags... Feb 9 19:01:49.296376 sudo[1578]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:01:49.296721 sudo[1578]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:01:50.444703 systemd[1]: Starting docker.service... 
Feb 9 19:01:50.496597 env[1655]: time="2024-02-09T19:01:50.496527602Z" level=info msg="Starting up" Feb 9 19:01:50.497875 env[1655]: time="2024-02-09T19:01:50.497841617Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:01:50.498000 env[1655]: time="2024-02-09T19:01:50.497937518Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:01:50.498000 env[1655]: time="2024-02-09T19:01:50.497969319Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:01:50.498000 env[1655]: time="2024-02-09T19:01:50.497983119Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:01:50.499667 env[1655]: time="2024-02-09T19:01:50.499636338Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:01:50.499667 env[1655]: time="2024-02-09T19:01:50.499655938Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:01:50.499843 env[1655]: time="2024-02-09T19:01:50.499672739Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:01:50.499843 env[1655]: time="2024-02-09T19:01:50.499683939Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:01:50.507435 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4094524897-merged.mount: Deactivated successfully. Feb 9 19:01:50.591957 env[1655]: time="2024-02-09T19:01:50.591891112Z" level=info msg="Loading containers: start." Feb 9 19:01:50.759779 kernel: Initializing XFRM netlink socket Feb 9 19:01:50.799598 env[1655]: time="2024-02-09T19:01:50.799552628Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. 
Daemon option --bip can be used to set a preferred IP address" Feb 9 19:01:50.908066 systemd-networkd[1456]: docker0: Link UP Feb 9 19:01:50.926287 env[1655]: time="2024-02-09T19:01:50.926241303Z" level=info msg="Loading containers: done." Feb 9 19:01:50.941996 env[1655]: time="2024-02-09T19:01:50.941947186Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:01:50.942211 env[1655]: time="2024-02-09T19:01:50.942177088Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:01:50.942329 env[1655]: time="2024-02-09T19:01:50.942305090Z" level=info msg="Daemon has completed initialization" Feb 9 19:01:50.971434 systemd[1]: Started docker.service. Feb 9 19:01:50.980831 env[1655]: time="2024-02-09T19:01:50.980740437Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:01:50.997670 systemd[1]: Reloading. Feb 9 19:01:51.065856 /usr/lib/systemd/system-generators/torcx-generator[1783]: time="2024-02-09T19:01:51Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:01:51.065894 /usr/lib/systemd/system-generators/torcx-generator[1783]: time="2024-02-09T19:01:51Z" level=info msg="torcx already run" Feb 9 19:01:51.169769 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:01:51.169790 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 19:01:51.187850 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:01:51.274175 systemd[1]: Started kubelet.service. Feb 9 19:01:51.349502 kubelet[1845]: E0209 19:01:51.349144 1845 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:01:51.351271 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:01:51.351431 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:01:56.494545 env[1309]: time="2024-02-09T19:01:56.494469530Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\"" Feb 9 19:01:57.151964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2079144043.mount: Deactivated successfully. 
Feb 9 19:01:59.350214 env[1309]: time="2024-02-09T19:01:59.350153560Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.355866 env[1309]: time="2024-02-09T19:01:59.355820297Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.362490 env[1309]: time="2024-02-09T19:01:59.362447540Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.366467 env[1309]: time="2024-02-09T19:01:59.366424866Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:98a686df810b9f1de8e3b2ae869e79c51a36e7434d33c53f011852618aec0a68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:01:59.367088 env[1309]: time="2024-02-09T19:01:59.367054670Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.28.6\" returns image reference \"sha256:70e88c5e3a8e409ff4604a5fdb1dacb736ea02ba0b7a3da635f294e953906f47\"" Feb 9 19:01:59.377513 env[1309]: time="2024-02-09T19:01:59.377473438Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\"" Feb 9 19:02:01.258807 env[1309]: time="2024-02-09T19:02:01.258733875Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:01.265298 env[1309]: time="2024-02-09T19:02:01.265250712Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" 
Feb 9 19:02:01.270989 env[1309]: time="2024-02-09T19:02:01.270946045Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:01.277092 env[1309]: time="2024-02-09T19:02:01.277048080Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:80bdcd72cfe26028bb2fed75732fc2f511c35fa8d1edc03deae11f3490713c9e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:01.277740 env[1309]: time="2024-02-09T19:02:01.277702283Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.28.6\" returns image reference \"sha256:18dbd2df3bb54036300d2af8b20ef60d479173946ff089a4d16e258b27faa55c\"" Feb 9 19:02:01.288281 env[1309]: time="2024-02-09T19:02:01.288250644Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\"" Feb 9 19:02:01.508328 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:02:01.508668 systemd[1]: Stopped kubelet.service. Feb 9 19:02:01.511012 systemd[1]: Started kubelet.service. Feb 9 19:02:01.561472 kubelet[1874]: E0209 19:02:01.561419 1874 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:02:01.564901 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:02:01.565064 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 9 19:02:02.866890 env[1309]: time="2024-02-09T19:02:02.866822069Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:02.877946 env[1309]: time="2024-02-09T19:02:02.877892528Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:02.883645 env[1309]: time="2024-02-09T19:02:02.883600359Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:02.890028 env[1309]: time="2024-02-09T19:02:02.889982593Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:a89db556c34d652d403d909882dbd97336f2e935b1c726b2e2b2c0400186ac39,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:02.890589 env[1309]: time="2024-02-09T19:02:02.890555096Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.28.6\" returns image reference \"sha256:7597ecaaf12074e2980eee086736dbd01e566dc266351560001aa47dbbb0e5fe\"" Feb 9 19:02:02.901094 env[1309]: time="2024-02-09T19:02:02.901047653Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 19:02:04.149350 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1248192838.mount: Deactivated successfully. 
Feb 9 19:02:04.705508 env[1309]: time="2024-02-09T19:02:04.705447045Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:04.710471 env[1309]: time="2024-02-09T19:02:04.710424168Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:04.714038 env[1309]: time="2024-02-09T19:02:04.713987285Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:04.721148 env[1309]: time="2024-02-09T19:02:04.721104418Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:04.721458 env[1309]: time="2024-02-09T19:02:04.721424520Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 9 19:02:04.732085 env[1309]: time="2024-02-09T19:02:04.732036270Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:02:05.274695 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1906517954.mount: Deactivated successfully. 
Feb 9 19:02:05.298980 env[1309]: time="2024-02-09T19:02:05.298924755Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:05.308122 env[1309]: time="2024-02-09T19:02:05.308074695Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:05.314652 env[1309]: time="2024-02-09T19:02:05.314611124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:05.320287 env[1309]: time="2024-02-09T19:02:05.320246449Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:05.320712 env[1309]: time="2024-02-09T19:02:05.320679151Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:02:05.330996 env[1309]: time="2024-02-09T19:02:05.330960096Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\"" Feb 9 19:02:05.887038 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3402616994.mount: Deactivated successfully. 
Feb 9 19:02:11.243827 env[1309]: time="2024-02-09T19:02:11.243769879Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:11.250381 env[1309]: time="2024-02-09T19:02:11.250331299Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:11.254044 env[1309]: time="2024-02-09T19:02:11.254003710Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.9-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:11.260246 env[1309]: time="2024-02-09T19:02:11.260208528Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:11.260919 env[1309]: time="2024-02-09T19:02:11.260881530Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.9-0\" returns image reference \"sha256:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9\"" Feb 9 19:02:11.271941 env[1309]: time="2024-02-09T19:02:11.271904263Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 19:02:11.758129 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:02:11.758419 systemd[1]: Stopped kubelet.service. Feb 9 19:02:11.760270 systemd[1]: Started kubelet.service. Feb 9 19:02:11.824961 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2236102016.mount: Deactivated successfully. 
Feb 9 19:02:11.838302 kubelet[1900]: E0209 19:02:11.838256 1900 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:02:11.843334 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:02:11.843450 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:02:12.548321 env[1309]: time="2024-02-09T19:02:12.548252192Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:12.555530 env[1309]: time="2024-02-09T19:02:12.555482912Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:12.559513 env[1309]: time="2024-02-09T19:02:12.559474923Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:12.563206 env[1309]: time="2024-02-09T19:02:12.563168334Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:12.563700 env[1309]: time="2024-02-09T19:02:12.563669135Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 9 19:02:15.142074 systemd[1]: Stopped kubelet.service. Feb 9 19:02:15.157602 systemd[1]: Reloading. 
Feb 9 19:02:15.241747 /usr/lib/systemd/system-generators/torcx-generator[1991]: time="2024-02-09T19:02:15Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:02:15.242218 /usr/lib/systemd/system-generators/torcx-generator[1991]: time="2024-02-09T19:02:15Z" level=info msg="torcx already run"
Feb 9 19:02:15.329038 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:02:15.329057 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:02:15.347176 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:02:15.438938 systemd[1]: Started kubelet.service.
Feb 9 19:02:15.485846 kubelet[2053]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:02:15.486195 kubelet[2053]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:02:15.486237 kubelet[2053]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:02:15.486419 kubelet[2053]: I0209 19:02:15.486385 2053 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:02:15.737582 kubelet[2053]: I0209 19:02:15.737468 2053 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 9 19:02:15.737582 kubelet[2053]: I0209 19:02:15.737497 2053 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:02:15.738168 kubelet[2053]: I0209 19:02:15.738134 2053 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 9 19:02:15.742532 kubelet[2053]: E0209 19:02:15.742506 2053 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:15.742690 kubelet[2053]: I0209 19:02:15.742670 2053 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:02:15.747494 kubelet[2053]: I0209 19:02:15.747471 2053 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:02:15.747739 kubelet[2053]: I0209 19:02:15.747719 2053 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:02:15.747931 kubelet[2053]: I0209 19:02:15.747912 2053 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 9 19:02:15.748070 kubelet[2053]: I0209 19:02:15.747939 2053 topology_manager.go:138] "Creating topology manager with none policy"
Feb 9 19:02:15.748070 kubelet[2053]: I0209 19:02:15.747951 2053 container_manager_linux.go:301] "Creating device plugin manager"
Feb 9 19:02:15.748070 kubelet[2053]: I0209 19:02:15.748067 2053 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:02:15.748195 kubelet[2053]: I0209 19:02:15.748172 2053 kubelet.go:393] "Attempting to sync node with API server"
Feb 9 19:02:15.748195 kubelet[2053]: I0209 19:02:15.748195 2053 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:02:15.748267 kubelet[2053]: I0209 19:02:15.748225 2053 kubelet.go:309] "Adding apiserver pod source"
Feb 9 19:02:15.748267 kubelet[2053]: I0209 19:02:15.748248 2053 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:02:15.749130 kubelet[2053]: W0209 19:02:15.749085 2053 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-075ad2fc80&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:15.749255 kubelet[2053]: E0209 19:02:15.749244 2053 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-075ad2fc80&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:15.749416 kubelet[2053]: W0209 19:02:15.749381 2053 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:15.749502 kubelet[2053]: E0209 19:02:15.749493 2053 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:15.749653 kubelet[2053]: I0209 19:02:15.749640 2053 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:02:15.750006 kubelet[2053]: W0209 19:02:15.749991 2053 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 19:02:15.750826 kubelet[2053]: I0209 19:02:15.750807 2053 server.go:1232] "Started kubelet"
Feb 9 19:02:15.756375 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 19:02:15.756443 kubelet[2053]: E0209 19:02:15.753850 2053 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-075ad2fc80.17b2470e5d82772e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-075ad2fc80", UID:"ci-3510.3.2-a-075ad2fc80", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-075ad2fc80"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 15, 750784814, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 15, 750784814, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-075ad2fc80"}': 'Post "https://10.200.8.10:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.10:6443: connect: connection refused'(may retry after sleeping)
Feb 9 19:02:15.756443 kubelet[2053]: I0209 19:02:15.754000 2053 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 9 19:02:15.756443 kubelet[2053]: I0209 19:02:15.754220 2053 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 9 19:02:15.756443 kubelet[2053]: I0209 19:02:15.754257 2053 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:02:15.756443 kubelet[2053]: I0209 19:02:15.754916 2053 server.go:462] "Adding debug handlers to kubelet server"
Feb 9 19:02:15.757037 kubelet[2053]: I0209 19:02:15.757023 2053 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:02:15.757183 kubelet[2053]: E0209 19:02:15.757166 2053 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:02:15.757241 kubelet[2053]: E0209 19:02:15.757194 2053 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:02:15.759011 kubelet[2053]: I0209 19:02:15.758993 2053 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 9 19:02:15.759990 kubelet[2053]: I0209 19:02:15.759973 2053 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:02:15.761110 kubelet[2053]: I0209 19:02:15.761089 2053 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 9 19:02:15.761987 kubelet[2053]: W0209 19:02:15.761730 2053 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:15.762080 kubelet[2053]: E0209 19:02:15.761989 2053 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:15.762307 kubelet[2053]: E0209 19:02:15.762282 2053 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-075ad2fc80?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="200ms"
Feb 9 19:02:15.830796 kubelet[2053]: I0209 19:02:15.830733 2053 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:02:15.830960 kubelet[2053]: I0209 19:02:15.830942 2053 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:02:15.831058 kubelet[2053]: I0209 19:02:15.830969 2053 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:02:15.836004 kubelet[2053]: I0209 19:02:15.835972 2053 policy_none.go:49] "None policy: Start"
Feb 9 19:02:15.836746 kubelet[2053]: I0209 19:02:15.836723 2053 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:02:15.836886 kubelet[2053]: I0209 19:02:15.836782 2053 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:02:15.844410 systemd[1]: Created slice kubepods.slice.
Feb 9 19:02:15.848626 systemd[1]: Created slice kubepods-burstable.slice.
Feb 9 19:02:15.854135 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 9 19:02:15.857111 kubelet[2053]: I0209 19:02:15.857089 2053 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 9 19:02:15.858469 kubelet[2053]: I0209 19:02:15.858445 2053 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:02:15.858680 kubelet[2053]: I0209 19:02:15.858663 2053 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:02:15.860458 kubelet[2053]: I0209 19:02:15.859666 2053 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 9 19:02:15.860458 kubelet[2053]: I0209 19:02:15.859686 2053 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 9 19:02:15.860458 kubelet[2053]: I0209 19:02:15.859720 2053 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 9 19:02:15.860458 kubelet[2053]: E0209 19:02:15.859802 2053 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 9 19:02:15.860458 kubelet[2053]: W0209 19:02:15.860371 2053 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:15.860458 kubelet[2053]: E0209 19:02:15.860402 2053 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:15.860736 kubelet[2053]: E0209 19:02:15.860590 2053 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-075ad2fc80\" not found"
Feb 9 19:02:15.862776 kubelet[2053]: I0209 19:02:15.862744 2053 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:15.863284 kubelet[2053]: E0209 19:02:15.863223 2053 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:15.960919 kubelet[2053]: I0209 19:02:15.960872 2053 topology_manager.go:215] "Topology Admit Handler" podUID="43b18734350b624d63fd3a02b2eaee96" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:15.961668 kubelet[2053]: I0209 19:02:15.961553 2053 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43b18734350b624d63fd3a02b2eaee96-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-075ad2fc80\" (UID: \"43b18734350b624d63fd3a02b2eaee96\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:15.961668 kubelet[2053]: I0209 19:02:15.961608 2053 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43b18734350b624d63fd3a02b2eaee96-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-075ad2fc80\" (UID: \"43b18734350b624d63fd3a02b2eaee96\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:15.961668 kubelet[2053]: I0209 19:02:15.961643 2053 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43b18734350b624d63fd3a02b2eaee96-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-075ad2fc80\" (UID: \"43b18734350b624d63fd3a02b2eaee96\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:15.962812 kubelet[2053]: E0209 19:02:15.962780 2053 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-075ad2fc80?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="400ms"
Feb 9 19:02:15.963070 kubelet[2053]: I0209 19:02:15.963043 2053 topology_manager.go:215] "Topology Admit Handler" podUID="f3f80c2f57ebee28a8f008674f7847f3" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:15.964764 kubelet[2053]: I0209 19:02:15.964733 2053 topology_manager.go:215] "Topology Admit Handler" podUID="8c12e7118ddf523434e9ce4366560805" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:15.971037 systemd[1]: Created slice kubepods-burstable-pod43b18734350b624d63fd3a02b2eaee96.slice.
Feb 9 19:02:15.979663 systemd[1]: Created slice kubepods-burstable-podf3f80c2f57ebee28a8f008674f7847f3.slice.
Feb 9 19:02:15.984407 systemd[1]: Created slice kubepods-burstable-pod8c12e7118ddf523434e9ce4366560805.slice.
Feb 9 19:02:16.061852 kubelet[2053]: I0209 19:02:16.061801 2053 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3f80c2f57ebee28a8f008674f7847f3-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-075ad2fc80\" (UID: \"f3f80c2f57ebee28a8f008674f7847f3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:16.062088 kubelet[2053]: I0209 19:02:16.061885 2053 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3f80c2f57ebee28a8f008674f7847f3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-075ad2fc80\" (UID: \"f3f80c2f57ebee28a8f008674f7847f3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:16.062088 kubelet[2053]: I0209 19:02:16.061919 2053 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3f80c2f57ebee28a8f008674f7847f3-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-075ad2fc80\" (UID: \"f3f80c2f57ebee28a8f008674f7847f3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:16.062088 kubelet[2053]: I0209 19:02:16.061949 2053 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f3f80c2f57ebee28a8f008674f7847f3-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-075ad2fc80\" (UID: \"f3f80c2f57ebee28a8f008674f7847f3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:16.062088 kubelet[2053]: I0209 19:02:16.061979 2053 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3f80c2f57ebee28a8f008674f7847f3-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-075ad2fc80\" (UID: \"f3f80c2f57ebee28a8f008674f7847f3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:16.062321 kubelet[2053]: I0209 19:02:16.062099 2053 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c12e7118ddf523434e9ce4366560805-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-075ad2fc80\" (UID: \"8c12e7118ddf523434e9ce4366560805\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:16.065252 kubelet[2053]: I0209 19:02:16.065224 2053 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:16.065665 kubelet[2053]: E0209 19:02:16.065622 2053 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:16.279200 env[1309]: time="2024-02-09T19:02:16.279146200Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-075ad2fc80,Uid:43b18734350b624d63fd3a02b2eaee96,Namespace:kube-system,Attempt:0,}"
Feb 9 19:02:16.284143 env[1309]: time="2024-02-09T19:02:16.284102410Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-075ad2fc80,Uid:f3f80c2f57ebee28a8f008674f7847f3,Namespace:kube-system,Attempt:0,}"
Feb 9 19:02:16.287170 env[1309]: time="2024-02-09T19:02:16.287131517Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-075ad2fc80,Uid:8c12e7118ddf523434e9ce4366560805,Namespace:kube-system,Attempt:0,}"
Feb 9 19:02:16.364028 kubelet[2053]: E0209 19:02:16.363922 2053 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-075ad2fc80?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="800ms"
Feb 9 19:02:16.467230 kubelet[2053]: I0209 19:02:16.467192 2053 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:16.467558 kubelet[2053]: E0209 19:02:16.467533 2053 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:16.551148 kubelet[2053]: W0209 19:02:16.551083 2053 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-075ad2fc80&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:16.551148 kubelet[2053]: E0209 19:02:16.551152 2053 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-075ad2fc80&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:16.718670 kubelet[2053]: W0209 19:02:16.718507 2053 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:16.718670 kubelet[2053]: E0209 19:02:16.718582 2053 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:16.792420 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2936739077.mount: Deactivated successfully.
Feb 9 19:02:16.823947 env[1309]: time="2024-02-09T19:02:16.823884483Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:02:16.827498 env[1309]: time="2024-02-09T19:02:16.827455591Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:02:16.837284 env[1309]: time="2024-02-09T19:02:16.837244512Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:02:16.839922 env[1309]: time="2024-02-09T19:02:16.839888818Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:02:16.843657 env[1309]: time="2024-02-09T19:02:16.843622226Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:02:16.846693 env[1309]: time="2024-02-09T19:02:16.846653233Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:02:16.852327 env[1309]: time="2024-02-09T19:02:16.852287245Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:02:16.857089 env[1309]: time="2024-02-09T19:02:16.857052756Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:02:16.865117 env[1309]: time="2024-02-09T19:02:16.865073173Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:02:16.871079 env[1309]: time="2024-02-09T19:02:16.871038686Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:02:16.879386 env[1309]: time="2024-02-09T19:02:16.879346504Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:02:16.888132 env[1309]: time="2024-02-09T19:02:16.888090323Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:02:16.926519 env[1309]: time="2024-02-09T19:02:16.926428206Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:02:16.926519 env[1309]: time="2024-02-09T19:02:16.926466506Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:02:16.926829 env[1309]: time="2024-02-09T19:02:16.926481806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:02:16.926829 env[1309]: time="2024-02-09T19:02:16.926689207Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ca756f50e7042daa612fe0c4566b2db8631352e030f10032bfd7bf560020565a pid=2093 runtime=io.containerd.runc.v2
Feb 9 19:02:16.948106 systemd[1]: Started cri-containerd-ca756f50e7042daa612fe0c4566b2db8631352e030f10032bfd7bf560020565a.scope.
Feb 9 19:02:16.978882 env[1309]: time="2024-02-09T19:02:16.976001614Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:02:16.978882 env[1309]: time="2024-02-09T19:02:16.976045914Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:02:16.978882 env[1309]: time="2024-02-09T19:02:16.976070414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:02:16.978882 env[1309]: time="2024-02-09T19:02:16.976225015Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/73c9d12943ee948f2cb5cbcce5caa596bff05adc6d4c85f649b3210570565209 pid=2129 runtime=io.containerd.runc.v2
Feb 9 19:02:17.011032 env[1309]: time="2024-02-09T19:02:17.010923789Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:02:17.011258 env[1309]: time="2024-02-09T19:02:17.010985589Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:02:17.011258 env[1309]: time="2024-02-09T19:02:17.011001489Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:02:17.011258 env[1309]: time="2024-02-09T19:02:17.011183989Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c4651663b0fb4650823578e06ec19d8de0ac74e9a10fbdddabff754fcacf7696 pid=2149 runtime=io.containerd.runc.v2
Feb 9 19:02:17.017778 env[1309]: time="2024-02-09T19:02:17.017707702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-075ad2fc80,Uid:f3f80c2f57ebee28a8f008674f7847f3,Namespace:kube-system,Attempt:0,} returns sandbox id \"ca756f50e7042daa612fe0c4566b2db8631352e030f10032bfd7bf560020565a\""
Feb 9 19:02:17.018967 systemd[1]: Started cri-containerd-73c9d12943ee948f2cb5cbcce5caa596bff05adc6d4c85f649b3210570565209.scope.
Feb 9 19:02:17.025121 env[1309]: time="2024-02-09T19:02:17.025079917Z" level=info msg="CreateContainer within sandbox \"ca756f50e7042daa612fe0c4566b2db8631352e030f10032bfd7bf560020565a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Feb 9 19:02:17.052415 systemd[1]: Started cri-containerd-c4651663b0fb4650823578e06ec19d8de0ac74e9a10fbdddabff754fcacf7696.scope.
Feb 9 19:02:17.069662 env[1309]: time="2024-02-09T19:02:17.069603708Z" level=info msg="CreateContainer within sandbox \"ca756f50e7042daa612fe0c4566b2db8631352e030f10032bfd7bf560020565a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"787525e3cdab18bf3bc3d07d5507416652d41c98d21b2bb162d64dc7fffb01d0\""
Feb 9 19:02:17.070418 env[1309]: time="2024-02-09T19:02:17.070387110Z" level=info msg="StartContainer for \"787525e3cdab18bf3bc3d07d5507416652d41c98d21b2bb162d64dc7fffb01d0\""
Feb 9 19:02:17.098952 systemd[1]: Started cri-containerd-787525e3cdab18bf3bc3d07d5507416652d41c98d21b2bb162d64dc7fffb01d0.scope.
Feb 9 19:02:17.106230 env[1309]: time="2024-02-09T19:02:17.106182583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-075ad2fc80,Uid:8c12e7118ddf523434e9ce4366560805,Namespace:kube-system,Attempt:0,} returns sandbox id \"73c9d12943ee948f2cb5cbcce5caa596bff05adc6d4c85f649b3210570565209\""
Feb 9 19:02:17.109944 kubelet[2053]: W0209 19:02:17.109855 2053 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:17.109944 kubelet[2053]: E0209 19:02:17.109900 2053 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:17.110254 env[1309]: time="2024-02-09T19:02:17.109416689Z" level=info msg="CreateContainer within sandbox \"73c9d12943ee948f2cb5cbcce5caa596bff05adc6d4c85f649b3210570565209\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Feb 9 19:02:17.129511 env[1309]: time="2024-02-09T19:02:17.129458930Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-075ad2fc80,Uid:43b18734350b624d63fd3a02b2eaee96,Namespace:kube-system,Attempt:0,} returns sandbox id \"c4651663b0fb4650823578e06ec19d8de0ac74e9a10fbdddabff754fcacf7696\""
Feb 9 19:02:17.133584 env[1309]: time="2024-02-09T19:02:17.133533938Z" level=info msg="CreateContainer within sandbox \"c4651663b0fb4650823578e06ec19d8de0ac74e9a10fbdddabff754fcacf7696\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Feb 9 19:02:17.164722 kubelet[2053]: E0209 19:02:17.164663 2053 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-075ad2fc80?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="1.6s"
Feb 9 19:02:17.261873 env[1309]: time="2024-02-09T19:02:17.261820000Z" level=info msg="StartContainer for \"787525e3cdab18bf3bc3d07d5507416652d41c98d21b2bb162d64dc7fffb01d0\" returns successfully"
Feb 9 19:02:17.270131 kubelet[2053]: I0209 19:02:17.269648 2053 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:17.270131 kubelet[2053]: E0209 19:02:17.270110 2053 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:17.655143 kubelet[2053]: W0209 19:02:17.307867 2053 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:17.655143 kubelet[2053]: E0209 19:02:17.307919 2053 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:17.670978 env[1309]: time="2024-02-09T19:02:17.670890233Z" level=info msg="CreateContainer within sandbox \"73c9d12943ee948f2cb5cbcce5caa596bff05adc6d4c85f649b3210570565209\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5527a61209ac8a1ff1b1a96312881d7dac43a62f0941438595b3d36b8b988b5b\""
Feb 9 19:02:17.671636 env[1309]: time="2024-02-09T19:02:17.671599735Z" level=info msg="StartContainer for \"5527a61209ac8a1ff1b1a96312881d7dac43a62f0941438595b3d36b8b988b5b\""
Feb 9 19:02:17.676138 env[1309]: time="2024-02-09T19:02:17.676094344Z" level=info msg="CreateContainer within sandbox \"c4651663b0fb4650823578e06ec19d8de0ac74e9a10fbdddabff754fcacf7696\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b0f9c9f9112685a34a039c79108719cc2abe5d7a830b79296b6e636abfcf6578\""
Feb 9 19:02:17.676642 env[1309]: time="2024-02-09T19:02:17.676607345Z" level=info msg="StartContainer for \"b0f9c9f9112685a34a039c79108719cc2abe5d7a830b79296b6e636abfcf6578\""
Feb 9 19:02:17.709507 systemd[1]: Started cri-containerd-b0f9c9f9112685a34a039c79108719cc2abe5d7a830b79296b6e636abfcf6578.scope.
Feb 9 19:02:17.725210 systemd[1]: Started cri-containerd-5527a61209ac8a1ff1b1a96312881d7dac43a62f0941438595b3d36b8b988b5b.scope.
Feb 9 19:02:17.777895 kubelet[2053]: E0209 19:02:17.777431 2053 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.10:6443: connect: connection refused
Feb 9 19:02:17.792568 systemd[1]: run-containerd-runc-k8s.io-ca756f50e7042daa612fe0c4566b2db8631352e030f10032bfd7bf560020565a-runc.dJ9I4B.mount: Deactivated successfully.
Feb 9 19:02:17.816446 env[1309]: time="2024-02-09T19:02:17.816386430Z" level=info msg="StartContainer for \"5527a61209ac8a1ff1b1a96312881d7dac43a62f0941438595b3d36b8b988b5b\" returns successfully"
Feb 9 19:02:17.833054 env[1309]: time="2024-02-09T19:02:17.832985163Z" level=info msg="StartContainer for \"b0f9c9f9112685a34a039c79108719cc2abe5d7a830b79296b6e636abfcf6578\" returns successfully"
Feb 9 19:02:18.872674 kubelet[2053]: I0209 19:02:18.872644 2053 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:19.429520 kubelet[2053]: E0209 19:02:19.429477 2053 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-075ad2fc80\" not found" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:19.446335 kubelet[2053]: I0209 19:02:19.446297 2053 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:19.449161 kubelet[2053]: E0209 19:02:19.449135 2053 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"ci-3510.3.2-a-075ad2fc80\": nodes \"ci-3510.3.2-a-075ad2fc80\" not found"
Feb 9 19:02:19.588198 kubelet[2053]: E0209 19:02:19.588068 2053 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-075ad2fc80.17b2470e5d82772e", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-075ad2fc80", UID:"ci-3510.3.2-a-075ad2fc80", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-075ad2fc80"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 15, 750784814, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 15, 750784814, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-075ad2fc80"}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:02:19.674020 kubelet[2053]: E0209 19:02:19.673902 2053 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-075ad2fc80.17b2470e5de41a8d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-075ad2fc80", UID:"ci-3510.3.2-a-075ad2fc80", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-075ad2fc80"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 15, 757183629, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 15, 757183629, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-075ad2fc80"}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:02:19.750623 kubelet[2053]: I0209 19:02:19.750483 2053 apiserver.go:52] "Watching apiserver"
Feb 9 19:02:19.761139 kubelet[2053]: I0209 19:02:19.761095 2053 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 19:02:19.772741 kubelet[2053]: E0209 19:02:19.772623 2053 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-075ad2fc80.17b2470e6239f516", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-075ad2fc80", UID:"ci-3510.3.2-a-075ad2fc80", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-075ad2fc80 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-075ad2fc80"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 2, 15, 829918998, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 2, 15, 829918998, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"ci-3510.3.2-a-075ad2fc80"}': 'namespaces "default" not found' (will not retry!)
Feb 9 19:02:19.926954 kubelet[2053]: E0209 19:02:19.926909 2053 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-075ad2fc80\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:22.581623 systemd[1]: Reloading.
Feb 9 19:02:22.702076 /usr/lib/systemd/system-generators/torcx-generator[2342]: time="2024-02-09T19:02:22Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:02:22.707845 /usr/lib/systemd/system-generators/torcx-generator[2342]: time="2024-02-09T19:02:22Z" level=info msg="torcx already run"
Feb 9 19:02:22.781749 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:02:22.781979 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:02:22.801951 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:02:23.808706 kubelet[2053]: I0209 19:02:22.918921 2053 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:02:22.920903 systemd[1]: Stopping kubelet.service...
Feb 9 19:02:23.809325 kubelet[2402]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:02:23.809325 kubelet[2402]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:02:23.809325 kubelet[2402]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:02:23.809325 kubelet[2402]: I0209 19:02:23.016855 2402 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:02:23.809325 kubelet[2402]: I0209 19:02:23.023823 2402 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 9 19:02:23.809325 kubelet[2402]: I0209 19:02:23.023839 2402 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:02:23.809325 kubelet[2402]: I0209 19:02:23.024060 2402 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 9 19:02:22.935217 systemd[1]: kubelet.service: Deactivated successfully.
Feb 9 19:02:22.935440 systemd[1]: Stopped kubelet.service.
Feb 9 19:02:22.937635 systemd[1]: Started kubelet.service.
Feb 9 19:02:23.812980 kubelet[2402]: I0209 19:02:23.812953 2402 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Feb 9 19:02:23.817496 kubelet[2402]: I0209 19:02:23.816746 2402 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:02:23.823803 kubelet[2402]: I0209 19:02:23.823777 2402 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:02:23.824055 kubelet[2402]: I0209 19:02:23.824034 2402 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:02:23.824353 kubelet[2402]: I0209 19:02:23.824287 2402 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 9 19:02:23.824505 kubelet[2402]: I0209 19:02:23.824361 2402 topology_manager.go:138] "Creating topology manager with none policy"
Feb 9 19:02:23.824505 kubelet[2402]: I0209 19:02:23.824375 2402 container_manager_linux.go:301] "Creating device plugin manager"
Feb 9 19:02:23.824505 kubelet[2402]: I0209 19:02:23.824421 2402 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:02:23.824638 kubelet[2402]: I0209 19:02:23.824566 2402 kubelet.go:393] "Attempting to sync node with API server"
Feb 9 19:02:23.824638 kubelet[2402]: I0209 19:02:23.824588 2402 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:02:23.829808 kubelet[2402]: I0209 19:02:23.825045 2402 kubelet.go:309] "Adding apiserver pod source"
Feb 9 19:02:23.831066 kubelet[2402]: I0209 19:02:23.831035 2402 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:02:23.833070 kubelet[2402]: I0209 19:02:23.833055 2402 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:02:23.833897 kubelet[2402]: I0209 19:02:23.833881 2402 server.go:1232] "Started kubelet"
Feb 9 19:02:23.837310 kubelet[2402]: I0209 19:02:23.836745 2402 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 9 19:02:23.838218 kubelet[2402]: I0209 19:02:23.838198 2402 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:02:23.842908 kubelet[2402]: I0209 19:02:23.842881 2402 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 9 19:02:23.843002 kubelet[2402]: I0209 19:02:23.842958 2402 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:02:23.843837 kubelet[2402]: I0209 19:02:23.843817 2402 server.go:462] "Adding debug handlers to kubelet server"
Feb 9 19:02:23.845870 kubelet[2402]: E0209 19:02:23.845850 2402 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:02:23.845961 kubelet[2402]: E0209 19:02:23.845881 2402 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:02:23.855294 kubelet[2402]: I0209 19:02:23.855267 2402 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 9 19:02:23.856550 kubelet[2402]: I0209 19:02:23.856524 2402 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:02:23.860789 kubelet[2402]: I0209 19:02:23.857038 2402 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 9 19:02:23.888963 kubelet[2402]: I0209 19:02:23.888931 2402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 9 19:02:23.891275 kubelet[2402]: I0209 19:02:23.891223 2402 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 9 19:02:23.891275 kubelet[2402]: I0209 19:02:23.891268 2402 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 9 19:02:23.891515 kubelet[2402]: I0209 19:02:23.891311 2402 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 9 19:02:23.891515 kubelet[2402]: E0209 19:02:23.891406 2402 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 9 19:02:23.940611 kubelet[2402]: I0209 19:02:23.940580 2402 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:02:23.940886 kubelet[2402]: I0209 19:02:23.940870 2402 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:02:23.941030 kubelet[2402]: I0209 19:02:23.941019 2402 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:02:23.941382 kubelet[2402]: I0209 19:02:23.941364 2402 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Feb 9 19:02:23.941533 kubelet[2402]: I0209 19:02:23.941524 2402 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Feb 9 19:02:23.941588 kubelet[2402]: I0209 19:02:23.941582 2402 policy_none.go:49] "None policy: Start"
Feb 9 19:02:23.942304 kubelet[2402]: I0209 19:02:23.942289 2402 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:02:23.942395 kubelet[2402]: I0209 19:02:23.942388 2402 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:02:23.942543 kubelet[2402]: I0209 19:02:23.942536 2402 state_mem.go:75] "Updated machine memory state"
Feb 9 19:02:23.947138 kubelet[2402]: I0209 19:02:23.947097 2402 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:02:23.947416 kubelet[2402]: I0209 19:02:23.947397 2402 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:02:23.959883 kubelet[2402]: I0209 19:02:23.959859 2402 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:23.973939 kubelet[2402]: I0209 19:02:23.973893 2402 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:23.974199 kubelet[2402]: I0209 19:02:23.974188 2402 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:23.991579 kubelet[2402]: I0209 19:02:23.991545 2402 topology_manager.go:215] "Topology Admit Handler" podUID="43b18734350b624d63fd3a02b2eaee96" podNamespace="kube-system" podName="kube-apiserver-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:23.991802 kubelet[2402]: I0209 19:02:23.991684 2402 topology_manager.go:215] "Topology Admit Handler" podUID="f3f80c2f57ebee28a8f008674f7847f3" podNamespace="kube-system" podName="kube-controller-manager-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:23.991802 kubelet[2402]: I0209 19:02:23.991782 2402 topology_manager.go:215] "Topology Admit Handler" podUID="8c12e7118ddf523434e9ce4366560805" podNamespace="kube-system" podName="kube-scheduler-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:23.997678 kubelet[2402]: W0209 19:02:23.997646 2402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:02:24.001475 kubelet[2402]: W0209 19:02:24.000555 2402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:02:24.003598 kubelet[2402]: W0209 19:02:24.003572 2402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:02:24.060103 kubelet[2402]: I0209 19:02:24.059975 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/43b18734350b624d63fd3a02b2eaee96-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-075ad2fc80\" (UID: \"43b18734350b624d63fd3a02b2eaee96\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:24.060103 kubelet[2402]: I0209 19:02:24.060052 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/43b18734350b624d63fd3a02b2eaee96-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-075ad2fc80\" (UID: \"43b18734350b624d63fd3a02b2eaee96\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:24.060318 kubelet[2402]: I0209 19:02:24.060121 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/f3f80c2f57ebee28a8f008674f7847f3-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-075ad2fc80\" (UID: \"f3f80c2f57ebee28a8f008674f7847f3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:24.060318 kubelet[2402]: I0209 19:02:24.060153 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/f3f80c2f57ebee28a8f008674f7847f3-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-075ad2fc80\" (UID: \"f3f80c2f57ebee28a8f008674f7847f3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:24.060318 kubelet[2402]: I0209 19:02:24.060226 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/f3f80c2f57ebee28a8f008674f7847f3-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-075ad2fc80\" (UID: \"f3f80c2f57ebee28a8f008674f7847f3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:24.060318 kubelet[2402]: I0209 19:02:24.060278 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c12e7118ddf523434e9ce4366560805-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-075ad2fc80\" (UID: \"8c12e7118ddf523434e9ce4366560805\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:24.060318 kubelet[2402]: I0209 19:02:24.060314 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/43b18734350b624d63fd3a02b2eaee96-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-075ad2fc80\" (UID: \"43b18734350b624d63fd3a02b2eaee96\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:24.060527 kubelet[2402]: I0209 19:02:24.060365 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f3f80c2f57ebee28a8f008674f7847f3-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-075ad2fc80\" (UID: \"f3f80c2f57ebee28a8f008674f7847f3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:24.060527 kubelet[2402]: I0209 19:02:24.060435 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/f3f80c2f57ebee28a8f008674f7847f3-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-075ad2fc80\" (UID: \"f3f80c2f57ebee28a8f008674f7847f3\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:24.831903 kubelet[2402]: I0209 19:02:24.831864 2402 apiserver.go:52] "Watching apiserver"
Feb 9 19:02:24.857813 kubelet[2402]: I0209 19:02:24.857776 2402 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 19:02:24.925705 kubelet[2402]: W0209 19:02:24.925668 2402 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots]
Feb 9 19:02:24.925902 kubelet[2402]: E0209 19:02:24.925770 2402 kubelet.go:1890] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-075ad2fc80\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-075ad2fc80"
Feb 9 19:02:24.941579 kubelet[2402]: I0209 19:02:24.941544 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-075ad2fc80" podStartSLOduration=1.941499138 podCreationTimestamp="2024-02-09 19:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:24.933082327 +0000 UTC m=+1.989822979" watchObservedRunningTime="2024-02-09 19:02:24.941499138 +0000 UTC m=+1.998239890"
Feb 9 19:02:24.941947 kubelet[2402]: I0209 19:02:24.941929 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-075ad2fc80" podStartSLOduration=0.941896038 podCreationTimestamp="2024-02-09 19:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:24.941499938 +0000 UTC m=+1.998240690" watchObservedRunningTime="2024-02-09 19:02:24.941896038 +0000 UTC m=+1.998636690"
Feb 9 19:02:24.954545 kubelet[2402]: I0209 19:02:24.954512 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-075ad2fc80" podStartSLOduration=1.954459454 podCreationTimestamp="2024-02-09 19:02:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:24.947279345 +0000 UTC m=+2.004019997" watchObservedRunningTime="2024-02-09 19:02:24.954459454 +0000 UTC m=+2.011200206"
Feb 9 19:02:25.598561 sudo[1578]: pam_unix(sudo:session): session closed for user root
Feb 9 19:02:26.511677 sshd[1571]: pam_unix(sshd:session): session closed for user core
Feb 9 19:02:26.515169 systemd[1]: sshd@4-10.200.8.10:22-10.200.12.6:43880.service: Deactivated successfully.
Feb 9 19:02:26.516082 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 19:02:26.516282 systemd[1]: session-7.scope: Consumed 3.010s CPU time.
Feb 9 19:02:26.517027 systemd-logind[1295]: Session 7 logged out. Waiting for processes to exit.
Feb 9 19:02:26.517910 systemd-logind[1295]: Removed session 7.
Feb 9 19:02:36.642740 kubelet[2402]: I0209 19:02:36.642708 2402 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Feb 9 19:02:36.644098 env[1309]: time="2024-02-09T19:02:36.644046587Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 19:02:36.644746 kubelet[2402]: I0209 19:02:36.644721 2402 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Feb 9 19:02:37.320557 kubelet[2402]: I0209 19:02:37.320514 2402 topology_manager.go:215] "Topology Admit Handler" podUID="d6c80a0a-c623-4f40-aa5d-4488f6153871" podNamespace="kube-system" podName="kube-proxy-z4vxr"
Feb 9 19:02:37.322302 kubelet[2402]: I0209 19:02:37.322282 2402 topology_manager.go:215] "Topology Admit Handler" podUID="6e108d9e-a0b9-4a24-ae5a-41707cd660d0" podNamespace="kube-flannel" podName="kube-flannel-ds-55qfw"
Feb 9 19:02:37.328154 systemd[1]: Created slice kubepods-besteffort-podd6c80a0a_c623_4f40_aa5d_4488f6153871.slice.
Feb 9 19:02:37.329621 kubelet[2402]: W0209 19:02:37.329600 2402 reflector.go:535] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-075ad2fc80" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-075ad2fc80' and this object
Feb 9 19:02:37.329807 kubelet[2402]: E0209 19:02:37.329792 2402 reflector.go:147] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-075ad2fc80" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-075ad2fc80' and this object
Feb 9 19:02:37.330118 kubelet[2402]: W0209 19:02:37.330099 2402 reflector.go:535] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-a-075ad2fc80" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-075ad2fc80' and this object
Feb 9 19:02:37.330245 kubelet[2402]: E0209 19:02:37.330234 2402 reflector.go:147] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-a-075ad2fc80" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-075ad2fc80' and this object
Feb 9 19:02:37.330527 kubelet[2402]: W0209 19:02:37.330505 2402 reflector.go:535] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-3510.3.2-a-075ad2fc80" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3510.3.2-a-075ad2fc80' and this object
Feb 9 19:02:37.330622 kubelet[2402]: E0209 19:02:37.330534 2402 reflector.go:147] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-3510.3.2-a-075ad2fc80" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3510.3.2-a-075ad2fc80' and this object
Feb 9 19:02:37.330788 kubelet[2402]: W0209 19:02:37.330742 2402 reflector.go:535] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-075ad2fc80" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3510.3.2-a-075ad2fc80' and this object
Feb 9 19:02:37.330922 kubelet[2402]: E0209 19:02:37.330909 2402 reflector.go:147] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-075ad2fc80" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3510.3.2-a-075ad2fc80' and this object
Feb 9 19:02:37.338848 systemd[1]: Created slice kubepods-burstable-pod6e108d9e_a0b9_4a24_ae5a_41707cd660d0.slice.
Feb 9 19:02:37.347151 kubelet[2402]: I0209 19:02:37.347121 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d6c80a0a-c623-4f40-aa5d-4488f6153871-lib-modules\") pod \"kube-proxy-z4vxr\" (UID: \"d6c80a0a-c623-4f40-aa5d-4488f6153871\") " pod="kube-system/kube-proxy-z4vxr"
Feb 9 19:02:37.347289 kubelet[2402]: I0209 19:02:37.347177 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6e108d9e-a0b9-4a24-ae5a-41707cd660d0-run\") pod \"kube-flannel-ds-55qfw\" (UID: \"6e108d9e-a0b9-4a24-ae5a-41707cd660d0\") " pod="kube-flannel/kube-flannel-ds-55qfw"
Feb 9 19:02:37.347289 kubelet[2402]: I0209 19:02:37.347203 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d6c80a0a-c623-4f40-aa5d-4488f6153871-kube-proxy\") pod \"kube-proxy-z4vxr\" (UID: \"d6c80a0a-c623-4f40-aa5d-4488f6153871\") " pod="kube-system/kube-proxy-z4vxr"
Feb 9 19:02:37.347289 kubelet[2402]: I0209 19:02:37.347227 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d6c80a0a-c623-4f40-aa5d-4488f6153871-xtables-lock\") pod \"kube-proxy-z4vxr\" (UID: \"d6c80a0a-c623-4f40-aa5d-4488f6153871\") " pod="kube-system/kube-proxy-z4vxr"
Feb 9 19:02:37.347289 kubelet[2402]: I0209 19:02:37.347268 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5qn6\" (UniqueName: \"kubernetes.io/projected/d6c80a0a-c623-4f40-aa5d-4488f6153871-kube-api-access-x5qn6\") pod \"kube-proxy-z4vxr\" (UID: \"d6c80a0a-c623-4f40-aa5d-4488f6153871\") " pod="kube-system/kube-proxy-z4vxr"
Feb 9 19:02:37.347469 kubelet[2402]: I0209 19:02:37.347293 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k4zd2\" (UniqueName: \"kubernetes.io/projected/6e108d9e-a0b9-4a24-ae5a-41707cd660d0-kube-api-access-k4zd2\") pod \"kube-flannel-ds-55qfw\" (UID: \"6e108d9e-a0b9-4a24-ae5a-41707cd660d0\") " pod="kube-flannel/kube-flannel-ds-55qfw"
Feb 9 19:02:37.347469 kubelet[2402]: I0209 19:02:37.347334 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/6e108d9e-a0b9-4a24-ae5a-41707cd660d0-cni-plugin\") pod \"kube-flannel-ds-55qfw\" (UID: \"6e108d9e-a0b9-4a24-ae5a-41707cd660d0\") " pod="kube-flannel/kube-flannel-ds-55qfw"
Feb 9 19:02:37.347469 kubelet[2402]: I0209 19:02:37.347363 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/6e108d9e-a0b9-4a24-ae5a-41707cd660d0-cni\") pod \"kube-flannel-ds-55qfw\" (UID: \"6e108d9e-a0b9-4a24-ae5a-41707cd660d0\") " pod="kube-flannel/kube-flannel-ds-55qfw"
Feb 9 19:02:37.347469 kubelet[2402]: I0209 19:02:37.347405 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/6e108d9e-a0b9-4a24-ae5a-41707cd660d0-flannel-cfg\") pod \"kube-flannel-ds-55qfw\" (UID: \"6e108d9e-a0b9-4a24-ae5a-41707cd660d0\") " pod="kube-flannel/kube-flannel-ds-55qfw"
Feb 9 19:02:37.347469 kubelet[2402]: I0209 19:02:37.347437 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/6e108d9e-a0b9-4a24-ae5a-41707cd660d0-xtables-lock\") pod \"kube-flannel-ds-55qfw\" (UID: \"6e108d9e-a0b9-4a24-ae5a-41707cd660d0\") "
pod="kube-flannel/kube-flannel-ds-55qfw" Feb 9 19:02:38.449180 kubelet[2402]: E0209 19:02:38.449132 2402 configmap.go:199] Couldn't get configMap kube-flannel/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:02:38.450101 kubelet[2402]: E0209 19:02:38.449261 2402 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6e108d9e-a0b9-4a24-ae5a-41707cd660d0-flannel-cfg podName:6e108d9e-a0b9-4a24-ae5a-41707cd660d0 nodeName:}" failed. No retries permitted until 2024-02-09 19:02:38.949231196 +0000 UTC m=+16.005971948 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/6e108d9e-a0b9-4a24-ae5a-41707cd660d0-flannel-cfg") pod "kube-flannel-ds-55qfw" (UID: "6e108d9e-a0b9-4a24-ae5a-41707cd660d0") : failed to sync configmap cache: timed out waiting for the condition Feb 9 19:02:38.464196 kubelet[2402]: E0209 19:02:38.464154 2402 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:02:38.464196 kubelet[2402]: E0209 19:02:38.464198 2402 projected.go:198] Error preparing data for projected volume kube-api-access-x5qn6 for pod kube-system/kube-proxy-z4vxr: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:02:38.464410 kubelet[2402]: E0209 19:02:38.464284 2402 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/d6c80a0a-c623-4f40-aa5d-4488f6153871-kube-api-access-x5qn6 podName:d6c80a0a-c623-4f40-aa5d-4488f6153871 nodeName:}" failed. No retries permitted until 2024-02-09 19:02:38.964262004 +0000 UTC m=+16.021002756 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-x5qn6" (UniqueName: "kubernetes.io/projected/d6c80a0a-c623-4f40-aa5d-4488f6153871-kube-api-access-x5qn6") pod "kube-proxy-z4vxr" (UID: "d6c80a0a-c623-4f40-aa5d-4488f6153871") : failed to sync configmap cache: timed out waiting for the condition Feb 9 19:02:38.466764 kubelet[2402]: E0209 19:02:38.466724 2402 projected.go:292] Couldn't get configMap kube-flannel/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:02:38.466902 kubelet[2402]: E0209 19:02:38.466776 2402 projected.go:198] Error preparing data for projected volume kube-api-access-k4zd2 for pod kube-flannel/kube-flannel-ds-55qfw: failed to sync configmap cache: timed out waiting for the condition Feb 9 19:02:38.466902 kubelet[2402]: E0209 19:02:38.466828 2402 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/6e108d9e-a0b9-4a24-ae5a-41707cd660d0-kube-api-access-k4zd2 podName:6e108d9e-a0b9-4a24-ae5a-41707cd660d0 nodeName:}" failed. No retries permitted until 2024-02-09 19:02:38.966811406 +0000 UTC m=+16.023552058 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-k4zd2" (UniqueName: "kubernetes.io/projected/6e108d9e-a0b9-4a24-ae5a-41707cd660d0-kube-api-access-k4zd2") pod "kube-flannel-ds-55qfw" (UID: "6e108d9e-a0b9-4a24-ae5a-41707cd660d0") : failed to sync configmap cache: timed out waiting for the condition Feb 9 19:02:39.136654 env[1309]: time="2024-02-09T19:02:39.136609453Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4vxr,Uid:d6c80a0a-c623-4f40-aa5d-4488f6153871,Namespace:kube-system,Attempt:0,}" Feb 9 19:02:39.143848 env[1309]: time="2024-02-09T19:02:39.143808857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-55qfw,Uid:6e108d9e-a0b9-4a24-ae5a-41707cd660d0,Namespace:kube-flannel,Attempt:0,}" Feb 9 19:02:39.174382 env[1309]: time="2024-02-09T19:02:39.174307672Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:02:39.174382 env[1309]: time="2024-02-09T19:02:39.174353872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:02:39.174687 env[1309]: time="2024-02-09T19:02:39.174367572Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:02:39.174818 env[1309]: time="2024-02-09T19:02:39.174719172Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/481bb15c0f2ca9a2df4c373056daa484ac54252cc366abb876193b8efacde2cd pid=2468 runtime=io.containerd.runc.v2 Feb 9 19:02:39.194892 systemd[1]: Started cri-containerd-481bb15c0f2ca9a2df4c373056daa484ac54252cc366abb876193b8efacde2cd.scope. Feb 9 19:02:39.206093 env[1309]: time="2024-02-09T19:02:39.206026887Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:02:39.206308 env[1309]: time="2024-02-09T19:02:39.206062687Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:02:39.206308 env[1309]: time="2024-02-09T19:02:39.206077787Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:02:39.206308 env[1309]: time="2024-02-09T19:02:39.206248787Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/1e6f3bd041423d08987b962c8ea41c7b55fdf8c023574bc1fb2e29dcfc5b2111 pid=2501 runtime=io.containerd.runc.v2 Feb 9 19:02:39.225140 systemd[1]: Started cri-containerd-1e6f3bd041423d08987b962c8ea41c7b55fdf8c023574bc1fb2e29dcfc5b2111.scope. Feb 9 19:02:39.242462 env[1309]: time="2024-02-09T19:02:39.242413505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-z4vxr,Uid:d6c80a0a-c623-4f40-aa5d-4488f6153871,Namespace:kube-system,Attempt:0,} returns sandbox id \"481bb15c0f2ca9a2df4c373056daa484ac54252cc366abb876193b8efacde2cd\"" Feb 9 19:02:39.249228 env[1309]: time="2024-02-09T19:02:39.249179809Z" level=info msg="CreateContainer within sandbox \"481bb15c0f2ca9a2df4c373056daa484ac54252cc366abb876193b8efacde2cd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:02:39.281690 env[1309]: time="2024-02-09T19:02:39.281645625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-55qfw,Uid:6e108d9e-a0b9-4a24-ae5a-41707cd660d0,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"1e6f3bd041423d08987b962c8ea41c7b55fdf8c023574bc1fb2e29dcfc5b2111\"" Feb 9 19:02:39.285639 env[1309]: time="2024-02-09T19:02:39.285115726Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Feb 9 19:02:39.302713 env[1309]: time="2024-02-09T19:02:39.302678035Z" level=info 
msg="CreateContainer within sandbox \"481bb15c0f2ca9a2df4c373056daa484ac54252cc366abb876193b8efacde2cd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"4d7c9c6fe6c71f82e15f5470274f682218da0df4fae60684700d82c51825fc97\"" Feb 9 19:02:39.304968 env[1309]: time="2024-02-09T19:02:39.304936236Z" level=info msg="StartContainer for \"4d7c9c6fe6c71f82e15f5470274f682218da0df4fae60684700d82c51825fc97\"" Feb 9 19:02:39.322698 systemd[1]: Started cri-containerd-4d7c9c6fe6c71f82e15f5470274f682218da0df4fae60684700d82c51825fc97.scope. Feb 9 19:02:39.369343 env[1309]: time="2024-02-09T19:02:39.369289268Z" level=info msg="StartContainer for \"4d7c9c6fe6c71f82e15f5470274f682218da0df4fae60684700d82c51825fc97\" returns successfully" Feb 9 19:02:39.957528 kubelet[2402]: I0209 19:02:39.957489 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-z4vxr" podStartSLOduration=2.957441657 podCreationTimestamp="2024-02-09 19:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:39.957182457 +0000 UTC m=+17.013923209" watchObservedRunningTime="2024-02-09 19:02:39.957441657 +0000 UTC m=+17.014182409" Feb 9 19:02:41.247310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1311869675.mount: Deactivated successfully. 
Feb 9 19:02:41.337293 env[1309]: time="2024-02-09T19:02:41.337239086Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:41.342470 env[1309]: time="2024-02-09T19:02:41.342424688Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:41.345624 env[1309]: time="2024-02-09T19:02:41.345586190Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel-cni-plugin:v1.1.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:41.349485 env[1309]: time="2024-02-09T19:02:41.349448091Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:41.350053 env[1309]: time="2024-02-09T19:02:41.350019992Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Feb 9 19:02:41.353812 env[1309]: time="2024-02-09T19:02:41.353774293Z" level=info msg="CreateContainer within sandbox \"1e6f3bd041423d08987b962c8ea41c7b55fdf8c023574bc1fb2e29dcfc5b2111\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 9 19:02:41.371363 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3369580583.mount: Deactivated successfully. 
Feb 9 19:02:41.385872 env[1309]: time="2024-02-09T19:02:41.385823607Z" level=info msg="CreateContainer within sandbox \"1e6f3bd041423d08987b962c8ea41c7b55fdf8c023574bc1fb2e29dcfc5b2111\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"a7e400cd20996847be3e20b9d26e25349c5598c3d701b271eeea25e4b6ca1395\"" Feb 9 19:02:41.388689 env[1309]: time="2024-02-09T19:02:41.386572408Z" level=info msg="StartContainer for \"a7e400cd20996847be3e20b9d26e25349c5598c3d701b271eeea25e4b6ca1395\"" Feb 9 19:02:41.405480 systemd[1]: Started cri-containerd-a7e400cd20996847be3e20b9d26e25349c5598c3d701b271eeea25e4b6ca1395.scope. Feb 9 19:02:41.433687 systemd[1]: cri-containerd-a7e400cd20996847be3e20b9d26e25349c5598c3d701b271eeea25e4b6ca1395.scope: Deactivated successfully. Feb 9 19:02:41.436019 env[1309]: time="2024-02-09T19:02:41.435778529Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6e108d9e_a0b9_4a24_ae5a_41707cd660d0.slice/cri-containerd-a7e400cd20996847be3e20b9d26e25349c5598c3d701b271eeea25e4b6ca1395.scope/memory.events\": no such file or directory" Feb 9 19:02:41.440945 env[1309]: time="2024-02-09T19:02:41.440875031Z" level=info msg="StartContainer for \"a7e400cd20996847be3e20b9d26e25349c5598c3d701b271eeea25e4b6ca1395\" returns successfully" Feb 9 19:02:41.558026 env[1309]: time="2024-02-09T19:02:41.557960982Z" level=info msg="shim disconnected" id=a7e400cd20996847be3e20b9d26e25349c5598c3d701b271eeea25e4b6ca1395 Feb 9 19:02:41.558026 env[1309]: time="2024-02-09T19:02:41.558024982Z" level=warning msg="cleaning up after shim disconnected" id=a7e400cd20996847be3e20b9d26e25349c5598c3d701b271eeea25e4b6ca1395 namespace=k8s.io Feb 9 19:02:41.558375 env[1309]: time="2024-02-09T19:02:41.558041182Z" level=info msg="cleaning up dead shim" Feb 9 19:02:41.566864 env[1309]: time="2024-02-09T19:02:41.566819786Z" level=warning 
msg="cleanup warnings time=\"2024-02-09T19:02:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2747 runtime=io.containerd.runc.v2\n" Feb 9 19:02:41.956636 env[1309]: time="2024-02-09T19:02:41.956481454Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Feb 9 19:02:43.903159 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2627830900.mount: Deactivated successfully. Feb 9 19:02:44.919889 env[1309]: time="2024-02-09T19:02:44.919837888Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:44.928295 env[1309]: time="2024-02-09T19:02:44.928225191Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:44.935427 env[1309]: time="2024-02-09T19:02:44.935376993Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/flannel/flannel:v0.22.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:44.940325 env[1309]: time="2024-02-09T19:02:44.940286495Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:02:44.940943 env[1309]: time="2024-02-09T19:02:44.940907695Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Feb 9 19:02:44.944208 env[1309]: time="2024-02-09T19:02:44.944127296Z" level=info msg="CreateContainer within sandbox \"1e6f3bd041423d08987b962c8ea41c7b55fdf8c023574bc1fb2e29dcfc5b2111\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 9 19:02:44.973747 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2032724663.mount: Deactivated successfully. Feb 9 19:02:44.987018 env[1309]: time="2024-02-09T19:02:44.986966912Z" level=info msg="CreateContainer within sandbox \"1e6f3bd041423d08987b962c8ea41c7b55fdf8c023574bc1fb2e29dcfc5b2111\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a082e1a4cb22231626722aadbcc0be31d8d8347e90429048d92a2bf68cb8d667\"" Feb 9 19:02:44.989470 env[1309]: time="2024-02-09T19:02:44.987706212Z" level=info msg="StartContainer for \"a082e1a4cb22231626722aadbcc0be31d8d8347e90429048d92a2bf68cb8d667\"" Feb 9 19:02:45.013533 systemd[1]: Started cri-containerd-a082e1a4cb22231626722aadbcc0be31d8d8347e90429048d92a2bf68cb8d667.scope. Feb 9 19:02:45.043006 systemd[1]: cri-containerd-a082e1a4cb22231626722aadbcc0be31d8d8347e90429048d92a2bf68cb8d667.scope: Deactivated successfully. Feb 9 19:02:45.047713 env[1309]: time="2024-02-09T19:02:45.047671232Z" level=info msg="StartContainer for \"a082e1a4cb22231626722aadbcc0be31d8d8347e90429048d92a2bf68cb8d667\" returns successfully" Feb 9 19:02:45.108046 kubelet[2402]: I0209 19:02:45.107718 2402 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:02:45.126959 kubelet[2402]: I0209 19:02:45.126918 2402 topology_manager.go:215] "Topology Admit Handler" podUID="4e2f65ce-9427-4cce-bfa0-027dcc8ecc9f" podNamespace="kube-system" podName="coredns-5dd5756b68-78tdw" Feb 9 19:02:45.129320 kubelet[2402]: I0209 19:02:45.128713 2402 topology_manager.go:215] "Topology Admit Handler" podUID="ea0db11b-d6f4-44ed-b571-317a69b48273" podNamespace="kube-system" podName="coredns-5dd5756b68-ndxhv" Feb 9 19:02:45.133167 systemd[1]: Created slice kubepods-burstable-pod4e2f65ce_9427_4cce_bfa0_027dcc8ecc9f.slice. Feb 9 19:02:45.139597 systemd[1]: Created slice kubepods-burstable-podea0db11b_d6f4_44ed_b571_317a69b48273.slice. 
Feb 9 19:02:45.301794 kubelet[2402]: I0209 19:02:45.301727 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/4e2f65ce-9427-4cce-bfa0-027dcc8ecc9f-config-volume\") pod \"coredns-5dd5756b68-78tdw\" (UID: \"4e2f65ce-9427-4cce-bfa0-027dcc8ecc9f\") " pod="kube-system/coredns-5dd5756b68-78tdw" Feb 9 19:02:45.302018 kubelet[2402]: I0209 19:02:45.301818 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ea0db11b-d6f4-44ed-b571-317a69b48273-config-volume\") pod \"coredns-5dd5756b68-ndxhv\" (UID: \"ea0db11b-d6f4-44ed-b571-317a69b48273\") " pod="kube-system/coredns-5dd5756b68-ndxhv" Feb 9 19:02:45.302018 kubelet[2402]: I0209 19:02:45.301869 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dp97x\" (UniqueName: \"kubernetes.io/projected/ea0db11b-d6f4-44ed-b571-317a69b48273-kube-api-access-dp97x\") pod \"coredns-5dd5756b68-ndxhv\" (UID: \"ea0db11b-d6f4-44ed-b571-317a69b48273\") " pod="kube-system/coredns-5dd5756b68-ndxhv" Feb 9 19:02:45.302018 kubelet[2402]: I0209 19:02:45.301908 2402 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5jtf\" (UniqueName: \"kubernetes.io/projected/4e2f65ce-9427-4cce-bfa0-027dcc8ecc9f-kube-api-access-x5jtf\") pod \"coredns-5dd5756b68-78tdw\" (UID: \"4e2f65ce-9427-4cce-bfa0-027dcc8ecc9f\") " pod="kube-system/coredns-5dd5756b68-78tdw" Feb 9 19:02:45.437458 env[1309]: time="2024-02-09T19:02:45.437402962Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-78tdw,Uid:4e2f65ce-9427-4cce-bfa0-027dcc8ecc9f,Namespace:kube-system,Attempt:0,}" Feb 9 19:02:45.446630 env[1309]: time="2024-02-09T19:02:45.446586566Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-5dd5756b68-ndxhv,Uid:ea0db11b-d6f4-44ed-b571-317a69b48273,Namespace:kube-system,Attempt:0,}" Feb 9 19:02:45.624725 env[1309]: time="2024-02-09T19:02:45.624264725Z" level=info msg="shim disconnected" id=a082e1a4cb22231626722aadbcc0be31d8d8347e90429048d92a2bf68cb8d667 Feb 9 19:02:45.624725 env[1309]: time="2024-02-09T19:02:45.624320925Z" level=warning msg="cleaning up after shim disconnected" id=a082e1a4cb22231626722aadbcc0be31d8d8347e90429048d92a2bf68cb8d667 namespace=k8s.io Feb 9 19:02:45.624725 env[1309]: time="2024-02-09T19:02:45.624332625Z" level=info msg="cleaning up dead shim" Feb 9 19:02:45.633370 env[1309]: time="2024-02-09T19:02:45.633322828Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:02:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2808 runtime=io.containerd.runc.v2\n" Feb 9 19:02:45.683590 env[1309]: time="2024-02-09T19:02:45.683508445Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-ndxhv,Uid:ea0db11b-d6f4-44ed-b571-317a69b48273,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"bd0ebccbccd5d844e5bb3e33940fbd6c5dc74e511bea5852d5c53b41b0189e10\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 19:02:45.683849 kubelet[2402]: E0209 19:02:45.683815 2402 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd0ebccbccd5d844e5bb3e33940fbd6c5dc74e511bea5852d5c53b41b0189e10\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 19:02:45.683947 kubelet[2402]: E0209 19:02:45.683876 2402 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"bd0ebccbccd5d844e5bb3e33940fbd6c5dc74e511bea5852d5c53b41b0189e10\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-ndxhv" Feb 9 19:02:45.683947 kubelet[2402]: E0209 19:02:45.683904 2402 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd0ebccbccd5d844e5bb3e33940fbd6c5dc74e511bea5852d5c53b41b0189e10\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-ndxhv" Feb 9 19:02:45.684036 kubelet[2402]: E0209 19:02:45.683965 2402 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-ndxhv_kube-system(ea0db11b-d6f4-44ed-b571-317a69b48273)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-ndxhv_kube-system(ea0db11b-d6f4-44ed-b571-317a69b48273)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd0ebccbccd5d844e5bb3e33940fbd6c5dc74e511bea5852d5c53b41b0189e10\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5dd5756b68-ndxhv" podUID="ea0db11b-d6f4-44ed-b571-317a69b48273" Feb 9 19:02:45.686552 env[1309]: time="2024-02-09T19:02:45.686497846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-78tdw,Uid:4e2f65ce-9427-4cce-bfa0-027dcc8ecc9f,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"414f59dfb9768c3e4283f8ac3c68ebee86c9e84f4d9ac7317978ca888a3043f7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 19:02:45.686876 kubelet[2402]: E0209 19:02:45.686853 2402 remote_runtime.go:193] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"414f59dfb9768c3e4283f8ac3c68ebee86c9e84f4d9ac7317978ca888a3043f7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Feb 9 19:02:45.687002 kubelet[2402]: E0209 19:02:45.686903 2402 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"414f59dfb9768c3e4283f8ac3c68ebee86c9e84f4d9ac7317978ca888a3043f7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-78tdw" Feb 9 19:02:45.687002 kubelet[2402]: E0209 19:02:45.686927 2402 kuberuntime_manager.go:1119] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"414f59dfb9768c3e4283f8ac3c68ebee86c9e84f4d9ac7317978ca888a3043f7\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-5dd5756b68-78tdw" Feb 9 19:02:45.687002 kubelet[2402]: E0209 19:02:45.686989 2402 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-5dd5756b68-78tdw_kube-system(4e2f65ce-9427-4cce-bfa0-027dcc8ecc9f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-5dd5756b68-78tdw_kube-system(4e2f65ce-9427-4cce-bfa0-027dcc8ecc9f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"414f59dfb9768c3e4283f8ac3c68ebee86c9e84f4d9ac7317978ca888a3043f7\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-5dd5756b68-78tdw" podUID="4e2f65ce-9427-4cce-bfa0-027dcc8ecc9f" Feb 9 19:02:45.973983 systemd[1]: 
run-containerd-runc-k8s.io-a082e1a4cb22231626722aadbcc0be31d8d8347e90429048d92a2bf68cb8d667-runc.L8j4Vl.mount: Deactivated successfully. Feb 9 19:02:45.974101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a082e1a4cb22231626722aadbcc0be31d8d8347e90429048d92a2bf68cb8d667-rootfs.mount: Deactivated successfully. Feb 9 19:02:45.983179 env[1309]: time="2024-02-09T19:02:45.980418144Z" level=info msg="CreateContainer within sandbox \"1e6f3bd041423d08987b962c8ea41c7b55fdf8c023574bc1fb2e29dcfc5b2111\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 9 19:02:46.013322 env[1309]: time="2024-02-09T19:02:46.013269255Z" level=info msg="CreateContainer within sandbox \"1e6f3bd041423d08987b962c8ea41c7b55fdf8c023574bc1fb2e29dcfc5b2111\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"6433c89ba899e1e0665b0992495e2d199d6855980342037b94e583d2b0d89208\"" Feb 9 19:02:46.014182 env[1309]: time="2024-02-09T19:02:46.014143755Z" level=info msg="StartContainer for \"6433c89ba899e1e0665b0992495e2d199d6855980342037b94e583d2b0d89208\"" Feb 9 19:02:46.038103 systemd[1]: Started cri-containerd-6433c89ba899e1e0665b0992495e2d199d6855980342037b94e583d2b0d89208.scope. Feb 9 19:02:46.075523 env[1309]: time="2024-02-09T19:02:46.075468374Z" level=info msg="StartContainer for \"6433c89ba899e1e0665b0992495e2d199d6855980342037b94e583d2b0d89208\" returns successfully" Feb 9 19:02:46.971393 systemd[1]: run-containerd-runc-k8s.io-6433c89ba899e1e0665b0992495e2d199d6855980342037b94e583d2b0d89208-runc.MgLsNS.mount: Deactivated successfully. 
Feb 9 19:02:47.342229 systemd-networkd[1456]: flannel.1: Link UP Feb 9 19:02:47.342240 systemd-networkd[1456]: flannel.1: Gained carrier Feb 9 19:02:49.146004 systemd-networkd[1456]: flannel.1: Gained IPv6LL Feb 9 19:02:57.892709 env[1309]: time="2024-02-09T19:02:57.892638008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-78tdw,Uid:4e2f65ce-9427-4cce-bfa0-027dcc8ecc9f,Namespace:kube-system,Attempt:0,}" Feb 9 19:02:57.946048 systemd-networkd[1456]: cni0: Link UP Feb 9 19:02:57.946058 systemd-networkd[1456]: cni0: Gained carrier Feb 9 19:02:57.949005 systemd-networkd[1456]: cni0: Lost carrier Feb 9 19:02:57.991988 systemd-networkd[1456]: veth1d4df010: Link UP Feb 9 19:02:57.998877 kernel: cni0: port 1(veth1d4df010) entered blocking state Feb 9 19:02:57.998985 kernel: cni0: port 1(veth1d4df010) entered disabled state Feb 9 19:02:58.002235 kernel: device veth1d4df010 entered promiscuous mode Feb 9 19:02:58.009604 kernel: cni0: port 1(veth1d4df010) entered blocking state Feb 9 19:02:58.009811 kernel: cni0: port 1(veth1d4df010) entered forwarding state Feb 9 19:02:58.009869 kernel: cni0: port 1(veth1d4df010) entered disabled state Feb 9 19:02:58.026333 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth1d4df010: link becomes ready Feb 9 19:02:58.026450 kernel: cni0: port 1(veth1d4df010) entered blocking state Feb 9 19:02:58.026474 kernel: cni0: port 1(veth1d4df010) entered forwarding state Feb 9 19:02:58.026113 systemd-networkd[1456]: veth1d4df010: Gained carrier Feb 9 19:02:58.027301 systemd-networkd[1456]: cni0: Gained carrier Feb 9 19:02:58.029205 env[1309]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, 
"isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} Feb 9 19:02:58.029205 env[1309]: delegateAdd: netconf sent to delegate plugin: Feb 9 19:02:58.048367 env[1309]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T19:02:58.048002203Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:02:58.048367 env[1309]: time="2024-02-09T19:02:58.048044404Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:02:58.048367 env[1309]: time="2024-02-09T19:02:58.048059604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:02:58.048367 env[1309]: time="2024-02-09T19:02:58.048196206Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/beeffc4717f64625b1db12a69da931d744c4fd7ffaefbf1ef2357c6abff5e9c9 pid=3075 runtime=io.containerd.runc.v2 Feb 9 19:02:58.074302 systemd[1]: Started cri-containerd-beeffc4717f64625b1db12a69da931d744c4fd7ffaefbf1ef2357c6abff5e9c9.scope. 
Feb 9 19:02:58.115088 env[1309]: time="2024-02-09T19:02:58.115042090Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-78tdw,Uid:4e2f65ce-9427-4cce-bfa0-027dcc8ecc9f,Namespace:kube-system,Attempt:0,} returns sandbox id \"beeffc4717f64625b1db12a69da931d744c4fd7ffaefbf1ef2357c6abff5e9c9\"" Feb 9 19:02:58.119688 env[1309]: time="2024-02-09T19:02:58.119352747Z" level=info msg="CreateContainer within sandbox \"beeffc4717f64625b1db12a69da931d744c4fd7ffaefbf1ef2357c6abff5e9c9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:02:58.153487 env[1309]: time="2024-02-09T19:02:58.153383598Z" level=info msg="CreateContainer within sandbox \"beeffc4717f64625b1db12a69da931d744c4fd7ffaefbf1ef2357c6abff5e9c9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"46ad85b4b982a78039bb0fc5739c80939cbfbab00253c63257b828e4606c4584\"" Feb 9 19:02:58.155936 env[1309]: time="2024-02-09T19:02:58.155895831Z" level=info msg="StartContainer for \"46ad85b4b982a78039bb0fc5739c80939cbfbab00253c63257b828e4606c4584\"" Feb 9 19:02:58.172616 systemd[1]: Started cri-containerd-46ad85b4b982a78039bb0fc5739c80939cbfbab00253c63257b828e4606c4584.scope. 
Feb 9 19:02:58.207421 env[1309]: time="2024-02-09T19:02:58.207366412Z" level=info msg="StartContainer for \"46ad85b4b982a78039bb0fc5739c80939cbfbab00253c63257b828e4606c4584\" returns successfully" Feb 9 19:02:59.012116 kubelet[2402]: I0209 19:02:59.012080 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-55qfw" podStartSLOduration=16.354392884 podCreationTimestamp="2024-02-09 19:02:37 +0000 UTC" firstStartedPulling="2024-02-09 19:02:39.283615925 +0000 UTC m=+16.340356577" lastFinishedPulling="2024-02-09 19:02:44.941260595 +0000 UTC m=+21.998001347" observedRunningTime="2024-02-09 19:02:46.988179712 +0000 UTC m=+24.044920364" watchObservedRunningTime="2024-02-09 19:02:59.012037654 +0000 UTC m=+36.068778306" Feb 9 19:02:59.012793 kubelet[2402]: I0209 19:02:59.012770 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-78tdw" podStartSLOduration=22.012722663 podCreationTimestamp="2024-02-09 19:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:02:59.011649449 +0000 UTC m=+36.068390101" watchObservedRunningTime="2024-02-09 19:02:59.012722663 +0000 UTC m=+36.069463315" Feb 9 19:02:59.129985 systemd-networkd[1456]: cni0: Gained IPv6LL Feb 9 19:02:59.449930 systemd-networkd[1456]: veth1d4df010: Gained IPv6LL Feb 9 19:02:59.892940 env[1309]: time="2024-02-09T19:02:59.892875899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-ndxhv,Uid:ea0db11b-d6f4-44ed-b571-317a69b48273,Namespace:kube-system,Attempt:0,}" Feb 9 19:02:59.953738 systemd-networkd[1456]: veth1c12cb5d: Link UP Feb 9 19:02:59.961473 kernel: cni0: port 2(veth1c12cb5d) entered blocking state Feb 9 19:02:59.961571 kernel: cni0: port 2(veth1c12cb5d) entered disabled state Feb 9 19:02:59.961601 kernel: device veth1c12cb5d entered promiscuous mode Feb 9 19:02:59.979780 
kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:02:59.979938 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth1c12cb5d: link becomes ready Feb 9 19:02:59.979969 kernel: cni0: port 2(veth1c12cb5d) entered blocking state Feb 9 19:02:59.984106 kernel: cni0: port 2(veth1c12cb5d) entered forwarding state Feb 9 19:02:59.984222 systemd-networkd[1456]: veth1c12cb5d: Gained carrier Feb 9 19:02:59.987347 env[1309]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000b08e8), "name":"cbr0", "type":"bridge"} Feb 9 19:02:59.987347 env[1309]: delegateAdd: netconf sent to delegate plugin: Feb 9 19:03:00.004506 env[1309]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-09T19:03:00.003940329Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:00.004506 env[1309]: time="2024-02-09T19:03:00.004014830Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:00.004506 env[1309]: time="2024-02-09T19:03:00.004031630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:00.004506 env[1309]: time="2024-02-09T19:03:00.004195732Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/865777b317e8cc1bcdf0fbf0898e90b2d62ea7302448c8e53fa1063014a8839d pid=3188 runtime=io.containerd.runc.v2 Feb 9 19:03:00.029658 systemd[1]: Started cri-containerd-865777b317e8cc1bcdf0fbf0898e90b2d62ea7302448c8e53fa1063014a8839d.scope. Feb 9 19:03:00.084007 env[1309]: time="2024-02-09T19:03:00.083962332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5dd5756b68-ndxhv,Uid:ea0db11b-d6f4-44ed-b571-317a69b48273,Namespace:kube-system,Attempt:0,} returns sandbox id \"865777b317e8cc1bcdf0fbf0898e90b2d62ea7302448c8e53fa1063014a8839d\"" Feb 9 19:03:00.087871 env[1309]: time="2024-02-09T19:03:00.087832181Z" level=info msg="CreateContainer within sandbox \"865777b317e8cc1bcdf0fbf0898e90b2d62ea7302448c8e53fa1063014a8839d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:03:00.122195 env[1309]: time="2024-02-09T19:03:00.122140511Z" level=info msg="CreateContainer within sandbox \"865777b317e8cc1bcdf0fbf0898e90b2d62ea7302448c8e53fa1063014a8839d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"5389207322381e32ba1fc2cb0c30a7b8f860ea7fed15d9b93608f434969a772b\"" Feb 9 19:03:00.124396 env[1309]: time="2024-02-09T19:03:00.122921921Z" level=info msg="StartContainer for \"5389207322381e32ba1fc2cb0c30a7b8f860ea7fed15d9b93608f434969a772b\"" Feb 9 19:03:00.140488 systemd[1]: Started cri-containerd-5389207322381e32ba1fc2cb0c30a7b8f860ea7fed15d9b93608f434969a772b.scope. 
Feb 9 19:03:00.172909 env[1309]: time="2024-02-09T19:03:00.171993836Z" level=info msg="StartContainer for \"5389207322381e32ba1fc2cb0c30a7b8f860ea7fed15d9b93608f434969a772b\" returns successfully" Feb 9 19:03:00.926992 systemd[1]: run-containerd-runc-k8s.io-865777b317e8cc1bcdf0fbf0898e90b2d62ea7302448c8e53fa1063014a8839d-runc.ZyoW8M.mount: Deactivated successfully. Feb 9 19:03:01.016871 kubelet[2402]: I0209 19:03:01.016834 2402 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5dd5756b68-ndxhv" podStartSLOduration=24.016796224 podCreationTimestamp="2024-02-09 19:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:03:01.015979214 +0000 UTC m=+38.072719966" watchObservedRunningTime="2024-02-09 19:03:01.016796224 +0000 UTC m=+38.073536876" Feb 9 19:03:01.753923 systemd-networkd[1456]: veth1c12cb5d: Gained IPv6LL Feb 9 19:03:53.852703 systemd[1]: Started sshd@5-10.200.8.10:22-10.200.12.6:50664.service. Feb 9 19:03:54.471240 sshd[3505]: Accepted publickey for core from 10.200.12.6 port 50664 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:54.472664 sshd[3505]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:54.477966 systemd[1]: Started session-8.scope. Feb 9 19:03:54.478598 systemd-logind[1295]: New session 8 of user core. Feb 9 19:03:54.972850 sshd[3505]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:54.976190 systemd[1]: sshd@5-10.200.8.10:22-10.200.12.6:50664.service: Deactivated successfully. Feb 9 19:03:54.977405 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:03:54.978304 systemd-logind[1295]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:03:54.979120 systemd-logind[1295]: Removed session 8. Feb 9 19:04:00.083636 systemd[1]: Started sshd@6-10.200.8.10:22-10.200.12.6:38340.service. 
Feb 9 19:04:00.703598 sshd[3539]: Accepted publickey for core from 10.200.12.6 port 38340 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:04:00.704964 sshd[3539]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:00.710059 systemd[1]: Started session-9.scope. Feb 9 19:04:00.710647 systemd-logind[1295]: New session 9 of user core. Feb 9 19:04:01.194829 sshd[3539]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:01.198917 systemd[1]: sshd@6-10.200.8.10:22-10.200.12.6:38340.service: Deactivated successfully. Feb 9 19:04:01.199819 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:04:01.200501 systemd-logind[1295]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:04:01.201301 systemd-logind[1295]: Removed session 9. Feb 9 19:04:06.301816 systemd[1]: Started sshd@7-10.200.8.10:22-10.200.12.6:38342.service. Feb 9 19:04:06.927047 sshd[3573]: Accepted publickey for core from 10.200.12.6 port 38342 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:04:06.928490 sshd[3573]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:06.933393 systemd[1]: Started session-10.scope. Feb 9 19:04:06.934037 systemd-logind[1295]: New session 10 of user core. Feb 9 19:04:07.422319 sshd[3573]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:07.425600 systemd[1]: sshd@7-10.200.8.10:22-10.200.12.6:38342.service: Deactivated successfully. Feb 9 19:04:07.426555 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:04:07.427237 systemd-logind[1295]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:04:07.428113 systemd-logind[1295]: Removed session 10. Feb 9 19:04:07.526874 systemd[1]: Started sshd@8-10.200.8.10:22-10.200.12.6:34042.service. 
Feb 9 19:04:08.141343 sshd[3586]: Accepted publickey for core from 10.200.12.6 port 34042 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:04:08.143039 sshd[3586]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:08.149488 systemd-logind[1295]: New session 11 of user core. Feb 9 19:04:08.150345 systemd[1]: Started session-11.scope. Feb 9 19:04:08.739734 sshd[3586]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:08.743650 systemd[1]: sshd@8-10.200.8.10:22-10.200.12.6:34042.service: Deactivated successfully. Feb 9 19:04:08.744543 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:04:08.745384 systemd-logind[1295]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:04:08.746154 systemd-logind[1295]: Removed session 11. Feb 9 19:04:08.845808 systemd[1]: Started sshd@9-10.200.8.10:22-10.200.12.6:34048.service. Feb 9 19:04:09.490800 sshd[3617]: Accepted publickey for core from 10.200.12.6 port 34048 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:04:09.492195 sshd[3617]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:09.497614 systemd-logind[1295]: New session 12 of user core. Feb 9 19:04:09.498120 systemd[1]: Started session-12.scope. Feb 9 19:04:09.984237 sshd[3617]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:09.987738 systemd[1]: sshd@9-10.200.8.10:22-10.200.12.6:34048.service: Deactivated successfully. Feb 9 19:04:09.988676 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:04:09.989286 systemd-logind[1295]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:04:09.990170 systemd-logind[1295]: Removed session 12. Feb 9 19:04:15.090722 systemd[1]: Started sshd@10-10.200.8.10:22-10.200.12.6:34050.service. 
Feb 9 19:04:15.707132 sshd[3653]: Accepted publickey for core from 10.200.12.6 port 34050 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:04:15.708545 sshd[3653]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:15.713533 systemd[1]: Started session-13.scope. Feb 9 19:04:15.714019 systemd-logind[1295]: New session 13 of user core. Feb 9 19:04:16.193965 sshd[3653]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:16.197785 systemd[1]: sshd@10-10.200.8.10:22-10.200.12.6:34050.service: Deactivated successfully. Feb 9 19:04:16.198917 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:04:16.199746 systemd-logind[1295]: Session 13 logged out. Waiting for processes to exit. Feb 9 19:04:16.200706 systemd-logind[1295]: Removed session 13. Feb 9 19:04:16.298869 systemd[1]: Started sshd@11-10.200.8.10:22-10.200.12.6:34062.service. Feb 9 19:04:16.919782 sshd[3665]: Accepted publickey for core from 10.200.12.6 port 34062 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:04:16.921262 sshd[3665]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:16.926816 systemd-logind[1295]: New session 14 of user core. Feb 9 19:04:16.927988 systemd[1]: Started session-14.scope. Feb 9 19:04:17.478505 sshd[3665]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:17.482081 systemd[1]: sshd@11-10.200.8.10:22-10.200.12.6:34062.service: Deactivated successfully. Feb 9 19:04:17.482989 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:04:17.483792 systemd-logind[1295]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:04:17.484674 systemd-logind[1295]: Removed session 14. Feb 9 19:04:17.581686 systemd[1]: Started sshd@12-10.200.8.10:22-10.200.12.6:41370.service. 
Feb 9 19:04:18.197305 sshd[3681]: Accepted publickey for core from 10.200.12.6 port 41370 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:04:18.198787 sshd[3681]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:18.203701 systemd[1]: Started session-15.scope. Feb 9 19:04:18.204310 systemd-logind[1295]: New session 15 of user core. Feb 9 19:04:19.647341 sshd[3681]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:19.651165 systemd[1]: sshd@12-10.200.8.10:22-10.200.12.6:41370.service: Deactivated successfully. Feb 9 19:04:19.652299 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:04:19.653242 systemd-logind[1295]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:04:19.654366 systemd-logind[1295]: Removed session 15. Feb 9 19:04:19.752014 systemd[1]: Started sshd@13-10.200.8.10:22-10.200.12.6:41376.service. Feb 9 19:04:20.368095 sshd[3713]: Accepted publickey for core from 10.200.12.6 port 41376 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:04:20.369721 sshd[3713]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:20.374893 systemd[1]: Started session-16.scope. Feb 9 19:04:20.375506 systemd-logind[1295]: New session 16 of user core. Feb 9 19:04:21.031988 sshd[3713]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:21.035199 systemd[1]: sshd@13-10.200.8.10:22-10.200.12.6:41376.service: Deactivated successfully. Feb 9 19:04:21.036453 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:04:21.036506 systemd-logind[1295]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:04:21.037693 systemd-logind[1295]: Removed session 16. Feb 9 19:04:21.136523 systemd[1]: Started sshd@14-10.200.8.10:22-10.200.12.6:41388.service. 
Feb 9 19:04:21.752947 sshd[3723]: Accepted publickey for core from 10.200.12.6 port 41388 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:04:21.754503 sshd[3723]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:21.759782 systemd[1]: Started session-17.scope. Feb 9 19:04:21.760266 systemd-logind[1295]: New session 17 of user core. Feb 9 19:04:22.242571 sshd[3723]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:22.245848 systemd[1]: sshd@14-10.200.8.10:22-10.200.12.6:41388.service: Deactivated successfully. Feb 9 19:04:22.247260 systemd-logind[1295]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:04:22.247327 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:04:22.248611 systemd-logind[1295]: Removed session 17. Feb 9 19:04:27.350695 systemd[1]: Started sshd@15-10.200.8.10:22-10.200.12.6:45808.service. Feb 9 19:04:27.971443 sshd[3760]: Accepted publickey for core from 10.200.12.6 port 45808 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:04:27.972895 sshd[3760]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:27.977718 systemd-logind[1295]: New session 18 of user core. Feb 9 19:04:27.978593 systemd[1]: Started session-18.scope. Feb 9 19:04:28.462075 sshd[3760]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:28.465427 systemd[1]: sshd@15-10.200.8.10:22-10.200.12.6:45808.service: Deactivated successfully. Feb 9 19:04:28.466561 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:04:28.467506 systemd-logind[1295]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:04:28.468419 systemd-logind[1295]: Removed session 18. Feb 9 19:04:33.567954 systemd[1]: Started sshd@16-10.200.8.10:22-10.200.12.6:45818.service. 
Feb 9 19:04:34.216826 sshd[3814]: Accepted publickey for core from 10.200.12.6 port 45818 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:04:34.218337 sshd[3814]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:34.223363 systemd[1]: Started session-19.scope. Feb 9 19:04:34.224059 systemd-logind[1295]: New session 19 of user core. Feb 9 19:04:34.705707 sshd[3814]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:34.708904 systemd[1]: sshd@16-10.200.8.10:22-10.200.12.6:45818.service: Deactivated successfully. Feb 9 19:04:34.710062 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:04:34.710948 systemd-logind[1295]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:04:34.711710 systemd-logind[1295]: Removed session 19. Feb 9 19:04:39.810630 systemd[1]: Started sshd@17-10.200.8.10:22-10.200.12.6:42754.service. Feb 9 19:04:40.435321 sshd[3848]: Accepted publickey for core from 10.200.12.6 port 42754 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:04:40.437011 sshd[3848]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:04:40.442029 systemd-logind[1295]: New session 20 of user core. Feb 9 19:04:40.442525 systemd[1]: Started session-20.scope. Feb 9 19:04:40.922627 sshd[3848]: pam_unix(sshd:session): session closed for user core Feb 9 19:04:40.926104 systemd[1]: sshd@17-10.200.8.10:22-10.200.12.6:42754.service: Deactivated successfully. Feb 9 19:04:40.927233 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:04:40.928244 systemd-logind[1295]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:04:40.929222 systemd-logind[1295]: Removed session 20. Feb 9 19:04:56.138117 systemd[1]: cri-containerd-787525e3cdab18bf3bc3d07d5507416652d41c98d21b2bb162d64dc7fffb01d0.scope: Deactivated successfully. 
Feb 9 19:04:56.138444 systemd[1]: cri-containerd-787525e3cdab18bf3bc3d07d5507416652d41c98d21b2bb162d64dc7fffb01d0.scope: Consumed 2.323s CPU time. Feb 9 19:04:56.159370 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-787525e3cdab18bf3bc3d07d5507416652d41c98d21b2bb162d64dc7fffb01d0-rootfs.mount: Deactivated successfully. Feb 9 19:04:56.172445 env[1309]: time="2024-02-09T19:04:56.172393896Z" level=info msg="shim disconnected" id=787525e3cdab18bf3bc3d07d5507416652d41c98d21b2bb162d64dc7fffb01d0 Feb 9 19:04:56.172953 env[1309]: time="2024-02-09T19:04:56.172444596Z" level=warning msg="cleaning up after shim disconnected" id=787525e3cdab18bf3bc3d07d5507416652d41c98d21b2bb162d64dc7fffb01d0 namespace=k8s.io Feb 9 19:04:56.172953 env[1309]: time="2024-02-09T19:04:56.172461597Z" level=info msg="cleaning up dead shim" Feb 9 19:04:56.181433 env[1309]: time="2024-02-09T19:04:56.181390436Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3935 runtime=io.containerd.runc.v2\n" Feb 9 19:04:56.250106 kubelet[2402]: I0209 19:04:56.249528 2402 scope.go:117] "RemoveContainer" containerID="787525e3cdab18bf3bc3d07d5507416652d41c98d21b2bb162d64dc7fffb01d0" Feb 9 19:04:56.252285 env[1309]: time="2024-02-09T19:04:56.252237043Z" level=info msg="CreateContainer within sandbox \"ca756f50e7042daa612fe0c4566b2db8631352e030f10032bfd7bf560020565a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 9 19:04:56.281450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3349710730.mount: Deactivated successfully. 
Feb 9 19:04:56.297045 env[1309]: time="2024-02-09T19:04:56.297003342Z" level=info msg="CreateContainer within sandbox \"ca756f50e7042daa612fe0c4566b2db8631352e030f10032bfd7bf560020565a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"6b9960c04131e17cc04ae1f1202df4e651b568b19a4f5e30badd8e3c65be70ee\"" Feb 9 19:04:56.297566 env[1309]: time="2024-02-09T19:04:56.297531650Z" level=info msg="StartContainer for \"6b9960c04131e17cc04ae1f1202df4e651b568b19a4f5e30badd8e3c65be70ee\"" Feb 9 19:04:56.318058 systemd[1]: Started cri-containerd-6b9960c04131e17cc04ae1f1202df4e651b568b19a4f5e30badd8e3c65be70ee.scope. Feb 9 19:04:56.370558 env[1309]: time="2024-02-09T19:04:56.370505190Z" level=info msg="StartContainer for \"6b9960c04131e17cc04ae1f1202df4e651b568b19a4f5e30badd8e3c65be70ee\" returns successfully" Feb 9 19:04:56.499715 kubelet[2402]: E0209 19:04:56.499369 2402 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-075ad2fc80?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 19:04:59.569546 systemd[1]: cri-containerd-5527a61209ac8a1ff1b1a96312881d7dac43a62f0941438595b3d36b8b988b5b.scope: Deactivated successfully. Feb 9 19:04:59.569868 systemd[1]: cri-containerd-5527a61209ac8a1ff1b1a96312881d7dac43a62f0941438595b3d36b8b988b5b.scope: Consumed 1.176s CPU time. Feb 9 19:04:59.590103 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5527a61209ac8a1ff1b1a96312881d7dac43a62f0941438595b3d36b8b988b5b-rootfs.mount: Deactivated successfully. 
Feb 9 19:04:59.615726 env[1309]: time="2024-02-09T19:04:59.615677378Z" level=info msg="shim disconnected" id=5527a61209ac8a1ff1b1a96312881d7dac43a62f0941438595b3d36b8b988b5b Feb 9 19:04:59.615726 env[1309]: time="2024-02-09T19:04:59.615722979Z" level=warning msg="cleaning up after shim disconnected" id=5527a61209ac8a1ff1b1a96312881d7dac43a62f0941438595b3d36b8b988b5b namespace=k8s.io Feb 9 19:04:59.615726 env[1309]: time="2024-02-09T19:04:59.615734379Z" level=info msg="cleaning up dead shim" Feb 9 19:04:59.624062 env[1309]: time="2024-02-09T19:04:59.624011004Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:59Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4016 runtime=io.containerd.runc.v2\n" Feb 9 19:05:00.094607 kubelet[2402]: E0209 19:05:00.094565 2402 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.10:37422->10.200.8.21:2379: read: connection timed out" Feb 9 19:05:00.261361 kubelet[2402]: I0209 19:05:00.261327 2402 scope.go:117] "RemoveContainer" containerID="5527a61209ac8a1ff1b1a96312881d7dac43a62f0941438595b3d36b8b988b5b" Feb 9 19:05:00.263611 env[1309]: time="2024-02-09T19:05:00.263562990Z" level=info msg="CreateContainer within sandbox \"73c9d12943ee948f2cb5cbcce5caa596bff05adc6d4c85f649b3210570565209\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 9 19:05:00.294424 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1884161571.mount: Deactivated successfully. 
Feb 9 19:05:00.309883 env[1309]: time="2024-02-09T19:05:00.309831879Z" level=info msg="CreateContainer within sandbox \"73c9d12943ee948f2cb5cbcce5caa596bff05adc6d4c85f649b3210570565209\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"0c3329f4f2ad7ebe7c0e8fe175ac28553d107bbbb9dbfb338f96fd8c82b9c7bf\"" Feb 9 19:05:00.310389 env[1309]: time="2024-02-09T19:05:00.310355187Z" level=info msg="StartContainer for \"0c3329f4f2ad7ebe7c0e8fe175ac28553d107bbbb9dbfb338f96fd8c82b9c7bf\"" Feb 9 19:05:00.328584 systemd[1]: Started cri-containerd-0c3329f4f2ad7ebe7c0e8fe175ac28553d107bbbb9dbfb338f96fd8c82b9c7bf.scope. Feb 9 19:05:00.390171 env[1309]: time="2024-02-09T19:05:00.389722468Z" level=info msg="StartContainer for \"0c3329f4f2ad7ebe7c0e8fe175ac28553d107bbbb9dbfb338f96fd8c82b9c7bf\" returns successfully" Feb 9 19:05:06.170882 kubelet[2402]: I0209 19:05:06.170837 2402 status_manager.go:853] "Failed to get status for pod" podUID="43b18734350b624d63fd3a02b2eaee96" pod="kube-system/kube-apiserver-ci-3510.3.2-a-075ad2fc80" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.10:37348->10.200.8.21:2379: read: connection timed out" Feb 9 19:05:10.096109 kubelet[2402]: E0209 19:05:10.096060 2402 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-075ad2fc80?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 19:05:15.328288 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.328638 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.337121 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.337376 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.346421 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.346687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.355260 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.355515 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.364428 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.364669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.373464 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.373716 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.390532 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.390806 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.399060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.399283 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.409066 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.409297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
19:05:15.418366 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.418595 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.427904 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.428146 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.437223 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.437443 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.460043 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.460365 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.460511 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.469439 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.469719 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.478908 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.479166 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.488427 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:15.488665 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001
Feb 9 19:05:15.497681 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.497948 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.507215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.520594 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.520861 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.529597 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.529841 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.539231 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.539471 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.548651 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.548906 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.557916 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.558145 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.567527 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.567801 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.585450 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.585706 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.590088 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.594557 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.599425 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.603959 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.608458 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.612977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.617520 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.621974 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.626447 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.631332 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.649059 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.649296 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.658198 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.658411 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.667468 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.667674 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.676669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.676896 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.685771 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.685986 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.690441 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.695197 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.705335 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.705556 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.714376 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.714598 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.723531 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.723767 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.727865 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.732444 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.741420 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.741645 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.750589 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.750823 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.764459 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.764683 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.764844 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.773494 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.773743 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.782790 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.783012 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.792037 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.792252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.801537 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.801764 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.810415 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.815669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.815911 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.824873 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.825105 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.833970 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.834188 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.838308 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.842986 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.847363 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.852133 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.856687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.865878 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.871274 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.871471 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.880362 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.880574 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.889328 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.889536 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.894465 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.902973 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.903196 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.911927 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.912131 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.916422 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.930329 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.930559 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.930699 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.939243 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.939461 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.948287 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.948504 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.957429 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.957644 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.966426 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.966649 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.975360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.981280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.981497 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.990208 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.990418 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.999315 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:15.999524 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.008745 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.008957 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.017977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.018204 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.027669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.027931 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.037418 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.037635 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.046555 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.046782 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.050953 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.056016 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.065300 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.065523 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.074421 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.074625 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.083471 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.083696 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.093560 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.093792 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.098204 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.103066 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.112172 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.112415 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.116833 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.126104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.126323 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.135403 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.135611 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.139824 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.150096 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.150324 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.159028 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.159250 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.168245 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.168484 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.177693 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.177932 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.186285 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.191205 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.191423 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.200573 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.256166 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.256314 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.256452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.256585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.256722 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.257005 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.257135 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.257256 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.257375 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.257523 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.257659 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.257804 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.257982 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.265198 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.265459 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.274111 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.274353 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.283373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.283635 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.292634 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.292910 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.301599 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.301868 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.310741 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.320397 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.320663 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.320822 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.329551 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.329834 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.338630 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.338900 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.348007 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.348282 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.357068 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.357340 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.366422 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.376874 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.377139 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.377287 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.381587 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.390394 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.390640 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.399745 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.399994 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.409207 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.409455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.418664 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.418915 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.433395 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.433714 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.433900 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.437781 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.442465 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.451959 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.452259 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.461344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.461629 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.470799 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.471073 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.480172 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.485670 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.485999 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.495174 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.495475 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.504814 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.505107 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.514487 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.514793 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.519377 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.524106 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.528689 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.537948 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.538252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.542892 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.552174 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.552439 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.561798 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.562066 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.570895 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.571172 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.580285 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.580576 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.589847 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.590147 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.596016 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.605800 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.606085 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.615406 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.615677 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.625508 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.625793 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.635255 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.635518 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.644870 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.645157 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.655122 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.665546 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.665929 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.666086 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.675331 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.675621 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.685335 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.685637 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.694735 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.695034 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.699263 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.704060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.713519 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.723836 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.724133 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.724292 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.733276 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.733572 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.742470 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.742768 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.752105 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.752400 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.761577 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.761882 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.771292 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.774443 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.776462 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.785730 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.785995 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.795254 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.795506 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.804340 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.804566 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.813846 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.814082 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.823082 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.823337 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.833883 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.834176 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.843148 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.843468 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.852469 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.852728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.861950 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.862230 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.871050 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.871311 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.880400 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.880673 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.890807 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.891121 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.900143 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:05:16.900425 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2
srb 0x4 hv 0xc0000001 Feb 9 19:05:16.909427 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.909741 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.919775 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.920073 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.929057 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.929362 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.938127 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.938412 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.952430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.952743 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.952897 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.961546 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.961868 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.971145 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.971430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.975915 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.981326 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.989978 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.990258 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:16.999465 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.003970 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.009679 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.009923 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.018936 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.019228 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.029334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.029575 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.039317 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.039573 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.049138 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.049396 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.058813 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.069732 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.070082 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.070233 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.078848 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.079140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.088006 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.088265 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.097823 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.098101 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.107024 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.107273 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.116948 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
19:05:17.122633 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.122928 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.132281 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.132537 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.141691 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.141939 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.151166 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.151419 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.160241 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.160503 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.169251 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.169492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.179738 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.180005 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.188906 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 19:05:17.189212 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.198581 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.198900 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.207970 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.208272 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.217159 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.217406 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.226379 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.226638 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.236384 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.236664 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.245945 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.246221 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.255314 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.255569 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.264467 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.264723 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.273924 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.274177 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.283652 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.283909 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.289049 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.298593 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.298858 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.307358 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.307594 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.316940 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.317168 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.326098 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.326316 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.335074 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.335295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.344356 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.354846 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.355093 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.355240 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.364265 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.364502 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#161 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.373344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#159 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.373581 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#162 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.382807 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#158 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.383053 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.391971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#151 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.392203 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
19:05:17.401149 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.406203 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.406432 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.415430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#152 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:05:17.415670 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#160 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001