Feb 8 23:39:59.011978 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024 Feb 8 23:39:59.012010 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:39:59.012025 kernel: BIOS-provided physical RAM map: Feb 8 23:39:59.012035 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 8 23:39:59.012045 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 8 23:39:59.012055 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 8 23:39:59.012070 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Feb 8 23:39:59.012082 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 8 23:39:59.012092 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 8 23:39:59.012103 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 8 23:39:59.012114 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 8 23:39:59.012124 kernel: printk: bootconsole [earlyser0] enabled Feb 8 23:39:59.012135 kernel: NX (Execute Disable) protection: active Feb 8 23:39:59.012146 kernel: efi: EFI v2.70 by Microsoft Feb 8 23:39:59.012161 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018 Feb 8 23:39:59.012173 kernel: random: crng init done Feb 8 23:39:59.012185 kernel: SMBIOS 3.1.0 present. 
Feb 8 23:39:59.012197 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023 Feb 8 23:39:59.012207 kernel: Hypervisor detected: Microsoft Hyper-V Feb 8 23:39:59.012220 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 8 23:39:59.012231 kernel: Hyper-V Host Build:20348-10.0-1-0.1544 Feb 8 23:39:59.012242 kernel: Hyper-V: Nested features: 0x1e0101 Feb 8 23:39:59.012309 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 8 23:39:59.012320 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 8 23:39:59.012332 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 8 23:39:59.012344 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 8 23:39:59.012356 kernel: tsc: Detected 2593.906 MHz processor Feb 8 23:39:59.012369 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 8 23:39:59.012381 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 8 23:39:59.012393 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 8 23:39:59.012405 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 8 23:39:59.012417 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 8 23:39:59.012431 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 8 23:39:59.012443 kernel: Using GB pages for direct mapping Feb 8 23:39:59.012455 kernel: Secure boot disabled Feb 8 23:39:59.012467 kernel: ACPI: Early table checksum verification disabled Feb 8 23:39:59.012477 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 8 23:39:59.012488 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:39:59.012500 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:39:59.012512 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 8 23:39:59.012531 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 8 23:39:59.012544 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:39:59.012557 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:39:59.012569 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:39:59.012582 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:39:59.012595 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:39:59.012610 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:39:59.012622 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 8 23:39:59.012635 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 8 23:39:59.012648 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 8 23:39:59.012661 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 8 23:39:59.012673 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 8 23:39:59.012686 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Feb 8 23:39:59.012698 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 8 23:39:59.012714 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 8 23:39:59.012726 kernel: ACPI: 
Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 8 23:39:59.012739 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 8 23:39:59.012752 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 8 23:39:59.012764 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 8 23:39:59.012777 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 8 23:39:59.012790 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 8 23:39:59.012803 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 8 23:39:59.012816 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 8 23:39:59.012830 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 8 23:39:59.012843 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 8 23:39:59.012856 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 8 23:39:59.012868 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 8 23:39:59.012881 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 8 23:39:59.012893 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 8 23:39:59.012906 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 8 23:39:59.012919 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 8 23:39:59.012932 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 8 23:39:59.012948 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 8 23:39:59.012960 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 8 23:39:59.012973 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 8 23:39:59.012986 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 8 23:39:59.012999 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 8 23:39:59.013011 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 8 23:39:59.013024 kernel: Zone ranges: Feb 8 23:39:59.013036 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 8 23:39:59.013047 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 8 23:39:59.013062 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 8 23:39:59.013075 kernel: Movable zone start for each node Feb 8 23:39:59.013087 kernel: Early memory node ranges Feb 8 23:39:59.013101 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 8 23:39:59.013113 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 8 23:39:59.013126 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 8 23:39:59.013139 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 8 23:39:59.013152 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 8 23:39:59.013165 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 8 23:39:59.013180 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 8 23:39:59.013192 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 8 23:39:59.013205 kernel: ACPI: PM-Timer IO Port: 0x408 Feb 8 23:39:59.013218 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 8 23:39:59.013231 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 8 23:39:59.013260 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 
8 23:39:59.013273 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 8 23:39:59.013286 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 8 23:39:59.013299 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 8 23:39:59.013314 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 8 23:39:59.013327 kernel: Booting paravirtualized kernel on Hyper-V Feb 8 23:39:59.013341 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 8 23:39:59.013354 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 8 23:39:59.013366 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 8 23:39:59.013380 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 8 23:39:59.013392 kernel: pcpu-alloc: [0] 0 1 Feb 8 23:39:59.013404 kernel: Hyper-V: PV spinlocks enabled Feb 8 23:39:59.013417 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 8 23:39:59.013432 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 8 23:39:59.013444 kernel: Policy zone: Normal Feb 8 23:39:59.013459 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:39:59.013472 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 8 23:39:59.013485 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 8 23:39:59.013497 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 8 23:39:59.013510 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 8 23:39:59.013523 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved) Feb 8 23:39:59.013538 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 8 23:39:59.013551 kernel: ftrace: allocating 34475 entries in 135 pages Feb 8 23:39:59.013572 kernel: ftrace: allocated 135 pages with 4 groups Feb 8 23:39:59.013588 kernel: rcu: Hierarchical RCU implementation. Feb 8 23:39:59.013602 kernel: rcu: RCU event tracing is enabled. Feb 8 23:39:59.013616 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 8 23:39:59.013629 kernel: Rude variant of Tasks RCU enabled. Feb 8 23:39:59.013643 kernel: Tracing variant of Tasks RCU enabled. Feb 8 23:39:59.013656 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 8 23:39:59.013669 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 8 23:39:59.013682 kernel: Using NULL legacy PIC Feb 8 23:39:59.013698 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 8 23:39:59.013712 kernel: Console: colour dummy device 80x25 Feb 8 23:39:59.013725 kernel: printk: console [tty1] enabled Feb 8 23:39:59.013739 kernel: printk: console [ttyS0] enabled Feb 8 23:39:59.013752 kernel: printk: bootconsole [earlyser0] disabled Feb 8 23:39:59.013767 kernel: ACPI: Core revision 20210730 Feb 8 23:39:59.013781 kernel: Failed to register legacy timer interrupt Feb 8 23:39:59.013794 kernel: APIC: Switch to symmetric I/O mode setup Feb 8 23:39:59.013808 kernel: Hyper-V: Using IPI hypercalls Feb 8 23:39:59.013821 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906) Feb 8 23:39:59.013835 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 8 23:39:59.013848 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 8 23:39:59.013861 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 8 23:39:59.013874 kernel: Spectre V2 : Mitigation: Retpolines Feb 8 23:39:59.013887 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 8 23:39:59.013903 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 8 23:39:59.013916 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Feb 8 23:39:59.013930 kernel: RETBleed: Vulnerable Feb 8 23:39:59.013943 kernel: Speculative Store Bypass: Vulnerable Feb 8 23:39:59.013956 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 8 23:39:59.013970 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 8 23:39:59.013983 kernel: GDS: Unknown: Dependent on hypervisor status Feb 8 23:39:59.013996 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 8 23:39:59.014010 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 8 23:39:59.014023 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 8 23:39:59.014039 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 8 23:39:59.014053 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 8 23:39:59.014066 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 8 23:39:59.014079 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 8 23:39:59.014092 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 8 23:39:59.014106 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 8 23:39:59.014119 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 8 23:39:59.014133 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 8 23:39:59.014146 kernel: Freeing SMP alternatives memory: 32K Feb 8 23:39:59.014159 kernel: pid_max: default: 32768 minimum: 301 Feb 8 23:39:59.014172 kernel: LSM: Security Framework initializing Feb 8 23:39:59.014186 kernel: SELinux: Initializing. 
Feb 8 23:39:59.014202 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 8 23:39:59.014216 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 8 23:39:59.014230 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 8 23:39:59.014256 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 8 23:39:59.014269 kernel: signal: max sigframe size: 3632 Feb 8 23:39:59.014281 kernel: rcu: Hierarchical SRCU implementation. Feb 8 23:39:59.014294 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 8 23:39:59.014305 kernel: smp: Bringing up secondary CPUs ... Feb 8 23:39:59.014318 kernel: x86: Booting SMP configuration: Feb 8 23:39:59.014331 kernel: .... node #0, CPUs: #1 Feb 8 23:39:59.014347 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 8 23:39:59.014360 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 8 23:39:59.014371 kernel: smp: Brought up 1 node, 2 CPUs Feb 8 23:39:59.014379 kernel: smpboot: Max logical packages: 1 Feb 8 23:39:59.014386 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 8 23:39:59.014394 kernel: devtmpfs: initialized Feb 8 23:39:59.014402 kernel: x86/mm: Memory block size: 128MB Feb 8 23:39:59.014413 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 8 23:39:59.014423 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 8 23:39:59.014434 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 8 23:39:59.014442 kernel: pinctrl core: initialized pinctrl subsystem Feb 8 23:39:59.014449 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 8 23:39:59.014456 kernel: audit: initializing netlink subsys (disabled) Feb 8 23:39:59.014464 kernel: audit: type=2000 audit(1707435598.023:1): state=initialized audit_enabled=0 res=1 Feb 8 23:39:59.014471 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 8 23:39:59.014478 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 8 23:39:59.014485 kernel: cpuidle: using governor menu Feb 8 23:39:59.014495 kernel: ACPI: bus type PCI registered Feb 8 23:39:59.014502 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 8 23:39:59.014509 kernel: dca service started, version 1.12.1 Feb 8 23:39:59.014516 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 8 23:39:59.014523 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 8 23:39:59.014532 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 8 23:39:59.014541 kernel: ACPI: Added _OSI(Module Device) Feb 8 23:39:59.014548 kernel: ACPI: Added _OSI(Processor Device) Feb 8 23:39:59.014555 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 8 23:39:59.014566 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 8 23:39:59.014575 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 8 23:39:59.014583 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 8 23:39:59.014591 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 8 23:39:59.014598 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 8 23:39:59.014605 kernel: ACPI: Interpreter enabled Feb 8 23:39:59.014612 kernel: ACPI: PM: (supports S0 S5) Feb 8 23:39:59.014619 kernel: ACPI: Using IOAPIC for interrupt routing Feb 8 23:39:59.014627 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 8 23:39:59.014636 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 8 23:39:59.014643 kernel: iommu: Default domain type: Translated Feb 8 23:39:59.014650 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 8 23:39:59.014658 kernel: vgaarb: loaded Feb 8 23:39:59.014667 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 8 23:39:59.014676 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 8 23:39:59.014683 kernel: PTP clock support registered Feb 8 23:39:59.014690 kernel: Registered efivars operations Feb 8 23:39:59.014697 kernel: PCI: Using ACPI for IRQ routing Feb 8 23:39:59.014705 kernel: PCI: System does not support PCI Feb 8 23:39:59.014720 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 8 23:39:59.014729 kernel: VFS: Disk quotas dquot_6.6.0 Feb 8 23:39:59.014736 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 8 23:39:59.014744 kernel: pnp: PnP ACPI init Feb 8 23:39:59.014751 kernel: pnp: PnP ACPI: found 3 devices Feb 8 23:39:59.014758 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 8 23:39:59.014768 kernel: NET: Registered PF_INET protocol family Feb 8 23:39:59.014776 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 8 23:39:59.014785 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 8 23:39:59.014794 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 8 23:39:59.014803 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 8 23:39:59.014811 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 8 23:39:59.014821 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 8 23:39:59.014828 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 8 23:39:59.014836 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 8 23:39:59.014845 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 8 23:39:59.014853 kernel: NET: Registered PF_XDP protocol family Feb 8 23:39:59.014866 kernel: PCI: CLS 0 bytes, default 64 Feb 8 23:39:59.014874 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 8 23:39:59.014881 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) Feb 8 23:39:59.014890 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed 
counters, 10737418240 ms ovfl timer Feb 8 23:39:59.014899 kernel: Initialise system trusted keyrings Feb 8 23:39:59.014908 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 8 23:39:59.014916 kernel: Key type asymmetric registered Feb 8 23:39:59.014923 kernel: Asymmetric key parser 'x509' registered Feb 8 23:39:59.014931 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 8 23:39:59.014942 kernel: io scheduler mq-deadline registered Feb 8 23:39:59.014950 kernel: io scheduler kyber registered Feb 8 23:39:59.014960 kernel: io scheduler bfq registered Feb 8 23:39:59.014967 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 8 23:39:59.014975 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 8 23:39:59.014984 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 8 23:39:59.014993 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 8 23:39:59.015003 kernel: i8042: PNP: No PS/2 controller found. Feb 8 23:39:59.015133 kernel: rtc_cmos 00:02: registered as rtc0 Feb 8 23:39:59.015222 kernel: rtc_cmos 00:02: setting system clock to 2024-02-08T23:39:58 UTC (1707435598) Feb 8 23:39:59.015319 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 8 23:39:59.015332 kernel: fail to initialize ptp_kvm Feb 8 23:39:59.015339 kernel: intel_pstate: CPU model not supported Feb 8 23:39:59.015347 kernel: efifb: probing for efifb Feb 8 23:39:59.015358 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 8 23:39:59.015366 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 8 23:39:59.015376 kernel: efifb: scrolling: redraw Feb 8 23:39:59.015386 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 8 23:39:59.015393 kernel: Console: switching to colour frame buffer device 128x48 Feb 8 23:39:59.015403 kernel: fb0: EFI VGA frame buffer device Feb 8 23:39:59.015412 kernel: pstore: Registered efi as persistent store backend Feb 8 23:39:59.015421 kernel: NET: Registered PF_INET6 protocol family Feb 8 23:39:59.015429 kernel: Segment Routing with IPv6 Feb 8 23:39:59.015436 kernel: In-situ OAM (IOAM) with IPv6 Feb 8 23:39:59.015446 kernel: NET: Registered PF_PACKET protocol family Feb 8 23:39:59.015454 kernel: Key type dns_resolver registered Feb 8 23:39:59.015466 kernel: IPI shorthand broadcast: enabled Feb 8 23:39:59.015474 kernel: sched_clock: Marking stable (760718200, 22969200)->(984441300, -200753900) Feb 8 23:39:59.015481 kernel: registered taskstats version 1 Feb 8 23:39:59.015490 kernel: Loading compiled-in X.509 certificates Feb 8 23:39:59.015498 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6' Feb 8 23:39:59.015509 kernel: Key type .fscrypt registered Feb 8 23:39:59.015516 kernel: Key type fscrypt-provisioning registered Feb 8 23:39:59.015523 kernel: pstore: Using crash dump compression: deflate Feb 8 23:39:59.015534 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 8 23:39:59.015543 kernel: ima: Allocated hash algorithm: sha1 Feb 8 23:39:59.015553 kernel: ima: No architecture policies found Feb 8 23:39:59.015561 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 8 23:39:59.015568 kernel: Write protecting the kernel read-only data: 28672k Feb 8 23:39:59.015576 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 8 23:39:59.015586 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 8 23:39:59.015594 kernel: Run /init as init process Feb 8 23:39:59.015604 kernel: with arguments: Feb 8 23:39:59.015611 kernel: /init Feb 8 23:39:59.015621 kernel: with environment: Feb 8 23:39:59.015630 kernel: HOME=/ Feb 8 23:39:59.015638 kernel: TERM=linux Feb 8 23:39:59.015645 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 8 23:39:59.015654 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:39:59.015667 systemd[1]: Detected virtualization microsoft. Feb 8 23:39:59.015675 systemd[1]: Detected architecture x86-64. Feb 8 23:39:59.015684 systemd[1]: Running in initrd. Feb 8 23:39:59.015695 systemd[1]: No hostname configured, using default hostname. Feb 8 23:39:59.015702 systemd[1]: Hostname set to . Feb 8 23:39:59.015713 systemd[1]: Initializing machine ID from random generator. Feb 8 23:39:59.015721 systemd[1]: Queued start job for default target initrd.target. Feb 8 23:39:59.015729 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:39:59.015738 systemd[1]: Reached target cryptsetup.target. Feb 8 23:39:59.015747 systemd[1]: Reached target paths.target. Feb 8 23:39:59.015756 systemd[1]: Reached target slices.target. Feb 8 23:39:59.015768 systemd[1]: Reached target swap.target. Feb 8 23:39:59.015775 systemd[1]: Reached target timers.target. Feb 8 23:39:59.015784 systemd[1]: Listening on iscsid.socket. Feb 8 23:39:59.015794 systemd[1]: Listening on iscsiuio.socket. Feb 8 23:39:59.015803 systemd[1]: Listening on systemd-journald-audit.socket. Feb 8 23:39:59.015813 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 8 23:39:59.015821 systemd[1]: Listening on systemd-journald.socket. Feb 8 23:39:59.015831 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:39:59.015841 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:39:59.015850 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:39:59.015860 systemd[1]: Reached target sockets.target. Feb 8 23:39:59.015867 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:39:59.015875 systemd[1]: Finished network-cleanup.service. Feb 8 23:39:59.015886 systemd[1]: Starting systemd-fsck-usr.service... Feb 8 23:39:59.015894 systemd[1]: Starting systemd-journald.service... Feb 8 23:39:59.015904 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:39:59.015914 systemd[1]: Starting systemd-resolved.service... Feb 8 23:39:59.015926 systemd-journald[183]: Journal started Feb 8 23:39:59.015968 systemd-journald[183]: Runtime Journal (/run/log/journal/36c1ef8237044056abe0b50cd578e458) is 8.0M, max 159.0M, 151.0M free. Feb 8 23:39:59.024890 systemd-modules-load[184]: Inserted module 'overlay' Feb 8 23:39:59.040268 systemd[1]: Starting systemd-vconsole-setup.service... Feb 8 23:39:59.053124 systemd[1]: Started systemd-journald.service. 
Feb 8 23:39:59.065269 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 8 23:39:59.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.077085 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:39:59.082084 kernel: audit: type=1130 audit(1707435599.065:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.082111 kernel: Bridge firewalling registered Feb 8 23:39:59.079688 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 8 23:39:59.087339 systemd[1]: Finished systemd-fsck-usr.service. Feb 8 23:39:59.089840 systemd[1]: Finished systemd-vconsole-setup.service. Feb 8 23:39:59.122032 kernel: SCSI subsystem initialized Feb 8 23:39:59.122079 kernel: audit: type=1130 audit(1707435599.086:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.086000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.098453 systemd[1]: Starting dracut-cmdline-ask.service... Feb 8 23:39:59.137365 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 8 23:39:59.103284 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 8 23:39:59.140757 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 8 23:39:59.146604 kernel: device-mapper: uevent: version 1.0.3 Feb 8 23:39:59.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.169464 kernel: audit: type=1130 audit(1707435599.089:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.169498 kernel: audit: type=1130 audit(1707435599.092:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.174475 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 8 23:39:59.174823 systemd[1]: Finished dracut-cmdline-ask.service. Feb 8 23:39:59.180393 systemd[1]: Starting dracut-cmdline.service... Feb 8 23:39:59.184478 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 8 23:39:59.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.187557 systemd[1]: Finished systemd-modules-load.service. 
Feb 8 23:39:59.209741 kernel: audit: type=1130 audit(1707435599.145:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.209772 kernel: audit: type=1130 audit(1707435599.177:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.177000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.192689 systemd-resolved[185]: Positive Trust Anchors: Feb 8 23:39:59.192699 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:39:59.246824 kernel: audit: type=1130 audit(1707435599.214:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.246855 kernel: audit: type=1130 audit(1707435599.229:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.214000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.229000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.246985 dracut-cmdline[200]: dracut-dracut-053 Feb 8 23:39:59.246985 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9 Feb 8 23:39:59.192733 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:39:59.197749 systemd-resolved[185]: Defaulting to hostname 'linux'. Feb 8 23:39:59.214967 systemd[1]: Started systemd-resolved.service. Feb 8 23:39:59.229952 systemd[1]: Reached target nss-lookup.target. Feb 8 23:39:59.246614 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:39:59.297266 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:39:59.300054 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:39:59.301000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:39:59.314266 kernel: audit: type=1130 audit(1707435599.301:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.326269 kernel: iscsi: registered transport (tcp) Feb 8 23:39:59.352318 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:39:59.352388 kernel: QLogic iSCSI HBA Driver Feb 8 23:39:59.381452 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:39:59.384000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.385159 systemd[1]: Starting dracut-pre-udev.service... Feb 8 23:39:59.436272 kernel: raid6: avx512x4 gen() 18829 MB/s Feb 8 23:39:59.456262 kernel: raid6: avx512x4 xor() 7413 MB/s Feb 8 23:39:59.476257 kernel: raid6: avx512x2 gen() 18590 MB/s Feb 8 23:39:59.497259 kernel: raid6: avx512x2 xor() 29761 MB/s Feb 8 23:39:59.517255 kernel: raid6: avx512x1 gen() 18465 MB/s Feb 8 23:39:59.537255 kernel: raid6: avx512x1 xor() 26843 MB/s Feb 8 23:39:59.558259 kernel: raid6: avx2x4 gen() 18596 MB/s Feb 8 23:39:59.578253 kernel: raid6: avx2x4 xor() 6871 MB/s Feb 8 23:39:59.598254 kernel: raid6: avx2x2 gen() 18546 MB/s Feb 8 23:39:59.619255 kernel: raid6: avx2x2 xor() 22281 MB/s Feb 8 23:39:59.639253 kernel: raid6: avx2x1 gen() 13904 MB/s Feb 8 23:39:59.659255 kernel: raid6: avx2x1 xor() 19468 MB/s Feb 8 23:39:59.680261 kernel: raid6: sse2x4 gen() 11739 MB/s Feb 8 23:39:59.700259 kernel: raid6: sse2x4 xor() 5982 MB/s Feb 8 23:39:59.720256 kernel: raid6: sse2x2 gen() 13001 MB/s Feb 8 23:39:59.740260 kernel: raid6: sse2x2 xor() 7469 MB/s Feb 8 23:39:59.760258 kernel: raid6: sse2x1 gen() 11667 MB/s Feb 8 23:39:59.784049 kernel: raid6: sse2x1 xor() 5923 MB/s Feb 8 23:39:59.784081 kernel: raid6: using algorithm avx512x4 gen() 18829 MB/s Feb 8 23:39:59.784092 kernel: raid6: .... xor() 7413 MB/s, rmw enabled Feb 8 23:39:59.791030 kernel: raid6: using avx512x2 recovery algorithm Feb 8 23:39:59.807271 kernel: xor: automatically using best checksumming function avx Feb 8 23:39:59.904277 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:39:59.912216 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:39:59.916000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.916000 audit: BPF prog-id=7 op=LOAD Feb 8 23:39:59.916000 audit: BPF prog-id=8 op=LOAD Feb 8 23:39:59.917718 systemd[1]: Starting systemd-udevd.service... Feb 8 23:39:59.932297 systemd-udevd[384]: Using default interface naming scheme 'v252'. Feb 8 23:39:59.936943 systemd[1]: Started systemd-udevd.service. Feb 8 23:39:59.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:39:59.946049 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:39:59.962227 dracut-pre-trigger[405]: rd.md=0: removing MD RAID activation Feb 8 23:39:59.992778 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:39:59.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:39:59.999305 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:40:00.034172 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:40:00.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:00.075266 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:40:00.105268 kernel: AVX2 version of gcm_enc/dec engaged. Feb 8 23:40:00.114267 kernel: AES CTR mode by8 optimization enabled Feb 8 23:40:00.114317 kernel: hv_vmbus: Vmbus version:5.2 Feb 8 23:40:00.132269 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 8 23:40:00.149269 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 8 23:40:00.149327 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 8 23:40:00.159277 kernel: hv_vmbus: registering driver hid_hyperv Feb 8 23:40:00.170480 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 8 23:40:00.170539 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 8 23:40:00.177843 kernel: hv_vmbus: registering driver hv_netvsc Feb 8 23:40:00.187266 kernel: hv_vmbus: registering driver hv_storvsc Feb 8 23:40:00.191275 kernel: scsi host0: storvsc_host_t Feb 8 23:40:00.191489 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 8 23:40:00.199073 kernel: scsi host1: storvsc_host_t Feb 8 23:40:00.199124 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 8 23:40:00.238688 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 8 23:40:00.238928 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 8 23:40:00.239052 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 8 23:40:00.246749 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 8 23:40:00.246984 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 8 23:40:00.253264 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:40:00.257264 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 8 23:40:00.264608 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 8 23:40:00.264796 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 8 23:40:00.266263 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 8 23:40:00.410388 kernel: hv_netvsc 0022489b-226d-0022-489b-226d0022489b eth0: VF slot 1 added Feb 8 23:40:00.419313 kernel: hv_vmbus: registering driver hv_pci Feb 8 23:40:00.426264 kernel: hv_pci 1aafda35-fa61-4fb8-b5d5-520554c937e4: PCI VMBus probing: Using version 0x10004 Feb 8 23:40:00.438232 kernel: hv_pci 1aafda35-fa61-4fb8-b5d5-520554c937e4: PCI host bridge to bus fa61:00 Feb 8 23:40:00.438430 kernel: pci_bus fa61:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 8 23:40:00.438570 kernel: pci_bus fa61:00: No busn resource found for root bus, will use [bus 00-ff] Feb 8 23:40:00.448263 kernel: pci fa61:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 8 23:40:00.458967 kernel: pci fa61:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:40:00.476346 kernel: pci fa61:00:02.0: enabling Extended Tags Feb 8 23:40:00.491261 kernel: pci fa61:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at fa61:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 8 23:40:00.491493 kernel: pci_bus 
fa61:00: busn_res: [bus 00-ff] end is updated to 00 Feb 8 23:40:00.500085 kernel: pci fa61:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:40:00.595275 kernel: mlx5_core fa61:00:02.0: firmware version: 14.30.1224 Feb 8 23:40:00.712460 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:40:00.754269 kernel: mlx5_core fa61:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 8 23:40:00.798271 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (442) Feb 8 23:40:00.812570 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:40:00.902997 kernel: mlx5_core fa61:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 8 23:40:00.903374 kernel: mlx5_core fa61:00:02.0: mlx5e_tc_post_act_init:40:(pid 191): firmware level support is missing Feb 8 23:40:00.916384 kernel: hv_netvsc 0022489b-226d-0022-489b-226d0022489b eth0: VF registering: eth1 Feb 8 23:40:00.916577 kernel: mlx5_core fa61:00:02.0 eth1: joined to eth0 Feb 8 23:40:00.929276 kernel: mlx5_core fa61:00:02.0 enP64097s1: renamed from eth1 Feb 8 23:40:00.966126 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:40:00.973510 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:40:00.979929 systemd[1]: Starting disk-uuid.service... Feb 8 23:40:01.000826 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:40:02.002232 disk-uuid[562]: The operation has completed successfully. Feb 8 23:40:02.005098 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:40:02.074382 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:40:02.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:02.076000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:02.074498 systemd[1]: Finished disk-uuid.service. Feb 8 23:40:02.082330 systemd[1]: Starting verity-setup.service... Feb 8 23:40:02.123268 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 8 23:40:02.435959 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:40:02.442138 systemd[1]: Finished verity-setup.service. Feb 8 23:40:02.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:02.447213 systemd[1]: Mounting sysusr-usr.mount... Feb 8 23:40:02.520263 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:40:02.520529 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:40:02.524482 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:40:02.528719 systemd[1]: Starting ignition-setup.service... Feb 8 23:40:02.533652 systemd[1]: Starting parse-ip-for-networkd.service... Feb 8 23:40:02.553432 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:40:02.553474 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:40:02.553490 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:40:02.599588 systemd[1]: Finished parse-ip-for-networkd.service. 
Feb 8 23:40:02.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:02.605000 audit: BPF prog-id=9 op=LOAD Feb 8 23:40:02.606875 systemd[1]: Starting systemd-networkd.service... Feb 8 23:40:02.630739 systemd-networkd[826]: lo: Link UP Feb 8 23:40:02.630749 systemd-networkd[826]: lo: Gained carrier Feb 8 23:40:02.634763 systemd-networkd[826]: Enumeration completed Feb 8 23:40:02.639000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:02.634882 systemd[1]: Started systemd-networkd.service. Feb 8 23:40:02.639373 systemd[1]: Reached target network.target. Feb 8 23:40:02.639421 systemd-networkd[826]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:40:02.645574 systemd[1]: Starting iscsiuio.service... Feb 8 23:40:02.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:02.658745 systemd[1]: Started iscsiuio.service. Feb 8 23:40:02.661796 systemd[1]: Starting iscsid.service... Feb 8 23:40:02.667178 iscsid[834]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:40:02.667178 iscsid[834]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 8 23:40:02.667178 iscsid[834]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 8 23:40:02.667178 iscsid[834]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:40:02.667178 iscsid[834]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:40:02.667178 iscsid[834]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:40:02.667178 iscsid[834]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:40:02.717118 kernel: mlx5_core fa61:00:02.0 enP64097s1: Link up Feb 8 23:40:02.673000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:02.712000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:02.670318 systemd[1]: Started iscsid.service. Feb 8 23:40:02.682271 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:40:02.696611 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:40:02.707127 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:40:02.712813 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:40:02.714916 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:40:02.717116 systemd[1]: Reached target remote-fs.target. Feb 8 23:40:02.721273 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:40:02.741315 systemd[1]: Finished dracut-pre-mount.service. 
Feb 8 23:40:02.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:02.784984 kernel: hv_netvsc 0022489b-226d-0022-489b-226d0022489b eth0: Data path switched to VF: enP64097s1 Feb 8 23:40:02.785224 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:40:02.785349 systemd-networkd[826]: enP64097s1: Link UP Feb 8 23:40:02.785510 systemd-networkd[826]: eth0: Link UP Feb 8 23:40:02.785790 systemd-networkd[826]: eth0: Gained carrier Feb 8 23:40:02.792428 systemd-networkd[826]: enP64097s1: Gained carrier Feb 8 23:40:02.810438 systemd-networkd[826]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:40:02.871797 systemd[1]: Finished ignition-setup.service. Feb 8 23:40:02.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:02.875114 systemd[1]: Starting ignition-fetch-offline.service... Feb 8 23:40:04.055466 systemd-networkd[826]: eth0: Gained IPv6LL Feb 8 23:40:05.908495 ignition[853]: Ignition 2.14.0 Feb 8 23:40:05.908512 ignition[853]: Stage: fetch-offline Feb 8 23:40:05.908605 ignition[853]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:05.908655 ignition[853]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:40:06.051203 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:40:06.051435 ignition[853]: parsed url from cmdline: "" Feb 8 23:40:06.051439 ignition[853]: no config URL provided Feb 8 23:40:06.051445 ignition[853]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:40:06.051454 ignition[853]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:40:06.051460 ignition[853]: failed to fetch config: resource requires networking Feb 8 23:40:06.055380 ignition[853]: Ignition finished successfully Feb 8 23:40:06.067024 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:40:06.069000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:06.070755 systemd[1]: Starting ignition-fetch.service... Feb 8 23:40:06.092785 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 8 23:40:06.092820 kernel: audit: type=1130 audit(1707435606.069:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:06.097774 ignition[859]: Ignition 2.14.0 Feb 8 23:40:06.097784 ignition[859]: Stage: fetch Feb 8 23:40:06.097919 ignition[859]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:06.097952 ignition[859]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:40:06.108079 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:40:06.111004 ignition[859]: parsed url from cmdline: "" Feb 8 23:40:06.111011 ignition[859]: no config URL provided Feb 8 23:40:06.111020 ignition[859]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:40:06.111044 ignition[859]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:40:06.111084 ignition[859]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 8 23:40:06.198338 ignition[859]: GET result: OK Feb 8 23:40:06.198364 ignition[859]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty) Feb 8 23:40:06.333959 ignition[859]: opening config device: "/dev/sr0" Feb 8 23:40:06.334446 ignition[859]: getting drive status for "/dev/sr0" Feb 8 23:40:06.334524 ignition[859]: drive status: OK Feb 8 23:40:06.334566 ignition[859]: mounting config device Feb 8 23:40:06.334604 ignition[859]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure2485439007" Feb 8 23:40:06.363035 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/02/09 00:00 (1000) Feb 8 23:40:06.362140 ignition[859]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure2485439007" Feb 8 23:40:06.362151 ignition[859]: checking for config drive Feb 8 23:40:06.363747 systemd[1]: tmp-ignition\x2dazure2485439007.mount: Deactivated successfully. Feb 8 23:40:06.362589 ignition[859]: reading config Feb 8 23:40:06.362998 ignition[859]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure2485439007" Feb 8 23:40:06.363094 ignition[859]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure2485439007" Feb 8 23:40:06.363111 ignition[859]: config has been read from custom data Feb 8 23:40:06.363193 ignition[859]: parsing config with SHA512: 30e958a75f2d1cafc01dbebbbef3abd799637bf5d006ebad5ef21733c401a6a33a57e014fe770e51d761eaecc1af654ade854dc7d198ef839545924ea0e5cef6 Feb 8 23:40:06.404731 unknown[859]: fetched base config from "system" Feb 8 23:40:06.404750 unknown[859]: fetched base config from "system" Feb 8 23:40:06.404758 unknown[859]: fetched user config from "azure" Feb 8 23:40:06.412178 ignition[859]: fetch: fetch complete Feb 8 23:40:06.412188 ignition[859]: fetch: fetch passed Feb 8 23:40:06.413822 ignition[859]: Ignition finished successfully Feb 8 23:40:06.418662 systemd[1]: Finished ignition-fetch.service. Feb 8 23:40:06.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:06.421761 systemd[1]: Starting ignition-kargs.service... Feb 8 23:40:06.437956 kernel: audit: type=1130 audit(1707435606.420:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:06.446434 ignition[867]: Ignition 2.14.0 Feb 8 23:40:06.446445 ignition[867]: Stage: kargs Feb 8 23:40:06.446592 ignition[867]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:06.446625 ignition[867]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:40:06.451416 ignition[867]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:40:06.453577 ignition[867]: kargs: kargs passed Feb 8 23:40:06.453636 ignition[867]: Ignition finished successfully Feb 8 23:40:06.473848 kernel: audit: type=1130 audit(1707435606.458:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:06.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:06.458504 systemd[1]: Finished ignition-kargs.service. Feb 8 23:40:06.480132 ignition[873]: Ignition 2.14.0 Feb 8 23:40:06.459542 systemd[1]: Starting ignition-disks.service... Feb 8 23:40:06.480137 ignition[873]: Stage: disks Feb 8 23:40:06.480255 ignition[873]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:06.480281 ignition[873]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:40:06.484143 ignition[873]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:40:06.503665 ignition[873]: disks: disks passed Feb 8 23:40:06.503735 ignition[873]: Ignition finished successfully Feb 8 23:40:06.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:06.504529 systemd[1]: Finished ignition-disks.service. Feb 8 23:40:06.523819 kernel: audit: type=1130 audit(1707435606.507:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:06.507918 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:40:06.523817 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:40:06.528078 systemd[1]: Reached target local-fs.target. Feb 8 23:40:06.532224 systemd[1]: Reached target sysinit.target. Feb 8 23:40:06.536443 systemd[1]: Reached target basic.target. Feb 8 23:40:06.542889 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:40:06.601375 systemd-fsck[881]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 8 23:40:06.606125 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:40:06.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:06.621311 kernel: audit: type=1130 audit(1707435606.608:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:06.621354 systemd[1]: Mounting sysroot.mount... Feb 8 23:40:06.654293 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. 
Opts: (null). Quota mode: none. Feb 8 23:40:06.654341 systemd[1]: Mounted sysroot.mount. Feb 8 23:40:06.658373 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:40:06.695769 systemd[1]: Mounting sysroot-usr.mount... Feb 8 23:40:06.702067 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 8 23:40:06.707592 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:40:06.707707 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:40:06.717222 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:40:06.765057 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:40:06.768357 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:40:06.788390 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (891) Feb 8 23:40:06.791263 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:40:06.802200 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:40:06.802229 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:40:06.802264 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:40:06.807761 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:40:06.824274 initrd-setup-root[922]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:40:06.831198 initrd-setup-root[930]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:40:06.857970 initrd-setup-root[938]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:40:07.288957 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:40:07.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:07.304824 systemd[1]: Starting ignition-mount.service... Feb 8 23:40:07.308958 kernel: audit: type=1130 audit(1707435607.291:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:07.309557 systemd[1]: Starting sysroot-boot.service... Feb 8 23:40:07.339000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:07.337312 systemd[1]: Finished sysroot-boot.service. Feb 8 23:40:07.355552 kernel: audit: type=1130 audit(1707435607.339:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:07.355590 ignition[960]: INFO : Ignition 2.14.0 Feb 8 23:40:07.355590 ignition[960]: INFO : Stage: mount Feb 8 23:40:07.355590 ignition[960]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:07.355590 ignition[960]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:40:07.355590 ignition[960]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:40:07.383026 kernel: audit: type=1130 audit(1707435607.365:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:07.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:07.383089 ignition[960]: INFO : mount: mount passed Feb 8 23:40:07.383089 ignition[960]: INFO : Ignition finished successfully Feb 8 23:40:07.360873 systemd[1]: Finished ignition-mount.service. Feb 8 23:40:07.366428 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 8 23:40:07.366495 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 8 23:40:08.107638 coreos-metadata[890]: Feb 08 23:40:08.107 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 8 23:40:08.127562 coreos-metadata[890]: Feb 08 23:40:08.127 INFO Fetch successful Feb 8 23:40:08.161084 coreos-metadata[890]: Feb 08 23:40:08.160 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 8 23:40:08.171362 coreos-metadata[890]: Feb 08 23:40:08.171 INFO Fetch successful Feb 8 23:40:08.189650 coreos-metadata[890]: Feb 08 23:40:08.189 INFO wrote hostname ci-3510.3.2-a-65dd02f9dc to /sysroot/etc/hostname Feb 8 23:40:08.196000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:08.191730 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 8 23:40:08.213946 kernel: audit: type=1130 audit(1707435608.196:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:08.198140 systemd[1]: Starting ignition-files.service... Feb 8 23:40:08.217230 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:40:08.234265 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (969) Feb 8 23:40:08.234294 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:40:08.242257 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:40:08.242281 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:40:08.251698 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 8 23:40:08.265018 ignition[988]: INFO : Ignition 2.14.0 Feb 8 23:40:08.265018 ignition[988]: INFO : Stage: files Feb 8 23:40:08.269288 ignition[988]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:08.269288 ignition[988]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:40:08.269288 ignition[988]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:40:08.287290 ignition[988]: DEBUG : files: compiled without relabeling support, skipping Feb 8 23:40:08.290697 ignition[988]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 8 23:40:08.290697 ignition[988]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 8 23:40:08.352602 ignition[988]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 8 23:40:08.357428 ignition[988]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 8 23:40:08.367853 unknown[988]: wrote ssh authorized keys file for user: core Feb 8 23:40:08.371024 ignition[988]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 8 23:40:08.382344 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:40:08.387532 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1 Feb 8 23:40:09.039562 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 8 23:40:09.167170 ignition[988]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449 Feb 8 23:40:09.176477 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz" Feb 8 23:40:09.176477 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:40:09.176477 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 8 23:40:09.508420 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 8 23:40:09.653597 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 8 23:40:09.659048 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:40:09.659048 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1 Feb 8 23:40:10.170060 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 8 23:40:10.317573 ignition[988]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 
4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d Feb 8 23:40:10.327374 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz" Feb 8 23:40:10.333872 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:40:10.338202 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubectl: attempt #1 Feb 8 23:40:10.733545 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 8 23:40:10.994300 ignition[988]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 97840854134909d75a1a2563628cc4ba632067369ce7fc8a8a1e90a387d32dd7bfd73f4f5b5a82ef842088e7470692951eb7fc869c5f297dd740f855672ee628 Feb 8 23:40:11.002222 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 8 23:40:11.002222 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:40:11.002222 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1 Feb 8 23:40:11.122611 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 8 23:40:11.309360 ignition[988]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660 Feb 8 23:40:11.318315 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm" Feb 8 23:40:11.318315 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:40:11.318315 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1 Feb 8 23:40:11.438199 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 8 23:40:12.058438 ignition[988]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b Feb 8 23:40:12.066656 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 8 23:40:12.066656 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:40:12.066656 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 8 23:40:12.066656 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/home/core/install.sh" Feb 8 23:40:12.066656 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/home/core/install.sh" Feb 8 23:40:12.066656 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:40:12.066656 
ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 8 23:40:12.066656 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:40:12.066656 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 8 23:40:12.066656 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:40:12.066656 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 8 23:40:13.094550 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:40:13.100284 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 8 23:40:13.100284 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 8 23:40:13.100284 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Feb 8 23:40:13.119937 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (990) Feb 8 23:40:13.117545 systemd[1]: mnt-oem1646739044.mount: Deactivated successfully. Feb 8 23:40:13.122731 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1646739044" Feb 8 23:40:13.122731 ignition[988]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1646739044": device or resource busy Feb 8 23:40:13.122731 ignition[988]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1646739044", trying btrfs: device or resource busy Feb 8 23:40:13.122731 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1646739044" Feb 8 23:40:13.122731 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1646739044" Feb 8 23:40:13.122731 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem1646739044" Feb 8 23:40:13.122731 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem1646739044" Feb 8 23:40:13.122731 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 8 23:40:13.122731 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(13): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 8 23:40:13.122731 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(13): oem config not found in "/usr/share/oem", looking on oem partition Feb 8 23:40:13.209108 kernel: audit: type=1130 audit(1707435613.147:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:13.209151 kernel: audit: type=1130 audit(1707435613.187:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.187000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.134206 systemd[1]: mnt-oem3533262659.mount: Deactivated successfully. Feb 8 23:40:13.213810 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(14): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3533262659" Feb 8 23:40:13.213810 ignition[988]: CRITICAL : files: createFilesystemsFiles: createFiles: op(13): op(14): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3533262659": device or resource busy Feb 8 23:40:13.213810 ignition[988]: ERROR : files: createFilesystemsFiles: createFiles: op(13): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3533262659", trying btrfs: device or resource busy Feb 8 23:40:13.213810 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3533262659" Feb 8 23:40:13.213810 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(15): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3533262659" Feb 8 23:40:13.213810 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [started] unmounting "/mnt/oem3533262659" Feb 8 23:40:13.213810 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(13): op(16): [finished] unmounting "/mnt/oem3533262659" Feb 8 23:40:13.213810 ignition[988]: INFO : files: createFilesystemsFiles: createFiles: op(13): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 8 23:40:13.213810 ignition[988]: INFO : files: op(17): [started] processing unit "waagent.service" Feb 8 23:40:13.213810 ignition[988]: INFO : files: op(17): [finished] processing unit "waagent.service" Feb 8 23:40:13.213810 ignition[988]: INFO : files: op(18): [started] processing unit "nvidia.service" Feb 8 23:40:13.213810 ignition[988]: INFO : files: op(18): [finished] processing unit "nvidia.service" Feb 8 23:40:13.213810 ignition[988]: INFO : files: op(19): [started] processing unit "prepare-cni-plugins.service" Feb 8 23:40:13.213810 ignition[988]: INFO : files: op(19): op(1a): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:40:13.213810 ignition[988]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 8 23:40:13.213810 ignition[988]: INFO : files: op(19): [finished] processing unit "prepare-cni-plugins.service" Feb 8 23:40:13.213810 ignition[988]: INFO : files: op(1b): [started] processing unit "prepare-critools.service" Feb 8 23:40:13.344816 kernel: audit: type=1130 audit(1707435613.258:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:13.344862 kernel: audit: type=1131 audit(1707435613.258:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.344880 kernel: audit: type=1130 audit(1707435613.305:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.344896 kernel: audit: type=1131 audit(1707435613.305:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.141159 systemd[1]: Finished ignition-files.service. Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(1b): op(1c): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(1b): op(1c): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(1b): [finished] processing unit "prepare-critools.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(1d): [started] processing unit "prepare-helm.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(1d): op(1e): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(1d): op(1e): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(1d): [finished] processing unit "prepare-helm.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(1f): [started] setting preset to enabled for "prepare-critools.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(1f): [finished] setting preset to enabled for "prepare-critools.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(21): [started] setting preset to enabled for "waagent.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(21): [finished] setting preset to enabled for "waagent.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(22): [started] setting preset to enabled for "nvidia.service" Feb 8 23:40:13.347573 
ignition[988]: INFO : files: op(22): [finished] setting preset to enabled for "nvidia.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(23): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 8 23:40:13.347573 ignition[988]: INFO : files: createResultFile: createFiles: op(24): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:40:13.347573 ignition[988]: INFO : files: createResultFile: createFiles: op(24): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 8 23:40:13.347573 ignition[988]: INFO : files: files passed Feb 8 23:40:13.347573 ignition[988]: INFO : Ignition finished successfully Feb 8 23:40:13.499768 kernel: audit: type=1130 audit(1707435613.359:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.499806 kernel: audit: type=1131 audit(1707435613.408:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.499825 kernel: audit: type=1131 audit(1707435613.487:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.408000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.148956 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 8 23:40:13.505444 initrd-setup-root-after-ignition[1014]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 8 23:40:13.171975 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 8 23:40:13.527752 kernel: audit: type=1131 audit(1707435613.510:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.173586 systemd[1]: Starting ignition-quench.service... Feb 8 23:40:13.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:13.178076 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 8 23:40:13.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.187692 systemd[1]: Reached target ignition-complete.target. Feb 8 23:40:13.241329 systemd[1]: Starting initrd-parse-etc.service... Feb 8 23:40:13.252492 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 8 23:40:13.253036 systemd[1]: Finished ignition-quench.service. Feb 8 23:40:13.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.566000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.569313 ignition[1027]: INFO : Ignition 2.14.0 Feb 8 23:40:13.569313 ignition[1027]: INFO : Stage: umount Feb 8 23:40:13.569313 ignition[1027]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:40:13.569313 ignition[1027]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:40:13.569313 ignition[1027]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:40:13.569313 ignition[1027]: INFO : umount: umount passed Feb 8 23:40:13.569313 ignition[1027]: INFO : Ignition finished successfully Feb 8 23:40:13.573000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.573000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.577000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.583000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.591000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.596071 iscsid[834]: iscsid shutting down. Feb 8 23:40:13.297650 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 8 23:40:13.297744 systemd[1]: Finished initrd-parse-etc.service. Feb 8 23:40:13.305424 systemd[1]: Reached target initrd-fs.target. 
Feb 8 23:40:13.334073 systemd[1]: Reached target initrd.target. Feb 8 23:40:13.339504 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 8 23:40:13.340463 systemd[1]: Starting dracut-pre-pivot.service... Feb 8 23:40:13.355331 systemd[1]: Finished dracut-pre-pivot.service. Feb 8 23:40:13.373303 systemd[1]: Starting initrd-cleanup.service... Feb 8 23:40:13.387949 systemd[1]: Stopped target nss-lookup.target. Feb 8 23:40:13.395289 systemd[1]: Stopped target remote-cryptsetup.target. Feb 8 23:40:13.397851 systemd[1]: Stopped target timers.target. Feb 8 23:40:13.403018 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 8 23:40:13.403154 systemd[1]: Stopped dracut-pre-pivot.service. Feb 8 23:40:13.420656 systemd[1]: Stopped target initrd.target. Feb 8 23:40:13.424157 systemd[1]: Stopped target basic.target. Feb 8 23:40:13.429844 systemd[1]: Stopped target ignition-complete.target. Feb 8 23:40:13.435764 systemd[1]: Stopped target ignition-diskful.target. Feb 8 23:40:13.441586 systemd[1]: Stopped target initrd-root-device.target. Feb 8 23:40:13.447651 systemd[1]: Stopped target remote-fs.target. Feb 8 23:40:13.453423 systemd[1]: Stopped target remote-fs-pre.target. Feb 8 23:40:13.459122 systemd[1]: Stopped target sysinit.target. Feb 8 23:40:13.464934 systemd[1]: Stopped target local-fs.target. Feb 8 23:40:13.470912 systemd[1]: Stopped target local-fs-pre.target. Feb 8 23:40:13.477702 systemd[1]: Stopped target swap.target. Feb 8 23:40:13.483822 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 8 23:40:13.483971 systemd[1]: Stopped dracut-pre-mount.service. Feb 8 23:40:13.500332 systemd[1]: Stopped target cryptsetup.target. Feb 8 23:40:13.505377 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 8 23:40:13.505548 systemd[1]: Stopped dracut-initqueue.service. Feb 8 23:40:13.522690 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 8 23:40:13.522838 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 8 23:40:13.527775 systemd[1]: ignition-files.service: Deactivated successfully. Feb 8 23:40:13.527905 systemd[1]: Stopped ignition-files.service. Feb 8 23:40:13.530045 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 8 23:40:13.530159 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 8 23:40:13.537285 systemd[1]: Stopping ignition-mount.service... Feb 8 23:40:13.547841 systemd[1]: Stopping iscsid.service... Feb 8 23:40:13.549726 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 8 23:40:13.549878 systemd[1]: Stopped kmod-static-nodes.service. Feb 8 23:40:13.554150 systemd[1]: Stopping sysroot-boot.service... Feb 8 23:40:13.556111 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 8 23:40:13.556316 systemd[1]: Stopped systemd-udev-trigger.service. Feb 8 23:40:13.558956 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 8 23:40:13.559107 systemd[1]: Stopped dracut-pre-trigger.service. Feb 8 23:40:13.563866 systemd[1]: iscsid.service: Deactivated successfully. Feb 8 23:40:13.563971 systemd[1]: Stopped iscsid.service. Feb 8 23:40:13.571503 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 8 23:40:13.571584 systemd[1]: Finished initrd-cleanup.service. Feb 8 23:40:13.574121 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 8 23:40:13.574186 systemd[1]: Stopped ignition-mount.service. 
Feb 8 23:40:13.578986 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 8 23:40:13.579030 systemd[1]: Stopped ignition-disks.service. Feb 8 23:40:13.583665 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 8 23:40:13.583709 systemd[1]: Stopped ignition-kargs.service. Feb 8 23:40:13.592025 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 8 23:40:13.592064 systemd[1]: Stopped ignition-fetch.service. Feb 8 23:40:13.713000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.713465 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 8 23:40:13.713536 systemd[1]: Stopped ignition-fetch-offline.service. Feb 8 23:40:13.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.720258 systemd[1]: Stopped target paths.target. Feb 8 23:40:13.724471 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 8 23:40:13.727307 systemd[1]: Stopped systemd-ask-password-console.path. Feb 8 23:40:13.730098 systemd[1]: Stopped target slices.target. Feb 8 23:40:13.732190 systemd[1]: Stopped target sockets.target. Feb 8 23:40:13.740240 systemd[1]: iscsid.socket: Deactivated successfully. Feb 8 23:40:13.740302 systemd[1]: Closed iscsid.socket. Feb 8 23:40:13.745716 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 8 23:40:13.745783 systemd[1]: Stopped ignition-setup.service. Feb 8 23:40:13.751000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.751919 systemd[1]: Stopping iscsiuio.service... Feb 8 23:40:13.755000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.755056 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 8 23:40:13.762000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.755585 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 8 23:40:13.755672 systemd[1]: Stopped iscsiuio.service. Feb 8 23:40:13.769000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.758242 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 8 23:40:13.758375 systemd[1]: Stopped sysroot-boot.service. Feb 8 23:40:13.762874 systemd[1]: Stopped target network.target. Feb 8 23:40:13.767240 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 8 23:40:13.767284 systemd[1]: Closed iscsiuio.socket. Feb 8 23:40:13.787000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.769005 systemd[1]: initrd-setup-root.service: Deactivated successfully. 
Feb 8 23:40:13.794000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.769047 systemd[1]: Stopped initrd-setup-root.service. Feb 8 23:40:13.771372 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:40:13.796000 audit: BPF prog-id=6 op=UNLOAD Feb 8 23:40:13.775013 systemd[1]: Stopping systemd-resolved.service... Feb 8 23:40:13.781294 systemd-networkd[826]: eth0: DHCPv6 lease lost Feb 8 23:40:13.804000 audit: BPF prog-id=9 op=UNLOAD Feb 8 23:40:13.809000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.783346 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 8 23:40:13.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.783587 systemd[1]: Stopped systemd-resolved.service. Feb 8 23:40:13.816000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.790157 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:40:13.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.790267 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:40:13.797110 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 8 23:40:13.797147 systemd[1]: Closed systemd-networkd.socket. Feb 8 23:40:13.801786 systemd[1]: Stopping network-cleanup.service... Feb 8 23:40:13.805050 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 8 23:40:13.840000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.845000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.805117 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 8 23:40:13.847000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.810013 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:40:13.854000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.857000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:13.810065 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:40:13.814865 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 8 23:40:13.814911 systemd[1]: Stopped systemd-modules-load.service. Feb 8 23:40:13.817211 systemd[1]: Stopping systemd-udevd.service... Feb 8 23:40:13.820946 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 8 23:40:13.821070 systemd[1]: Stopped systemd-udevd.service. Feb 8 23:40:13.828302 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 8 23:40:13.828359 systemd[1]: Closed systemd-udevd-control.socket. Feb 8 23:40:13.834327 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 8 23:40:13.834431 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 8 23:40:13.838819 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 8 23:40:13.838869 systemd[1]: Stopped dracut-pre-udev.service. Feb 8 23:40:13.840950 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 8 23:40:13.840991 systemd[1]: Stopped dracut-cmdline.service. Feb 8 23:40:13.845685 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 8 23:40:13.845733 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 8 23:40:13.848518 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 8 23:40:13.852216 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 8 23:40:13.852290 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 8 23:40:13.912157 kernel: hv_netvsc 0022489b-226d-0022-489b-226d0022489b eth0: Data path switched from VF: enP64097s1 Feb 8 23:40:13.856619 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 8 23:40:13.856707 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 8 23:40:13.933000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:13.931191 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 8 23:40:13.931303 systemd[1]: Stopped network-cleanup.service. Feb 8 23:40:13.933917 systemd[1]: Reached target initrd-switch-root.target. Feb 8 23:40:13.943595 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:40:13.958050 systemd[1]: Switching root. Feb 8 23:40:13.984415 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 8 23:40:13.984477 systemd-journald[183]: Journal stopped Feb 8 23:40:29.030082 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:40:29.030108 kernel: SELinux: Class anon_inode not defined in policy. Feb 8 23:40:29.030121 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:40:29.030129 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:40:29.030139 kernel: SELinux: policy capability open_perms=1 Feb 8 23:40:29.030149 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:40:29.030159 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:40:29.030171 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:40:29.030180 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:40:29.030190 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:40:29.030199 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:40:29.030209 systemd[1]: Successfully loaded SELinux policy in 376.481ms. Feb 8 23:40:29.030221 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 37.365ms. 
Feb 8 23:40:29.030234 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:40:29.030257 systemd[1]: Detected virtualization microsoft. Feb 8 23:40:29.030268 systemd[1]: Detected architecture x86-64. Feb 8 23:40:29.030278 systemd[1]: Detected first boot. Feb 8 23:40:29.030288 systemd[1]: Hostname set to . Feb 8 23:40:29.030299 systemd[1]: Initializing machine ID from random generator. Feb 8 23:40:29.030313 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 8 23:40:29.030322 kernel: kauditd_printk_skb: 41 callbacks suppressed Feb 8 23:40:29.030334 kernel: audit: type=1400 audit(1707435618.833:89): avc: denied { associate } for pid=1060 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:40:29.030346 kernel: audit: type=1300 audit(1707435618.833:89): arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1043 pid=1060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:29.030357 kernel: audit: type=1327 audit(1707435618.833:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:40:29.030369 kernel: audit: type=1400 audit(1707435618.840:90): avc: denied { associate } for pid=1060 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:40:29.030381 kernel: audit: type=1300 audit(1707435618.840:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=1043 pid=1060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:29.030393 kernel: audit: type=1307 audit(1707435618.840:90): cwd="/" Feb 8 23:40:29.030402 kernel: audit: type=1302 audit(1707435618.840:90): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:29.030414 kernel: audit: type=1302 audit(1707435618.840:90): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:29.030423 kernel: audit: type=1327 audit(1707435618.840:90): 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:40:29.030437 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:40:29.030876 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:40:29.030892 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:40:29.030903 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:40:29.030912 kernel: audit: type=1334 audit(1707435628.497:91): prog-id=12 op=LOAD Feb 8 23:40:29.030921 kernel: audit: type=1334 audit(1707435628.497:92): prog-id=3 op=UNLOAD Feb 8 23:40:29.030929 kernel: audit: type=1334 audit(1707435628.508:93): prog-id=13 op=LOAD Feb 8 23:40:29.030937 kernel: audit: type=1334 audit(1707435628.513:94): prog-id=14 op=LOAD Feb 8 23:40:29.030949 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 8 23:40:29.030958 kernel: audit: type=1334 audit(1707435628.513:95): prog-id=4 op=UNLOAD Feb 8 23:40:29.030970 kernel: audit: type=1334 audit(1707435628.513:96): prog-id=5 op=UNLOAD Feb 8 23:40:29.030982 kernel: audit: type=1131 audit(1707435628.513:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.030992 systemd[1]: Stopped initrd-switch-root.service. Feb 8 23:40:29.031005 kernel: audit: type=1334 audit(1707435628.552:98): prog-id=12 op=UNLOAD Feb 8 23:40:29.031014 kernel: audit: type=1130 audit(1707435628.558:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.031026 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 8 23:40:29.031037 kernel: audit: type=1131 audit(1707435628.558:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.031046 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:40:29.031055 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:40:29.031068 systemd[1]: Created slice system-getty.slice. Feb 8 23:40:29.031077 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:40:29.031090 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:40:29.031100 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:40:29.031111 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 8 23:40:29.031124 systemd[1]: Created slice user.slice. Feb 8 23:40:29.031134 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:40:29.031146 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:40:29.031156 systemd[1]: Set up automount boot.automount. Feb 8 23:40:29.031167 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. 
Feb 8 23:40:29.031177 systemd[1]: Stopped target initrd-switch-root.target. Feb 8 23:40:29.031190 systemd[1]: Stopped target initrd-fs.target. Feb 8 23:40:29.031199 systemd[1]: Stopped target initrd-root-fs.target. Feb 8 23:40:29.031213 systemd[1]: Reached target integritysetup.target. Feb 8 23:40:29.031224 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:40:29.031235 systemd[1]: Reached target remote-fs.target. Feb 8 23:40:29.031255 systemd[1]: Reached target slices.target. Feb 8 23:40:29.031267 systemd[1]: Reached target swap.target. Feb 8 23:40:29.031278 systemd[1]: Reached target torcx.target. Feb 8 23:40:29.031290 systemd[1]: Reached target veritysetup.target. Feb 8 23:40:29.031301 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:40:29.031316 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:40:29.031325 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:40:29.031338 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:40:29.031351 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:40:29.031361 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:40:29.031376 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:40:29.031388 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:40:29.031398 systemd[1]: Mounting media.mount... Feb 8 23:40:29.031410 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:40:29.031420 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:40:29.031433 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 8 23:40:29.031443 systemd[1]: Mounting tmp.mount... Feb 8 23:40:29.031455 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:40:29.031467 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:40:29.031480 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:40:29.031492 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:40:29.031503 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:40:29.031514 systemd[1]: Starting modprobe@drm.service... Feb 8 23:40:29.031526 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:40:29.031538 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:40:29.031551 systemd[1]: Starting modprobe@loop.service... Feb 8 23:40:29.031561 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:40:29.031575 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 8 23:40:29.031586 systemd[1]: Stopped systemd-fsck-root.service. Feb 8 23:40:29.031597 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 8 23:40:29.031608 systemd[1]: Stopped systemd-fsck-usr.service. Feb 8 23:40:29.031619 systemd[1]: Stopped systemd-journald.service. Feb 8 23:40:29.031632 systemd[1]: Starting systemd-journald.service... Feb 8 23:40:29.031641 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:40:29.031654 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:40:29.031665 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:40:29.031678 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:40:29.031690 kernel: loop: module loaded Feb 8 23:40:29.031699 systemd[1]: verity-setup.service: Deactivated successfully. Feb 8 23:40:29.031712 systemd[1]: Stopped verity-setup.service. 
Feb 8 23:40:29.031722 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:40:29.031734 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:40:29.031746 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:40:29.031757 systemd[1]: Mounted media.mount. Feb 8 23:40:29.031775 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:40:29.031796 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:40:29.031814 systemd[1]: Mounted tmp.mount. Feb 8 23:40:29.031835 kernel: fuse: init (API version 7.34) Feb 8 23:40:29.031855 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:40:29.031874 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:40:29.031894 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:40:29.031917 systemd-journald[1159]: Journal started Feb 8 23:40:29.031997 systemd-journald[1159]: Runtime Journal (/run/log/journal/a9a0da1f5e9646d3ba2bdf249d58fefd) is 8.0M, max 159.0M, 151.0M free. Feb 8 23:40:16.560000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:40:17.400000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:40:17.417000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:40:17.417000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:40:17.417000 audit: BPF prog-id=10 op=LOAD Feb 8 23:40:17.417000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:40:17.417000 audit: BPF prog-id=11 op=LOAD Feb 8 23:40:17.417000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:40:18.833000 audit[1060]: AVC avc: denied { associate } for pid=1060 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:40:18.833000 audit[1060]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001078d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1043 pid=1060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:18.833000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:40:18.840000 audit[1060]: AVC avc: denied { associate } for pid=1060 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:40:18.840000 audit[1060]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001079a9 a2=1ed a3=0 items=2 ppid=1043 pid=1060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 
key=(null) Feb 8 23:40:18.840000 audit: CWD cwd="/" Feb 8 23:40:18.840000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:18.840000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:18.840000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:40:28.497000 audit: BPF prog-id=12 op=LOAD Feb 8 23:40:28.497000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:40:28.508000 audit: BPF prog-id=13 op=LOAD Feb 8 23:40:28.513000 audit: BPF prog-id=14 op=LOAD Feb 8 23:40:28.513000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:40:28.513000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:40:28.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:28.552000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:40:28.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:28.558000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:28.892000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:28.903000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:28.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:28.909000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:28.910000 audit: BPF prog-id=15 op=LOAD Feb 8 23:40:28.910000 audit: BPF prog-id=16 op=LOAD Feb 8 23:40:28.910000 audit: BPF prog-id=17 op=LOAD Feb 8 23:40:28.911000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:40:28.911000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:40:28.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:29.027000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:40:29.027000 audit[1159]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffd57ff1b40 a2=4000 a3=7ffd57ff1bdc items=0 ppid=1 pid=1159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:29.027000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:40:29.027000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:18.785143 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:40:28.495611 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:40:18.800165 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:40:28.514706 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 8 23:40:18.800191 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:40:18.800235 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:40:18.800271 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:40:18.800331 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:40:18.800350 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:40:18.800633 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:40:18.800686 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:40:18.800702 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:40:18.817345 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:40:18.817403 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=debug msg="new 
archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:40:18.817435 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:40:18.817453 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:40:18.817473 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:40:18.817489 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:18Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:40:27.235662 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:27Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:40:27.235911 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:27Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:40:27.236034 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:27Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:40:27.236726 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:27Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:40:27.236809 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:27Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:40:27.236869 /usr/lib/systemd/system-generators/torcx-generator[1060]: time="2024-02-08T23:40:27Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:40:29.036269 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:40:29.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:29.044116 systemd[1]: Started systemd-journald.service. Feb 8 23:40:29.044000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.045272 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:40:29.045420 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:40:29.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.047875 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:40:29.048045 systemd[1]: Finished modprobe@drm.service. Feb 8 23:40:29.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.050539 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:40:29.050685 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:40:29.052000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.053168 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:40:29.053363 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:40:29.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.055648 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:40:29.055790 systemd[1]: Finished modprobe@loop.service. Feb 8 23:40:29.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.060586 systemd[1]: Finished systemd-network-generator.service. 
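The torcx-generator trace above shows the whole addon flow: the vendor profile is selected, the docker archive is resolved from /usr/share/torcx/store under the reference com.coreos.cl, unpacked into /run/torcx/unpack, and its binaries and unit files are propagated before the state is sealed into /run/metadata/torcx. A minimal sketch of what a profile such as /usr/share/torcx/profiles/vendor.json pairs together (the manifest schema shown here is an assumption, kept to the name/reference pairing visible in the log, not a copy of the shipped file):

    {
      "kind": "profile-manifest-v0",
      "value": {
        "images": [
          { "name": "docker", "reference": "com.coreos.cl" }
        ]
      }
    }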
Feb 8 23:40:29.062000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.063324 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:40:29.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.065916 systemd[1]: Reached target network-pre.target. Feb 8 23:40:29.069292 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:40:29.073461 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:40:29.075904 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:40:29.078495 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:40:29.082022 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:40:29.084123 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:40:29.085405 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:40:29.087603 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:40:29.088860 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:40:29.095523 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:40:29.097000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.098241 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:40:29.100728 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:40:29.104589 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:40:29.141001 systemd-journald[1159]: Time spent on flushing to /var/log/journal/a9a0da1f5e9646d3ba2bdf249d58fefd is 27.631ms for 1198 entries. Feb 8 23:40:29.141001 systemd-journald[1159]: System Journal (/var/log/journal/a9a0da1f5e9646d3ba2bdf249d58fefd) is 8.0M, max 2.6G, 2.6G free. Feb 8 23:40:29.220465 systemd-journald[1159]: Received client request to flush runtime journal. Feb 8 23:40:29.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.198000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.152332 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:40:29.220899 udevadm[1183]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 8 23:40:29.154843 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:40:29.166979 systemd[1]: Finished systemd-udev-trigger.service. 
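The modprobe@configfs, modprobe@dm_mod, modprobe@drm, modprobe@efi_pstore, modprobe@fuse and modprobe@loop units that finish a little earlier are all instances of systemd's modprobe@.service template, which loads one kernel module per instance name early in sysinit. A rough sketch of that template, paraphrased from the stock unit rather than copied from this image:

    [Unit]
    Description=Load Kernel Module %i
    DefaultDependencies=no
    Before=sysinit.target
    ConditionCapability=CAP_SYS_MODULE

    [Service]
    Type=oneshot
    # the leading '-' tolerates modules that are missing or built in (e.g. loop, fuse)
    ExecStart=-/sbin/modprobe -abq %I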
Feb 8 23:40:29.170513 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:40:29.195437 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:40:29.221450 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:40:29.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:29.699517 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:40:29.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:30.553969 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:40:30.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:30.557000 audit: BPF prog-id=18 op=LOAD Feb 8 23:40:30.557000 audit: BPF prog-id=19 op=LOAD Feb 8 23:40:30.557000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:40:30.557000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:40:30.558146 systemd[1]: Starting systemd-udevd.service... Feb 8 23:40:30.576020 systemd-udevd[1186]: Using default interface naming scheme 'v252'. Feb 8 23:40:30.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:30.967000 audit: BPF prog-id=20 op=LOAD Feb 8 23:40:30.962261 systemd[1]: Started systemd-udevd.service. Feb 8 23:40:30.968431 systemd[1]: Starting systemd-networkd.service... Feb 8 23:40:31.003913 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 8 23:40:31.060280 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:40:31.071000 audit[1201]: AVC avc: denied { confidentiality } for pid=1201 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:40:31.087117 kernel: hv_vmbus: registering driver hv_balloon Feb 8 23:40:31.089053 kernel: hv_utils: Registering HyperV Utility Driver Feb 8 23:40:31.089107 kernel: hv_vmbus: registering driver hv_utils Feb 8 23:40:31.088000 audit: BPF prog-id=21 op=LOAD Feb 8 23:40:31.088000 audit: BPF prog-id=22 op=LOAD Feb 8 23:40:31.088000 audit: BPF prog-id=23 op=LOAD Feb 8 23:40:31.089943 systemd[1]: Starting systemd-userdbd.service... 
Feb 8 23:40:31.098275 kernel: hv_vmbus: registering driver hyperv_fb Feb 8 23:40:31.108027 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 8 23:40:31.108076 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 8 23:40:31.114035 kernel: Console: switching to colour dummy device 80x25 Feb 8 23:40:31.121467 kernel: Console: switching to colour frame buffer device 128x48 Feb 8 23:40:31.128295 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 8 23:40:31.071000 audit[1201]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55a3c5629540 a1=f884 a2=7f83fdb01bc5 a3=5 items=12 ppid=1186 pid=1201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:31.071000 audit: CWD cwd="/" Feb 8 23:40:31.071000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:31.071000 audit: PATH item=1 name=(null) inode=15698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:31.071000 audit: PATH item=2 name=(null) inode=15698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:31.071000 audit: PATH item=3 name=(null) inode=15699 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:31.071000 audit: PATH item=4 name=(null) inode=15698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:31.071000 audit: PATH item=5 name=(null) inode=15700 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:31.071000 audit: PATH item=6 name=(null) inode=15698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:31.071000 audit: PATH item=7 name=(null) inode=15701 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:31.071000 audit: PATH item=8 name=(null) inode=15698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:31.071000 audit: PATH item=9 name=(null) inode=15702 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:31.071000 audit: PATH item=10 name=(null) inode=15698 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:31.071000 audit: PATH item=11 name=(null) inode=15703 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:40:31.071000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 
23:40:31.154223 kernel: hv_utils: Heartbeat IC version 3.0 Feb 8 23:40:31.154306 kernel: hv_utils: Shutdown IC version 3.2 Feb 8 23:40:31.154332 kernel: hv_utils: TimeSync IC version 4.0 Feb 8 23:40:31.950284 systemd[1]: Started systemd-userdbd.service. Feb 8 23:40:31.952000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:32.152285 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 8 23:40:32.168195 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1194) Feb 8 23:40:32.220759 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 8 23:40:32.224288 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:40:32.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:32.229101 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:40:32.330693 systemd-networkd[1198]: lo: Link UP Feb 8 23:40:32.330704 systemd-networkd[1198]: lo: Gained carrier Feb 8 23:40:32.331389 systemd-networkd[1198]: Enumeration completed Feb 8 23:40:32.331526 systemd[1]: Started systemd-networkd.service. Feb 8 23:40:32.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:32.335986 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:40:32.451086 systemd-networkd[1198]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:40:32.505201 kernel: mlx5_core fa61:00:02.0 enP64097s1: Link up Feb 8 23:40:32.545209 kernel: hv_netvsc 0022489b-226d-0022-489b-226d0022489b eth0: Data path switched to VF: enP64097s1 Feb 8 23:40:32.546126 systemd-networkd[1198]: enP64097s1: Link UP Feb 8 23:40:32.546292 systemd-networkd[1198]: eth0: Link UP Feb 8 23:40:32.546304 systemd-networkd[1198]: eth0: Gained carrier Feb 8 23:40:32.551499 systemd-networkd[1198]: enP64097s1: Gained carrier Feb 8 23:40:32.590340 systemd-networkd[1198]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:40:32.694413 lvm[1263]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:40:32.720335 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:40:32.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:32.723117 systemd[1]: Reached target cryptsetup.target. Feb 8 23:40:32.726494 systemd[1]: Starting lvm2-activation.service... Feb 8 23:40:32.732877 lvm[1265]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:40:32.759219 systemd[1]: Finished lvm2-activation.service. Feb 8 23:40:32.761000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:32.761885 systemd[1]: Reached target local-fs-pre.target. 
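In the networkd block above, lo comes up first, then eth0 is matched against /usr/lib/systemd/network/zz-default.network and acquires 10.200.8.20/24 by DHCP while hv_netvsc switches the data path to the VF enP64097s1. A minimal sketch of a catch-all .network file of that kind (the actual contents of zz-default.network on this image are not reproduced here; this is an assumed illustration):

    [Match]
    # apply to any interface not claimed by a more specific .network file
    Name=*

    [Network]
    DHCP=yes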
Feb 8 23:40:32.764279 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:40:32.764312 systemd[1]: Reached target local-fs.target. Feb 8 23:40:32.766684 systemd[1]: Reached target machines.target. Feb 8 23:40:32.770196 systemd[1]: Starting ldconfig.service... Feb 8 23:40:32.772525 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:40:32.772633 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:40:32.773832 systemd[1]: Starting systemd-boot-update.service... Feb 8 23:40:32.777019 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:40:32.780823 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:40:32.783630 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:40:32.783722 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:40:32.784802 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:40:32.852994 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1267 (bootctl) Feb 8 23:40:32.854791 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:40:33.395393 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:40:33.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:33.880053 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:40:34.005768 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:40:34.006459 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:40:34.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:34.027306 systemd-networkd[1198]: eth0: Gained IPv6LL Feb 8 23:40:34.032061 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:40:34.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:34.053307 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:40:34.109490 systemd-tmpfiles[1270]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:40:34.387778 systemd-fsck[1275]: fsck.fat 4.2 (2021-01-31) Feb 8 23:40:34.387778 systemd-fsck[1275]: /dev/sda1: 789 files, 115332/258078 clusters Feb 8 23:40:34.390505 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. 
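The "Duplicate line for path" messages above are systemd-tmpfiles skipping tmpfiles.d entries that redeclare a path already claimed by an earlier fragment. Purely as an illustration of the d-type line format involved (an assumed example, not the shipped legacy.conf content):

    # type  path       mode  user  group  age
    d       /run/lock  0755  root  root   -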
Feb 8 23:40:34.398579 kernel: kauditd_printk_skb: 70 callbacks suppressed Feb 8 23:40:34.398663 kernel: audit: type=1130 audit(1707435634.393:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:34.393000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:34.395655 systemd[1]: Mounting boot.mount... Feb 8 23:40:34.419130 systemd[1]: Mounted boot.mount. Feb 8 23:40:34.435074 systemd[1]: Finished systemd-boot-update.service. Feb 8 23:40:34.437000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:34.450236 kernel: audit: type=1130 audit(1707435634.437:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:36.056715 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:40:36.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:36.062154 systemd[1]: Starting audit-rules.service... Feb 8 23:40:36.072316 kernel: audit: type=1130 audit(1707435636.060:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:36.075856 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:40:36.083000 audit: BPF prog-id=24 op=LOAD Feb 8 23:40:36.090220 kernel: audit: type=1334 audit(1707435636.083:157): prog-id=24 op=LOAD Feb 8 23:40:36.079689 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:40:36.084653 systemd[1]: Starting systemd-resolved.service... Feb 8 23:40:36.091000 audit: BPF prog-id=25 op=LOAD Feb 8 23:40:36.092296 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:40:36.096219 kernel: audit: type=1334 audit(1707435636.091:158): prog-id=25 op=LOAD Feb 8 23:40:36.098688 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:40:36.160000 audit[1287]: SYSTEM_BOOT pid=1287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:40:36.162712 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:40:36.179288 kernel: audit: type=1127 audit(1707435636.160:159): pid=1287 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:40:36.179378 kernel: audit: type=1130 audit(1707435636.176:160): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:40:36.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:36.209088 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:40:36.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:36.212002 systemd[1]: Reached target time-set.target. Feb 8 23:40:36.226648 kernel: audit: type=1130 audit(1707435636.211:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:36.230000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:36.227937 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:40:36.230910 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:40:36.243198 kernel: audit: type=1130 audit(1707435636.230:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:36.377349 systemd-resolved[1284]: Positive Trust Anchors: Feb 8 23:40:36.377365 systemd-resolved[1284]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:40:36.377405 systemd-resolved[1284]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:40:36.402275 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 8 23:40:36.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:36.419336 kernel: audit: type=1130 audit(1707435636.405:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:40:36.420866 systemd-timesyncd[1285]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). Feb 8 23:40:36.421389 systemd-timesyncd[1285]: Initial clock synchronization to Thu 2024-02-08 23:40:36.423979 UTC. 
Feb 8 23:40:36.491000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:40:36.491000 audit[1302]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fffec9c2910 a2=420 a3=0 items=0 ppid=1281 pid=1302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:40:36.491000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:40:36.491873 augenrules[1302]: No rules Feb 8 23:40:36.492452 systemd[1]: Finished audit-rules.service. Feb 8 23:40:36.521330 systemd-resolved[1284]: Using system hostname 'ci-3510.3.2-a-65dd02f9dc'. Feb 8 23:40:36.523549 systemd[1]: Started systemd-resolved.service. Feb 8 23:40:36.526606 systemd[1]: Reached target network.target. Feb 8 23:40:36.529047 systemd[1]: Reached target network-online.target. Feb 8 23:40:36.531638 systemd[1]: Reached target nss-lookup.target. Feb 8 23:40:41.824464 ldconfig[1266]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:40:41.840612 systemd[1]: Finished ldconfig.service. Feb 8 23:40:41.844693 systemd[1]: Starting systemd-update-done.service... Feb 8 23:40:41.872295 systemd[1]: Finished systemd-update-done.service. Feb 8 23:40:41.875691 systemd[1]: Reached target sysinit.target. Feb 8 23:40:41.878681 systemd[1]: Started motdgen.path. Feb 8 23:40:41.881507 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:40:41.884932 systemd[1]: Started logrotate.timer. Feb 8 23:40:41.886915 systemd[1]: Started mdadm.timer. Feb 8 23:40:41.888870 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:40:41.891292 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:40:41.891337 systemd[1]: Reached target paths.target. Feb 8 23:40:41.893563 systemd[1]: Reached target timers.target. Feb 8 23:40:41.896192 systemd[1]: Listening on dbus.socket. Feb 8 23:40:41.899288 systemd[1]: Starting docker.socket... Feb 8 23:40:41.904124 systemd[1]: Listening on sshd.socket. Feb 8 23:40:41.906439 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:40:41.906882 systemd[1]: Listening on docker.socket. Feb 8 23:40:41.908888 systemd[1]: Reached target sockets.target. Feb 8 23:40:41.911021 systemd[1]: Reached target basic.target. Feb 8 23:40:41.912981 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:40:41.913002 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:40:41.914047 systemd[1]: Starting containerd.service... Feb 8 23:40:41.917243 systemd[1]: Starting dbus.service... Feb 8 23:40:41.919804 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:40:41.923234 systemd[1]: Starting extend-filesystems.service... Feb 8 23:40:41.925351 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:40:41.926810 systemd[1]: Starting motdgen.service... Feb 8 23:40:41.930439 systemd[1]: Started nvidia.service. 
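Near the top of this stretch, audit-rules.service loads /etc/audit/audit.rules (the PROCTITLE hex decodes to "/sbin/auditctl -R /etc/audit/audit.rules") and augenrules reports "No rules", so no watches end up installed. Purely as an illustration of the auditctl rule syntax such a file would carry, with a hypothetical fragment name and watch target:

    ## /etc/audit/rules.d/10-example.rules (hypothetical)
    # flush existing rules, then size the kernel backlog
    -D
    -b 8192
    # watch a file for writes and attribute changes, tagged with a search key
    -w /etc/ssh/sshd_config -p wa -k sshd_config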
Feb 8 23:40:41.933755 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:40:41.937086 systemd[1]: Starting prepare-critools.service... Feb 8 23:40:41.939956 systemd[1]: Starting prepare-helm.service... Feb 8 23:40:41.943117 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:40:41.946423 systemd[1]: Starting sshd-keygen.service... Feb 8 23:40:41.951632 systemd[1]: Starting systemd-logind.service... Feb 8 23:40:41.953607 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:40:41.953691 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:40:41.954262 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 8 23:40:41.955070 systemd[1]: Starting update-engine.service... Feb 8 23:40:41.958287 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:40:41.965774 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:40:41.966032 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 8 23:40:42.001722 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:40:42.001954 systemd[1]: Finished motdgen.service. Feb 8 23:40:42.069205 jq[1329]: true Feb 8 23:40:42.069571 jq[1312]: false Feb 8 23:40:42.070095 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:40:42.070351 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:40:42.073250 extend-filesystems[1313]: Found sda Feb 8 23:40:42.073250 extend-filesystems[1313]: Found sda1 Feb 8 23:40:42.073250 extend-filesystems[1313]: Found sda2 Feb 8 23:40:42.073250 extend-filesystems[1313]: Found sda3 Feb 8 23:40:42.073250 extend-filesystems[1313]: Found usr Feb 8 23:40:42.073250 extend-filesystems[1313]: Found sda4 Feb 8 23:40:42.073250 extend-filesystems[1313]: Found sda6 Feb 8 23:40:42.073250 extend-filesystems[1313]: Found sda7 Feb 8 23:40:42.073250 extend-filesystems[1313]: Found sda9 Feb 8 23:40:42.073250 extend-filesystems[1313]: Checking size of /dev/sda9 Feb 8 23:40:42.104262 jq[1342]: true Feb 8 23:40:42.120198 env[1338]: time="2024-02-08T23:40:42.119999169Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:40:42.142165 tar[1332]: ./ Feb 8 23:40:42.142165 tar[1332]: ./macvlan Feb 8 23:40:42.142969 tar[1333]: crictl Feb 8 23:40:42.146681 tar[1334]: linux-amd64/helm Feb 8 23:40:42.190161 extend-filesystems[1313]: Old size kept for /dev/sda9 Feb 8 23:40:42.209203 extend-filesystems[1313]: Found sr0 Feb 8 23:40:42.195121 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:40:42.195340 systemd[1]: Finished extend-filesystems.service. Feb 8 23:40:42.264516 systemd-logind[1326]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:40:42.265072 systemd-logind[1326]: New seat seat0. Feb 8 23:40:42.268877 env[1338]: time="2024-02-08T23:40:42.268829092Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 8 23:40:42.269150 env[1338]: time="2024-02-08T23:40:42.269128335Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." 
type=io.containerd.snapshotter.v1 Feb 8 23:40:42.270726 env[1338]: time="2024-02-08T23:40:42.270685463Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:40:42.272143 env[1338]: time="2024-02-08T23:40:42.272113871Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:40:42.273547 env[1338]: time="2024-02-08T23:40:42.273514275Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:40:42.273655 env[1338]: time="2024-02-08T23:40:42.273636193Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:40:42.278355 env[1338]: time="2024-02-08T23:40:42.278243566Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:40:42.278562 env[1338]: time="2024-02-08T23:40:42.278542009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:40:42.279342 env[1338]: time="2024-02-08T23:40:42.279316222Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:40:42.280549 env[1338]: time="2024-02-08T23:40:42.280524199Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:40:42.282454 env[1338]: time="2024-02-08T23:40:42.282425676Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:40:42.282543 env[1338]: time="2024-02-08T23:40:42.282528691Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 8 23:40:42.282674 env[1338]: time="2024-02-08T23:40:42.282654709Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:40:42.282764 env[1338]: time="2024-02-08T23:40:42.282749023Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:40:42.299217 env[1338]: time="2024-02-08T23:40:42.299150117Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:40:42.299363 env[1338]: time="2024-02-08T23:40:42.299343545Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:40:42.299448 env[1338]: time="2024-02-08T23:40:42.299433458Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:40:42.299570 env[1338]: time="2024-02-08T23:40:42.299555976Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:40:42.299691 env[1338]: time="2024-02-08T23:40:42.299675794Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." 
type=io.containerd.service.v1 Feb 8 23:40:42.299769 env[1338]: time="2024-02-08T23:40:42.299754905Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:40:42.299835 env[1338]: time="2024-02-08T23:40:42.299821915Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:40:42.299899 env[1338]: time="2024-02-08T23:40:42.299888125Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 8 23:40:42.299959 env[1338]: time="2024-02-08T23:40:42.299947133Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 8 23:40:42.300043 env[1338]: time="2024-02-08T23:40:42.300028345Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:40:42.300111 env[1338]: time="2024-02-08T23:40:42.300098155Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:40:42.300214 env[1338]: time="2024-02-08T23:40:42.300167165Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 8 23:40:42.300559 env[1338]: time="2024-02-08T23:40:42.300538420Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:40:42.300729 env[1338]: time="2024-02-08T23:40:42.300713545Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:40:42.301200 env[1338]: time="2024-02-08T23:40:42.301163411Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:40:42.302405 env[1338]: time="2024-02-08T23:40:42.301353039Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:40:42.302405 env[1338]: time="2024-02-08T23:40:42.301378542Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:40:42.302405 env[1338]: time="2024-02-08T23:40:42.301450453Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 8 23:40:42.302405 env[1338]: time="2024-02-08T23:40:42.301467655Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:40:42.302405 env[1338]: time="2024-02-08T23:40:42.301483658Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 8 23:40:42.302405 env[1338]: time="2024-02-08T23:40:42.301572971Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:40:42.302405 env[1338]: time="2024-02-08T23:40:42.301592374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:40:42.302405 env[1338]: time="2024-02-08T23:40:42.301610276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:40:42.302405 env[1338]: time="2024-02-08T23:40:42.301626278Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:40:42.302405 env[1338]: time="2024-02-08T23:40:42.301644381Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." 
type=io.containerd.grpc.v1 Feb 8 23:40:42.302405 env[1338]: time="2024-02-08T23:40:42.301664284Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:40:42.302977 env[1338]: time="2024-02-08T23:40:42.302953472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:40:42.303070 env[1338]: time="2024-02-08T23:40:42.303053987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:40:42.303147 env[1338]: time="2024-02-08T23:40:42.303132798Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:40:42.303393 env[1338]: time="2024-02-08T23:40:42.303371733Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:40:42.303496 env[1338]: time="2024-02-08T23:40:42.303475648Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:40:42.303569 env[1338]: time="2024-02-08T23:40:42.303555260Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:40:42.309552 env[1338]: time="2024-02-08T23:40:42.303618569Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:40:42.309552 env[1338]: time="2024-02-08T23:40:42.303669577Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:40:42.309642 env[1338]: time="2024-02-08T23:40:42.303964420Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false 
EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:40:42.309642 env[1338]: time="2024-02-08T23:40:42.304042231Z" level=info msg="Connect containerd service" Feb 8 23:40:42.309642 env[1338]: time="2024-02-08T23:40:42.304087538Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:40:42.346065 env[1338]: time="2024-02-08T23:40:42.314848908Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:40:42.346065 env[1338]: time="2024-02-08T23:40:42.315006931Z" level=info msg="Start subscribing containerd event" Feb 8 23:40:42.346065 env[1338]: time="2024-02-08T23:40:42.315076342Z" level=info msg="Start recovering state" Feb 8 23:40:42.346065 env[1338]: time="2024-02-08T23:40:42.315158053Z" level=info msg="Start event monitor" Feb 8 23:40:42.346065 env[1338]: time="2024-02-08T23:40:42.315212561Z" level=info msg="Start snapshots syncer" Feb 8 23:40:42.346065 env[1338]: time="2024-02-08T23:40:42.315231964Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:40:42.346065 env[1338]: time="2024-02-08T23:40:42.315246166Z" level=info msg="Start streaming server" Feb 8 23:40:42.346065 env[1338]: time="2024-02-08T23:40:42.315783545Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:40:42.346065 env[1338]: time="2024-02-08T23:40:42.315915464Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 8 23:40:42.327260 dbus-daemon[1311]: [system] SELinux support is enabled Feb 8 23:40:42.346699 bash[1371]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:40:42.316072 systemd[1]: Started containerd.service. Feb 8 23:40:42.346879 tar[1332]: ./static Feb 8 23:40:42.333982 dbus-daemon[1311]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 8 23:40:42.325573 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:40:42.328401 systemd[1]: Started dbus.service. Feb 8 23:40:42.333225 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:40:42.333255 systemd[1]: Reached target system-config.target. Feb 8 23:40:42.335916 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:40:42.335939 systemd[1]: Reached target user-config.target. Feb 8 23:40:42.338587 systemd[1]: Started systemd-logind.service. Feb 8 23:40:42.349992 env[1338]: time="2024-02-08T23:40:42.349851217Z" level=info msg="containerd successfully booted in 0.235204s" Feb 8 23:40:42.437808 tar[1332]: ./vlan Feb 8 23:40:42.445129 systemd[1]: nvidia.service: Deactivated successfully. 
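The CRI plugin configuration dumped above (overlayfs snapshotter, runc via io.containerd.runc.v2 with SystemdCgroup:true, CNI under /opt/cni/bin and /etc/cni/net.d, sandbox image registry.k8s.io/pause:3.6) maps onto config.toml stanzas along the following lines; this is a sketch of the equivalent settings, not the file shipped on the image:

    version = 2

    [plugins."io.containerd.grpc.v1.cri"]
      sandbox_image = "registry.k8s.io/pause:3.6"

      [plugins."io.containerd.grpc.v1.cri".containerd]
        snapshotter = "overlayfs"
        default_runtime_name = "runc"

        [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
          runtime_type = "io.containerd.runc.v2"

          [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
            SystemdCgroup = true

      [plugins."io.containerd.grpc.v1.cri".cni]
        bin_dir = "/opt/cni/bin"
        conf_dir = "/etc/cni/net.d"

The "failed to load cni during init" warning in the same block simply reflects that /etc/cni/net.d holds no network config yet at this point in boot.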
Feb 8 23:40:42.560100 tar[1332]: ./portmap Feb 8 23:40:42.679684 tar[1332]: ./host-local Feb 8 23:40:42.779367 tar[1332]: ./vrf Feb 8 23:40:42.855163 tar[1332]: ./bridge Feb 8 23:40:42.936913 tar[1332]: ./tuning Feb 8 23:40:43.008390 update_engine[1327]: I0208 23:40:43.007596 1327 main.cc:92] Flatcar Update Engine starting Feb 8 23:40:43.011739 tar[1332]: ./firewall Feb 8 23:40:43.095005 systemd[1]: Started update-engine.service. Feb 8 23:40:43.100715 systemd[1]: Started locksmithd.service. Feb 8 23:40:43.104339 update_engine[1327]: I0208 23:40:43.104297 1327 update_check_scheduler.cc:74] Next update check in 11m21s Feb 8 23:40:43.122300 tar[1332]: ./host-device Feb 8 23:40:43.140985 systemd[1]: Finished prepare-critools.service. Feb 8 23:40:43.177325 tar[1332]: ./sbr Feb 8 23:40:43.228193 tar[1332]: ./loopback Feb 8 23:40:43.300874 tar[1332]: ./dhcp Feb 8 23:40:43.348929 tar[1334]: linux-amd64/LICENSE Feb 8 23:40:43.349453 tar[1334]: linux-amd64/README.md Feb 8 23:40:43.356599 systemd[1]: Finished prepare-helm.service. Feb 8 23:40:43.434273 tar[1332]: ./ptp Feb 8 23:40:43.476780 tar[1332]: ./ipvlan Feb 8 23:40:43.519152 tar[1332]: ./bandwidth Feb 8 23:40:43.660755 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:40:43.775124 sshd_keygen[1335]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:40:43.795024 systemd[1]: Finished sshd-keygen.service. Feb 8 23:40:43.799615 systemd[1]: Starting issuegen.service... Feb 8 23:40:43.803847 systemd[1]: Started waagent.service. Feb 8 23:40:43.806947 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:40:43.807344 systemd[1]: Finished issuegen.service. Feb 8 23:40:43.811012 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:40:43.817380 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:40:43.821296 systemd[1]: Started getty@tty1.service. Feb 8 23:40:43.825260 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:40:43.828034 systemd[1]: Reached target getty.target. Feb 8 23:40:43.830306 systemd[1]: Reached target multi-user.target. Feb 8 23:40:43.833877 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:40:43.841039 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:40:43.841227 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:40:43.844481 systemd[1]: Startup finished in 1.017s (firmware) + 27.799s (loader) + 918ms (kernel) + 17.200s (initrd) + 27.144s (userspace) = 1min 14.080s. Feb 8 23:40:44.633439 login[1446]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:40:44.636810 login[1447]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:40:44.986231 systemd[1]: Created slice user-500.slice. Feb 8 23:40:44.987693 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:40:44.995238 systemd-logind[1326]: New session 2 of user core. Feb 8 23:40:44.998720 systemd-logind[1326]: New session 1 of user core. Feb 8 23:40:45.002394 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:40:45.004041 systemd[1]: Starting user@500.service... Feb 8 23:40:45.007285 (systemd)[1453]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:40:45.183239 systemd[1453]: Queued start job for default target default.target. Feb 8 23:40:45.183829 systemd[1453]: Reached target paths.target. Feb 8 23:40:45.183856 systemd[1453]: Reached target sockets.target. Feb 8 23:40:45.183872 systemd[1453]: Reached target timers.target. 
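The "Startup finished in 1.017s (firmware) + 27.799s (loader) + 918ms (kernel) + 17.200s (initrd) + 27.144s (userspace) = 1min 14.080s" entry above reports rounded per-stage times; a quick check shows they sum to within a couple of milliseconds of the stated total:

    stages = {"firmware": 1.017, "loader": 27.799, "kernel": 0.918,
              "initrd": 17.200, "userspace": 27.144}
    total = sum(stages.values())
    print(f"{total:.3f}s total ({int(total // 60)}min {total % 60:.3f}s)")
    # -> 74.078s total (1min 14.078s), vs. the reported 1min 14.080s;
    #    the 2 ms gap is rounding in the per-stage figures.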
Feb 8 23:40:45.183889 systemd[1453]: Reached target basic.target. Feb 8 23:40:45.184023 systemd[1]: Started user@500.service. Feb 8 23:40:45.185238 systemd[1]: Started session-1.scope. Feb 8 23:40:45.185979 systemd[1]: Started session-2.scope. Feb 8 23:40:45.187439 systemd[1453]: Reached target default.target. Feb 8 23:40:45.187645 systemd[1453]: Startup finished in 174ms. Feb 8 23:40:45.662649 locksmithd[1428]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:40:51.901631 waagent[1442]: 2024-02-08T23:40:51.901511Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 8 23:40:51.914865 waagent[1442]: 2024-02-08T23:40:51.902887Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 8 23:40:51.914865 waagent[1442]: 2024-02-08T23:40:51.903949Z INFO Daemon Daemon Python: 3.9.16 Feb 8 23:40:51.914865 waagent[1442]: 2024-02-08T23:40:51.905106Z INFO Daemon Daemon Run daemon Feb 8 23:40:51.914865 waagent[1442]: 2024-02-08T23:40:51.905980Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 8 23:40:51.919017 waagent[1442]: 2024-02-08T23:40:51.918907Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 8 23:40:51.926193 waagent[1442]: 2024-02-08T23:40:51.926072Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:40:51.957210 waagent[1442]: 2024-02-08T23:40:51.926583Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:40:51.957210 waagent[1442]: 2024-02-08T23:40:51.927543Z INFO Daemon Daemon Using waagent for provisioning Feb 8 23:40:51.957210 waagent[1442]: 2024-02-08T23:40:51.928946Z INFO Daemon Daemon Activate resource disk Feb 8 23:40:51.957210 waagent[1442]: 2024-02-08T23:40:51.929712Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 8 23:40:51.957210 waagent[1442]: 2024-02-08T23:40:51.937499Z INFO Daemon Daemon Found device: None Feb 8 23:40:51.957210 waagent[1442]: 2024-02-08T23:40:51.938318Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 8 23:40:51.957210 waagent[1442]: 2024-02-08T23:40:51.939159Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 8 23:40:51.957210 waagent[1442]: 2024-02-08T23:40:51.940901Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:40:51.957210 waagent[1442]: 2024-02-08T23:40:51.941951Z INFO Daemon Daemon Running default provisioning handler Feb 8 23:40:51.958974 waagent[1442]: 2024-02-08T23:40:51.958838Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 8 23:40:51.966032 waagent[1442]: 2024-02-08T23:40:51.965912Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:40:51.974524 waagent[1442]: 2024-02-08T23:40:51.966349Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:40:51.974524 waagent[1442]: 2024-02-08T23:40:51.967705Z INFO Daemon Daemon Copying ovf-env.xml Feb 8 23:40:51.991158 waagent[1442]: 2024-02-08T23:40:51.990578Z INFO Daemon Daemon Successfully mounted dvd Feb 8 23:40:52.105690 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 8 23:40:52.110987 waagent[1442]: 2024-02-08T23:40:52.110849Z INFO Daemon Daemon Detect protocol endpoint Feb 8 23:40:52.126484 waagent[1442]: 2024-02-08T23:40:52.111445Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:40:52.126484 waagent[1442]: 2024-02-08T23:40:52.112632Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler Feb 8 23:40:52.126484 waagent[1442]: 2024-02-08T23:40:52.113598Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 8 23:40:52.126484 waagent[1442]: 2024-02-08T23:40:52.114763Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 8 23:40:52.126484 waagent[1442]: 2024-02-08T23:40:52.115550Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 8 23:40:52.225065 waagent[1442]: 2024-02-08T23:40:52.224910Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 8 23:40:52.233884 waagent[1442]: 2024-02-08T23:40:52.225890Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 8 23:40:52.233884 waagent[1442]: 2024-02-08T23:40:52.226946Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 8 23:40:52.875169 waagent[1442]: 2024-02-08T23:40:52.875010Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 8 23:40:52.888103 waagent[1442]: 2024-02-08T23:40:52.888016Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 8 23:40:52.891479 waagent[1442]: 2024-02-08T23:40:52.891416Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 8 23:40:52.971498 waagent[1442]: 2024-02-08T23:40:52.971373Z INFO Daemon Daemon Found private key matching thumbprint 1B85D7F09F539B70265A6F8F16560228817F4A6F Feb 8 23:40:52.983670 waagent[1442]: 2024-02-08T23:40:52.971997Z INFO Daemon Daemon Certificate with thumbprint 56EA11EEE05D77C2F16CE143414DC1663BA35ECB has no matching private key. Feb 8 23:40:52.983670 waagent[1442]: 2024-02-08T23:40:52.973196Z INFO Daemon Daemon Fetch goal state completed Feb 8 23:40:53.024287 waagent[1442]: 2024-02-08T23:40:53.024169Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: f35f661f-277c-4f14-a16d-f4e2f2340365 New eTag: 16830165379320145836] Feb 8 23:40:53.032742 waagent[1442]: 2024-02-08T23:40:53.025354Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:40:53.036414 waagent[1442]: 2024-02-08T23:40:53.036359Z INFO Daemon Daemon Starting provisioning Feb 8 23:40:53.043403 waagent[1442]: 2024-02-08T23:40:53.036655Z INFO Daemon Daemon Handle ovf-env.xml. 
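The endpoint-detection entries above (route test to 168.63.129.16, "Wire server endpoint:168.63.129.16") are the agent locating the Azure WireServer. A rough sketch of the same reachability check from Python; the ?comp=versions path and plain-HTTP assumption are taken as given here rather than from this log, so treat it as illustrative only:

    import urllib.request

    WIRESERVER = "168.63.129.16"   # well-known Azure WireServer address
    try:
        # One way to confirm the endpoint answers: ask it for its API versions.
        with urllib.request.urlopen(f"http://{WIRESERVER}/?comp=versions",
                                    timeout=5) as resp:
            print("WireServer reachable:", resp.status)
    except OSError as exc:
        print("WireServer not reachable:", exc)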
Feb 8 23:40:53.043403 waagent[1442]: 2024-02-08T23:40:53.037885Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-65dd02f9dc] Feb 8 23:40:53.057244 waagent[1442]: 2024-02-08T23:40:53.057119Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-65dd02f9dc] Feb 8 23:40:53.064581 waagent[1442]: 2024-02-08T23:40:53.057785Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 8 23:40:53.064581 waagent[1442]: 2024-02-08T23:40:53.058981Z INFO Daemon Daemon Primary interface is [eth0] Feb 8 23:40:53.072898 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 8 23:40:53.073167 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 8 23:40:53.073259 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 8 23:40:53.073585 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:40:53.077223 systemd-networkd[1198]: eth0: DHCPv6 lease lost Feb 8 23:40:53.078693 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:40:53.078904 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:40:53.081531 systemd[1]: Starting systemd-networkd.service... Feb 8 23:40:53.112468 systemd-networkd[1493]: enP64097s1: Link UP Feb 8 23:40:53.112477 systemd-networkd[1493]: enP64097s1: Gained carrier Feb 8 23:40:53.113702 systemd-networkd[1493]: eth0: Link UP Feb 8 23:40:53.113710 systemd-networkd[1493]: eth0: Gained carrier Feb 8 23:40:53.114112 systemd-networkd[1493]: lo: Link UP Feb 8 23:40:53.114120 systemd-networkd[1493]: lo: Gained carrier Feb 8 23:40:53.114447 systemd-networkd[1493]: eth0: Gained IPv6LL Feb 8 23:40:53.115472 systemd-networkd[1493]: Enumeration completed Feb 8 23:40:53.115557 systemd[1]: Started systemd-networkd.service. Feb 8 23:40:53.117526 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:40:53.127413 waagent[1442]: 2024-02-08T23:40:53.119845Z INFO Daemon Daemon Create user account if not exists Feb 8 23:40:53.127413 waagent[1442]: 2024-02-08T23:40:53.120644Z INFO Daemon Daemon User core already exists, skip useradd Feb 8 23:40:53.127413 waagent[1442]: 2024-02-08T23:40:53.121655Z INFO Daemon Daemon Configure sudoer Feb 8 23:40:53.127413 waagent[1442]: 2024-02-08T23:40:53.123401Z INFO Daemon Daemon Configure sshd Feb 8 23:40:53.127413 waagent[1442]: 2024-02-08T23:40:53.124410Z INFO Daemon Daemon Deploy ssh public key. Feb 8 23:40:53.128140 systemd-networkd[1493]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:40:53.169521 waagent[1442]: 2024-02-08T23:40:53.169417Z INFO Daemon Daemon Decode custom data Feb 8 23:40:53.174422 waagent[1442]: 2024-02-08T23:40:53.170016Z INFO Daemon Daemon Save custom data Feb 8 23:40:53.184289 systemd-networkd[1493]: eth0: DHCPv4 address 10.200.8.20/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:40:53.188094 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:41:20.021656 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 8 23:41:23.453305 waagent[1442]: 2024-02-08T23:41:23.453199Z INFO Daemon Daemon Provisioning complete Feb 8 23:41:23.469014 waagent[1442]: 2024-02-08T23:41:23.468940Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 8 23:41:23.472462 waagent[1442]: 2024-02-08T23:41:23.472393Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
Feb 8 23:41:23.478434 waagent[1442]: 2024-02-08T23:41:23.478371Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 8 23:41:23.740859 waagent[1502]: 2024-02-08T23:41:23.740679Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 8 23:41:23.741590 waagent[1502]: 2024-02-08T23:41:23.741520Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:41:23.741733 waagent[1502]: 2024-02-08T23:41:23.741680Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:41:23.753028 waagent[1502]: 2024-02-08T23:41:23.752951Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 8 23:41:23.753196 waagent[1502]: 2024-02-08T23:41:23.753128Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 8 23:41:23.813876 waagent[1502]: 2024-02-08T23:41:23.813751Z INFO ExtHandler ExtHandler Found private key matching thumbprint 1B85D7F09F539B70265A6F8F16560228817F4A6F Feb 8 23:41:23.814105 waagent[1502]: 2024-02-08T23:41:23.814043Z INFO ExtHandler ExtHandler Certificate with thumbprint 56EA11EEE05D77C2F16CE143414DC1663BA35ECB has no matching private key. Feb 8 23:41:23.814362 waagent[1502]: 2024-02-08T23:41:23.814310Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 8 23:41:23.833470 waagent[1502]: 2024-02-08T23:41:23.833405Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 803e7102-2737-488c-a5ea-abed115c2bb3 New eTag: 16830165379320145836] Feb 8 23:41:23.834045 waagent[1502]: 2024-02-08T23:41:23.833985Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:41:23.919489 waagent[1502]: 2024-02-08T23:41:23.919318Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:41:23.950993 waagent[1502]: 2024-02-08T23:41:23.950891Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1502 Feb 8 23:41:23.954545 waagent[1502]: 2024-02-08T23:41:23.954470Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:41:23.955809 waagent[1502]: 2024-02-08T23:41:23.955750Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:41:24.057749 waagent[1502]: 2024-02-08T23:41:24.057588Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:41:24.058235 waagent[1502]: 2024-02-08T23:41:24.058132Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:41:24.066893 waagent[1502]: 2024-02-08T23:41:24.066833Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 8 23:41:24.067418 waagent[1502]: 2024-02-08T23:41:24.067356Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:41:24.068523 waagent[1502]: 2024-02-08T23:41:24.068458Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 8 23:41:24.069777 waagent[1502]: 2024-02-08T23:41:24.069717Z INFO ExtHandler ExtHandler Starting env monitor service. 
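The goal-state entries above match certificates by thumbprint (1B85D7F0... has a private key, 56EA11EE... does not). Those 40-hex-digit thumbprints are consistent with a SHA-1 digest of the DER-encoded certificate; a small sketch of computing one, assuming a PEM file on disk (the path is hypothetical):

    import hashlib, ssl

    pem_path = "/var/lib/waagent/example.crt"   # hypothetical path
    der = ssl.PEM_cert_to_DER_cert(open(pem_path).read())
    thumbprint = hashlib.sha1(der).hexdigest().upper()
    print(thumbprint)   # 40 hex chars, e.g. 1B85D7F09F539B70265A6F8F16560228817F4A6F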
Feb 8 23:41:24.070197 waagent[1502]: 2024-02-08T23:41:24.070129Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:41:24.070359 waagent[1502]: 2024-02-08T23:41:24.070310Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:41:24.070875 waagent[1502]: 2024-02-08T23:41:24.070818Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 8 23:41:24.071156 waagent[1502]: 2024-02-08T23:41:24.071100Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:41:24.071156 waagent[1502]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:41:24.071156 waagent[1502]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:41:24.071156 waagent[1502]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:41:24.071156 waagent[1502]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:41:24.071156 waagent[1502]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:41:24.071156 waagent[1502]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:41:24.074340 waagent[1502]: 2024-02-08T23:41:24.074123Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:41:24.074474 waagent[1502]: 2024-02-08T23:41:24.074409Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:41:24.075436 waagent[1502]: 2024-02-08T23:41:24.075377Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:41:24.076080 waagent[1502]: 2024-02-08T23:41:24.076020Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:41:24.076239 waagent[1502]: 2024-02-08T23:41:24.076190Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:41:24.076377 waagent[1502]: 2024-02-08T23:41:24.076334Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:41:24.077693 waagent[1502]: 2024-02-08T23:41:24.077638Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:41:24.077850 waagent[1502]: 2024-02-08T23:41:24.077799Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:41:24.079043 waagent[1502]: 2024-02-08T23:41:24.078979Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:41:24.079230 waagent[1502]: 2024-02-08T23:41:24.079156Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 8 23:41:24.079487 waagent[1502]: 2024-02-08T23:41:24.079437Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:41:24.092585 waagent[1502]: 2024-02-08T23:41:24.092531Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 8 23:41:24.093320 waagent[1502]: 2024-02-08T23:41:24.093273Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:41:24.094220 waagent[1502]: 2024-02-08T23:41:24.094152Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 8 23:41:24.121976 waagent[1502]: 2024-02-08T23:41:24.121851Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1493' Feb 8 23:41:24.141849 waagent[1502]: 2024-02-08T23:41:24.141757Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
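The routing table the MonitorHandler dumps above is read straight from /proc/net/route, where destination and gateway are little-endian hex. Decoding the first eth0 row (destination 00000000, gateway 0108C80A) gives the default route via 10.200.8.1, matching the DHCP lease reported earlier in the log. A sketch of that decoding:

    import socket, struct

    def hex_to_ip(h: str) -> str:
        # /proc/net/route stores addresses as little-endian hex
        return socket.inet_ntoa(struct.pack("<L", int(h, 16)))

    print(hex_to_ip("0108C80A"))  # 10.200.8.1  (the default gateway above)
    print(hex_to_ip("0008C80A"))  # 10.200.8.0  (the local subnet route)

    # Reading the live table, the default route is the row whose destination is 0:
    with open("/proc/net/route") as f:
        next(f)  # skip header
        for line in f:
            iface, dest, gw, *_ = line.split()
            if dest == "00000000":
                print(f"default via {hex_to_ip(gw)} dev {iface}")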
Feb 8 23:41:24.275842 waagent[1502]: 2024-02-08T23:41:24.275714Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:41:24.275842 waagent[1502]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:41:24.275842 waagent[1502]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:41:24.275842 waagent[1502]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9b:22:6d brd ff:ff:ff:ff:ff:ff Feb 8 23:41:24.275842 waagent[1502]: 3: enP64097s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9b:22:6d brd ff:ff:ff:ff:ff:ff\ altname enP64097p0s2 Feb 8 23:41:24.275842 waagent[1502]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:41:24.275842 waagent[1502]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:41:24.275842 waagent[1502]: 2: eth0 inet 10.200.8.20/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:41:24.275842 waagent[1502]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:41:24.275842 waagent[1502]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:41:24.275842 waagent[1502]: 2: eth0 inet6 fe80::222:48ff:fe9b:226d/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:41:24.446863 waagent[1502]: 2024-02-08T23:41:24.446717Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Feb 8 23:41:24.450520 waagent[1502]: 2024-02-08T23:41:24.450409Z INFO EnvHandler ExtHandler Firewall rules: Feb 8 23:41:24.450520 waagent[1502]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:41:24.450520 waagent[1502]: pkts bytes target prot opt in out source destination Feb 8 23:41:24.450520 waagent[1502]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:41:24.450520 waagent[1502]: pkts bytes target prot opt in out source destination Feb 8 23:41:24.450520 waagent[1502]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:41:24.450520 waagent[1502]: pkts bytes target prot opt in out source destination Feb 8 23:41:24.450520 waagent[1502]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:41:24.450520 waagent[1502]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:41:24.451914 waagent[1502]: 2024-02-08T23:41:24.451858Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 8 23:41:24.499278 waagent[1502]: 2024-02-08T23:41:24.499213Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 8 23:41:25.483040 waagent[1442]: 2024-02-08T23:41:25.482824Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 8 23:41:25.490999 waagent[1442]: 2024-02-08T23:41:25.490934Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 8 23:41:26.498285 waagent[1542]: 2024-02-08T23:41:26.498152Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 8 23:41:26.498987 waagent[1542]: 2024-02-08T23:41:26.498915Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 8 23:41:26.499137 waagent[1542]: 2024-02-08T23:41:26.499083Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 8 23:41:26.508612 waagent[1542]: 2024-02-08T23:41:26.508514Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; 
OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:41:26.508974 waagent[1542]: 2024-02-08T23:41:26.508918Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:41:26.509131 waagent[1542]: 2024-02-08T23:41:26.509082Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:41:26.520778 waagent[1542]: 2024-02-08T23:41:26.520705Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 8 23:41:26.529514 waagent[1542]: 2024-02-08T23:41:26.529456Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 8 23:41:26.530415 waagent[1542]: 2024-02-08T23:41:26.530358Z INFO ExtHandler Feb 8 23:41:26.530559 waagent[1542]: 2024-02-08T23:41:26.530511Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 6a66bb53-1819-43e1-85a2-b990f56ff23e eTag: 16830165379320145836 source: Fabric] Feb 8 23:41:26.531261 waagent[1542]: 2024-02-08T23:41:26.531206Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 8 23:41:26.532344 waagent[1542]: 2024-02-08T23:41:26.532285Z INFO ExtHandler Feb 8 23:41:26.532476 waagent[1542]: 2024-02-08T23:41:26.532426Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 8 23:41:26.539089 waagent[1542]: 2024-02-08T23:41:26.539036Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 8 23:41:26.539518 waagent[1542]: 2024-02-08T23:41:26.539471Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:41:26.557474 waagent[1542]: 2024-02-08T23:41:26.557417Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 8 23:41:26.621023 waagent[1542]: 2024-02-08T23:41:26.620890Z INFO ExtHandler Downloaded certificate {'thumbprint': '56EA11EEE05D77C2F16CE143414DC1663BA35ECB', 'hasPrivateKey': False} Feb 8 23:41:26.622014 waagent[1542]: 2024-02-08T23:41:26.621947Z INFO ExtHandler Downloaded certificate {'thumbprint': '1B85D7F09F539B70265A6F8F16560228817F4A6F', 'hasPrivateKey': True} Feb 8 23:41:26.622991 waagent[1542]: 2024-02-08T23:41:26.622930Z INFO ExtHandler Fetch goal state completed Feb 8 23:41:26.642935 waagent[1542]: 2024-02-08T23:41:26.642864Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1542 Feb 8 23:41:26.646141 waagent[1542]: 2024-02-08T23:41:26.646078Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:41:26.647604 waagent[1542]: 2024-02-08T23:41:26.647548Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:41:26.652481 waagent[1542]: 2024-02-08T23:41:26.652428Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:41:26.652838 waagent[1542]: 2024-02-08T23:41:26.652782Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:41:26.661015 waagent[1542]: 2024-02-08T23:41:26.660961Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 8 23:41:26.661481 waagent[1542]: 2024-02-08T23:41:26.661427Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:41:26.685609 waagent[1542]: 2024-02-08T23:41:26.685515Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Feb 8 23:41:26.688267 waagent[1542]: 2024-02-08T23:41:26.688154Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Feb 8 23:41:26.692878 waagent[1542]: 2024-02-08T23:41:26.692818Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 8 23:41:26.694249 waagent[1542]: 2024-02-08T23:41:26.694191Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:41:26.694744 waagent[1542]: 2024-02-08T23:41:26.694689Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:41:26.695147 waagent[1542]: 2024-02-08T23:41:26.695090Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:41:26.695596 waagent[1542]: 2024-02-08T23:41:26.695543Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:41:26.696081 waagent[1542]: 2024-02-08T23:41:26.696032Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:41:26.696240 waagent[1542]: 2024-02-08T23:41:26.696160Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:41:26.696804 waagent[1542]: 2024-02-08T23:41:26.696751Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 8 23:41:26.697248 waagent[1542]: 2024-02-08T23:41:26.697194Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:41:26.697719 waagent[1542]: 2024-02-08T23:41:26.697667Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:41:26.697792 waagent[1542]: 2024-02-08T23:41:26.697733Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:41:26.697792 waagent[1542]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:41:26.697792 waagent[1542]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:41:26.697792 waagent[1542]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:41:26.697792 waagent[1542]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:41:26.697792 waagent[1542]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:41:26.697792 waagent[1542]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:41:26.700509 waagent[1542]: 2024-02-08T23:41:26.700428Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:41:26.700975 waagent[1542]: 2024-02-08T23:41:26.700907Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:41:26.701134 waagent[1542]: 2024-02-08T23:41:26.701076Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:41:26.704349 waagent[1542]: 2024-02-08T23:41:26.704224Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:41:26.704632 waagent[1542]: 2024-02-08T23:41:26.704568Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:41:26.709279 waagent[1542]: 2024-02-08T23:41:26.709205Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Feb 8 23:41:26.728126 waagent[1542]: 2024-02-08T23:41:26.728010Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:41:26.728126 waagent[1542]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:41:26.728126 waagent[1542]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:41:26.728126 waagent[1542]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9b:22:6d brd ff:ff:ff:ff:ff:ff Feb 8 23:41:26.728126 waagent[1542]: 3: enP64097s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9b:22:6d brd ff:ff:ff:ff:ff:ff\ altname enP64097p0s2 Feb 8 23:41:26.728126 waagent[1542]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:41:26.728126 waagent[1542]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:41:26.728126 waagent[1542]: 2: eth0 inet 10.200.8.20/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:41:26.728126 waagent[1542]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:41:26.728126 waagent[1542]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:41:26.728126 waagent[1542]: 2: eth0 inet6 fe80::222:48ff:fe9b:226d/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:41:26.729952 waagent[1542]: 2024-02-08T23:41:26.729892Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 8 23:41:26.732698 waagent[1542]: 2024-02-08T23:41:26.732454Z INFO ExtHandler ExtHandler Downloading manifest Feb 8 23:41:26.772285 waagent[1542]: 2024-02-08T23:41:26.772167Z INFO ExtHandler ExtHandler Feb 8 23:41:26.773167 waagent[1542]: 2024-02-08T23:41:26.773111Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 6ff07a03-bf10-416a-807b-b24c270e69d2 correlation e98b2534-ea92-4c18-becc-2b0d035b2c14 created: 2024-02-08T23:39:18.596069Z] Feb 8 23:41:26.784096 waagent[1542]: 2024-02-08T23:41:26.784029Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
Feb 8 23:41:26.786421 waagent[1542]: 2024-02-08T23:41:26.786364Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 14 ms] Feb 8 23:41:26.799325 waagent[1542]: 2024-02-08T23:41:26.799264Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 8 23:41:26.799325 waagent[1542]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:41:26.799325 waagent[1542]: pkts bytes target prot opt in out source destination Feb 8 23:41:26.799325 waagent[1542]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:41:26.799325 waagent[1542]: pkts bytes target prot opt in out source destination Feb 8 23:41:26.799325 waagent[1542]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:41:26.799325 waagent[1542]: pkts bytes target prot opt in out source destination Feb 8 23:41:26.799325 waagent[1542]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:41:26.799325 waagent[1542]: 108 13798 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:41:26.799325 waagent[1542]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:41:26.810835 waagent[1542]: 2024-02-08T23:41:26.810771Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 8 23:41:26.822477 waagent[1542]: 2024-02-08T23:41:26.822416Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: D208B7D6-6E77-4FF9-A8ED-1D9E33EDE64A;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 8 23:41:28.617042 update_engine[1327]: I0208 23:41:28.616947 1327 update_attempter.cc:509] Updating boot flags... Feb 8 23:41:56.385682 systemd[1]: Created slice system-sshd.slice. Feb 8 23:41:56.387460 systemd[1]: Started sshd@0-10.200.8.20:22-10.200.12.6:42174.service. Feb 8 23:41:57.266105 sshd[1650]: Accepted publickey for core from 10.200.12.6 port 42174 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:41:57.267833 sshd[1650]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:57.273006 systemd[1]: Started session-3.scope. Feb 8 23:41:57.273482 systemd-logind[1326]: New session 3 of user core. Feb 8 23:41:57.803582 systemd[1]: Started sshd@1-10.200.8.20:22-10.200.12.6:38472.service. Feb 8 23:41:58.421128 sshd[1655]: Accepted publickey for core from 10.200.12.6 port 38472 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:41:58.422857 sshd[1655]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:58.428685 systemd[1]: Started session-4.scope. Feb 8 23:41:58.429353 systemd-logind[1326]: New session 4 of user core. Feb 8 23:41:58.862161 sshd[1655]: pam_unix(sshd:session): session closed for user core Feb 8 23:41:58.865457 systemd[1]: sshd@1-10.200.8.20:22-10.200.12.6:38472.service: Deactivated successfully. Feb 8 23:41:58.866507 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:41:58.867126 systemd-logind[1326]: Session 4 logged out. Waiting for processes to exit. Feb 8 23:41:58.867910 systemd-logind[1326]: Removed session 4. Feb 8 23:41:58.965926 systemd[1]: Started sshd@2-10.200.8.20:22-10.200.12.6:38482.service. Feb 8 23:41:59.578749 sshd[1661]: Accepted publickey for core from 10.200.12.6 port 38482 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:41:59.580523 sshd[1661]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:41:59.586359 systemd[1]: Started session-5.scope. 
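The "Current Firewall rules" listing above shows the agent's WireServer protection: DNS over TCP to 168.63.129.16 is allowed for everyone, other WireServer traffic is allowed only for uid 0, and remaining new or invalid connections to it are dropped. Roughly equivalent rules expressed as iptables invocations from Python; this is a sketch only, and the choice of the default filter table and match modules is an assumption, not taken from the agent:

    import subprocess

    WIRESERVER = "168.63.129.16"
    rules = [
        # allow DNS over TCP to the WireServer for any user
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53",
         "-j", "ACCEPT"],
        # allow the rest of the WireServer traffic only for root (uid 0)
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        # drop new/invalid WireServer connections from everyone else
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(["iptables", "-w"] + rule, check=True)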
Feb 8 23:41:59.587089 systemd-logind[1326]: New session 5 of user core. Feb 8 23:42:00.016109 sshd[1661]: pam_unix(sshd:session): session closed for user core Feb 8 23:42:00.019572 systemd[1]: sshd@2-10.200.8.20:22-10.200.12.6:38482.service: Deactivated successfully. Feb 8 23:42:00.020621 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:42:00.021441 systemd-logind[1326]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:42:00.022385 systemd-logind[1326]: Removed session 5. Feb 8 23:42:00.121618 systemd[1]: Started sshd@3-10.200.8.20:22-10.200.12.6:38486.service. Feb 8 23:42:00.766730 sshd[1667]: Accepted publickey for core from 10.200.12.6 port 38486 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:42:00.768485 sshd[1667]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:42:00.773461 systemd[1]: Started session-6.scope. Feb 8 23:42:00.774058 systemd-logind[1326]: New session 6 of user core. Feb 8 23:42:01.210654 sshd[1667]: pam_unix(sshd:session): session closed for user core Feb 8 23:42:01.213973 systemd[1]: sshd@3-10.200.8.20:22-10.200.12.6:38486.service: Deactivated successfully. Feb 8 23:42:01.215010 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:42:01.215818 systemd-logind[1326]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:42:01.216749 systemd-logind[1326]: Removed session 6. Feb 8 23:42:01.315803 systemd[1]: Started sshd@4-10.200.8.20:22-10.200.12.6:38496.service. Feb 8 23:42:01.934790 sshd[1673]: Accepted publickey for core from 10.200.12.6 port 38496 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:42:01.936543 sshd[1673]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:42:01.942344 systemd[1]: Started session-7.scope. Feb 8 23:42:01.942989 systemd-logind[1326]: New session 7 of user core. Feb 8 23:42:02.542609 sudo[1676]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:42:02.542943 sudo[1676]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:42:03.608347 systemd[1]: Starting docker.service... 
Feb 8 23:42:03.665070 env[1691]: time="2024-02-08T23:42:03.664996529Z" level=info msg="Starting up" Feb 8 23:42:03.666339 env[1691]: time="2024-02-08T23:42:03.666314851Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:42:03.666468 env[1691]: time="2024-02-08T23:42:03.666455653Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:42:03.666533 env[1691]: time="2024-02-08T23:42:03.666521854Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:42:03.666574 env[1691]: time="2024-02-08T23:42:03.666566455Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:42:03.675185 env[1691]: time="2024-02-08T23:42:03.675143297Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:42:03.675296 env[1691]: time="2024-02-08T23:42:03.675167897Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:42:03.675296 env[1691]: time="2024-02-08T23:42:03.675220198Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:42:03.675296 env[1691]: time="2024-02-08T23:42:03.675235098Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:42:03.774810 env[1691]: time="2024-02-08T23:42:03.774693140Z" level=info msg="Loading containers: start." Feb 8 23:42:03.880314 kernel: Initializing XFRM netlink socket Feb 8 23:42:03.910489 env[1691]: time="2024-02-08T23:42:03.910438581Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 8 23:42:04.035663 systemd-networkd[1493]: docker0: Link UP Feb 8 23:42:04.054318 env[1691]: time="2024-02-08T23:42:04.054275633Z" level=info msg="Loading containers: done." Feb 8 23:42:04.066369 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3345634790-merged.mount: Deactivated successfully. Feb 8 23:42:04.079799 env[1691]: time="2024-02-08T23:42:04.079751643Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 8 23:42:04.079993 env[1691]: time="2024-02-08T23:42:04.079969847Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 8 23:42:04.080117 env[1691]: time="2024-02-08T23:42:04.080095549Z" level=info msg="Daemon has completed initialization" Feb 8 23:42:04.105841 systemd[1]: Started docker.service. Feb 8 23:42:04.115898 env[1691]: time="2024-02-08T23:42:04.115834024Z" level=info msg="API listen on /run/docker.sock" Feb 8 23:42:04.133110 systemd[1]: Reloading. Feb 8 23:42:04.196835 /usr/lib/systemd/system-generators/torcx-generator[1820]: time="2024-02-08T23:42:04Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:42:04.196875 /usr/lib/systemd/system-generators/torcx-generator[1820]: time="2024-02-08T23:42:04Z" level=info msg="torcx already run" Feb 8 23:42:04.297583 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
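The docker daemon above notes that docker0 defaults to 172.17.0.0/16 and that --bip can set a preferred address. The same setting is commonly carried in /etc/docker/daemon.json; a sketch of writing it, with an illustrative subnet that is not this host's:

    import json, pathlib

    daemon_json = pathlib.Path("/etc/docker/daemon.json")
    config = json.loads(daemon_json.read_text()) if daemon_json.exists() else {}
    config["bip"] = "172.30.0.1/24"   # illustrative bridge IP/prefix
    daemon_json.write_text(json.dumps(config, indent=2))
    # The docker daemon must be restarted to pick this up.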
Feb 8 23:42:04.297602 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:42:04.313685 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:42:04.400617 systemd[1]: Started kubelet.service. Feb 8 23:42:04.477905 kubelet[1881]: E0208 23:42:04.477457 1881 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:42:04.479682 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:42:04.479798 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:42:08.557254 env[1338]: time="2024-02-08T23:42:08.557169831Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\"" Feb 8 23:42:09.180396 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount208046007.mount: Deactivated successfully. Feb 8 23:42:11.268913 env[1338]: time="2024-02-08T23:42:11.268847324Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:11.275954 env[1338]: time="2024-02-08T23:42:11.275906420Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:11.279539 env[1338]: time="2024-02-08T23:42:11.279505769Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:11.283221 env[1338]: time="2024-02-08T23:42:11.283189519Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:2f28bed4096abd572a56595ac0304238bdc271dcfe22c650707c09bf97ec16fd,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:11.283870 env[1338]: time="2024-02-08T23:42:11.283837327Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.26.13\" returns image reference \"sha256:84900298406b2df97ade16b73c49c2b73265ded8735ac19a4e20c2a4ad65853f\"" Feb 8 23:42:11.293952 env[1338]: time="2024-02-08T23:42:11.293924164Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\"" Feb 8 23:42:13.268194 env[1338]: time="2024-02-08T23:42:13.268115923Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:13.275643 env[1338]: time="2024-02-08T23:42:13.275594620Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:13.280531 env[1338]: time="2024-02-08T23:42:13.280491883Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:13.285056 env[1338]: time="2024-02-08T23:42:13.285023441Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fda420c6c15cdd01c4eba3404f0662fe486a9c7f38fa13c741a21334673841a2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:13.285696 env[1338]: time="2024-02-08T23:42:13.285663850Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.26.13\" returns image reference \"sha256:921f237b560bdb02300f82d3606635d395b20635512fab10f0191cff42079486\"" Feb 8 23:42:13.296192 env[1338]: time="2024-02-08T23:42:13.296155285Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\"" Feb 8 23:42:14.556771 env[1338]: time="2024-02-08T23:42:14.556710511Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:14.562308 env[1338]: time="2024-02-08T23:42:14.562260081Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:14.567053 env[1338]: time="2024-02-08T23:42:14.567017441Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:14.570618 env[1338]: time="2024-02-08T23:42:14.570586386Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:c3c7303ee6d01c8e5a769db28661cf854b55175aa72c67e9b6a7b9d47ac42af3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:14.571222 env[1338]: time="2024-02-08T23:42:14.571187193Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.26.13\" returns image reference \"sha256:4fe82b56f06250b6b7eb3d5a879cd2cfabf41cb3e45b24af6059eadbc3b8026e\"" Feb 8 23:42:14.581379 env[1338]: time="2024-02-08T23:42:14.581335121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\"" Feb 8 23:42:14.692441 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 8 23:42:14.692775 systemd[1]: Stopped kubelet.service. Feb 8 23:42:14.695041 systemd[1]: Started kubelet.service. Feb 8 23:42:14.745193 kubelet[1913]: E0208 23:42:14.745121 1913 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:42:14.748363 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:42:14.748530 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:42:15.612567 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3355559076.mount: Deactivated successfully. 
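The kubelet failures above (and the earlier one after the first Reloading) are all the same flag-validation error: no --container-runtime-endpoint has been passed yet. On this host the obvious candidate is the containerd socket started earlier in the log; a small sketch that checks for it and prints the flag to add, leaving where that flag gets wired in (unit drop-in, config file) to the operator:

    import os

    SOCKET = "/run/containerd/containerd.sock"   # started earlier in this log
    if os.path.exists(SOCKET):
        print(f"--container-runtime-endpoint=unix://{SOCKET}")
    else:
        print("containerd socket not found; kubelet will keep failing validation")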
Feb 8 23:42:16.084190 env[1338]: time="2024-02-08T23:42:16.084111146Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:16.093846 env[1338]: time="2024-02-08T23:42:16.093792263Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:16.098554 env[1338]: time="2024-02-08T23:42:16.098513219Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:16.103462 env[1338]: time="2024-02-08T23:42:16.103423079Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:16.103902 env[1338]: time="2024-02-08T23:42:16.103864184Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\"" Feb 8 23:42:16.114996 env[1338]: time="2024-02-08T23:42:16.114964518Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 8 23:42:16.626545 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount153737672.mount: Deactivated successfully. Feb 8 23:42:16.643691 env[1338]: time="2024-02-08T23:42:16.643635286Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:16.650070 env[1338]: time="2024-02-08T23:42:16.650030763Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:16.654325 env[1338]: time="2024-02-08T23:42:16.654294914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:16.659150 env[1338]: time="2024-02-08T23:42:16.659120173Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:16.659607 env[1338]: time="2024-02-08T23:42:16.659573278Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 8 23:42:16.670155 env[1338]: time="2024-02-08T23:42:16.670122505Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\"" Feb 8 23:42:17.390506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2111762656.mount: Deactivated successfully. 
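Each PullImage/ImageCreate sequence above is the CRI plugin pulling a control-plane image into containerd's k8s.io namespace. One way to inspect the result from the host, assuming the stock ctr client is present (a sketch, not part of this boot flow):

    import subprocess

    # List image references known to containerd's k8s.io namespace (quiet output).
    out = subprocess.run(
        ["ctr", "-n", "k8s.io", "images", "ls", "-q"],
        capture_output=True, text=True, check=True,
    ).stdout
    for ref in out.splitlines():
        if "registry.k8s.io" in ref:
            print(ref)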
Feb 8 23:42:21.858770 env[1338]: time="2024-02-08T23:42:21.858710466Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:21.865070 env[1338]: time="2024-02-08T23:42:21.865028734Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:21.869287 env[1338]: time="2024-02-08T23:42:21.869164879Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.6-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:21.874214 env[1338]: time="2024-02-08T23:42:21.874185133Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:dd75ec974b0a2a6f6bb47001ba09207976e625db898d1b16735528c009cb171c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:21.874740 env[1338]: time="2024-02-08T23:42:21.874710238Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.6-0\" returns image reference \"sha256:fce326961ae2d51a5f726883fd59d2a8c2ccc3e45d3bb859882db58e422e59e7\"" Feb 8 23:42:21.885024 env[1338]: time="2024-02-08T23:42:21.884996949Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\"" Feb 8 23:42:22.535066 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2116144304.mount: Deactivated successfully. Feb 8 23:42:23.151229 env[1338]: time="2024-02-08T23:42:23.151157477Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:23.158826 env[1338]: time="2024-02-08T23:42:23.158786056Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:23.162950 env[1338]: time="2024-02-08T23:42:23.162910698Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.9.3,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:23.168108 env[1338]: time="2024-02-08T23:42:23.168076452Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:8e352a029d304ca7431c6507b56800636c321cb52289686a581ab70aaa8a2e2a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:23.168614 env[1338]: time="2024-02-08T23:42:23.168582057Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.9.3\" returns image reference \"sha256:5185b96f0becf59032b8e3646e99f84d9655dff3ac9e2605e0dc77f9c441ae4a\"" Feb 8 23:42:24.942538 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 8 23:42:24.942821 systemd[1]: Stopped kubelet.service. Feb 8 23:42:24.950365 systemd[1]: Started kubelet.service. Feb 8 23:42:25.025642 kubelet[1990]: E0208 23:42:25.025469 1990 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set" Feb 8 23:42:25.028388 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:42:25.028551 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Feb 8 23:42:26.092708 systemd[1]: Stopped kubelet.service. Feb 8 23:42:26.107156 systemd[1]: Reloading. Feb 8 23:42:26.172490 /usr/lib/systemd/system-generators/torcx-generator[2020]: time="2024-02-08T23:42:26Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:42:26.175959 /usr/lib/systemd/system-generators/torcx-generator[2020]: time="2024-02-08T23:42:26Z" level=info msg="torcx already run" Feb 8 23:42:26.271859 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:42:26.271881 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:42:26.287854 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:42:26.383351 systemd[1]: Started kubelet.service. Feb 8 23:42:26.438864 kubelet[2083]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:42:26.438864 kubelet[2083]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:42:26.439399 kubelet[2083]: I0208 23:42:26.438938 2083 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:42:26.440339 kubelet[2083]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:42:26.440339 kubelet[2083]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:42:26.779223 kubelet[2083]: I0208 23:42:26.779169 2083 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:42:26.779223 kubelet[2083]: I0208 23:42:26.779212 2083 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:42:26.779531 kubelet[2083]: I0208 23:42:26.779508 2083 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:42:26.782755 kubelet[2083]: E0208 23:42:26.782732 2083 certificate_manager.go:471] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.20:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:26.782944 kubelet[2083]: I0208 23:42:26.782924 2083 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:42:26.785591 kubelet[2083]: I0208 23:42:26.785566 2083 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:42:26.785835 kubelet[2083]: I0208 23:42:26.785818 2083 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:42:26.785932 kubelet[2083]: I0208 23:42:26.785917 2083 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:42:26.786072 kubelet[2083]: I0208 23:42:26.785948 2083 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:42:26.786072 kubelet[2083]: I0208 23:42:26.785964 2083 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:42:26.786159 kubelet[2083]: I0208 23:42:26.786083 2083 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:42:26.793097 kubelet[2083]: I0208 23:42:26.793079 2083 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:42:26.793203 kubelet[2083]: I0208 23:42:26.793102 2083 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:42:26.793203 kubelet[2083]: I0208 23:42:26.793128 2083 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:42:26.793203 kubelet[2083]: I0208 23:42:26.793148 2083 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:42:26.797446 kubelet[2083]: W0208 23:42:26.797405 2083 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:26.797626 kubelet[2083]: E0208 23:42:26.797606 2083 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:26.797626 kubelet[2083]: I0208 23:42:26.797437 2083 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:42:26.797832 kubelet[2083]: W0208 23:42:26.797468 2083 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get 
"https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-65dd02f9dc&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:26.797920 kubelet[2083]: W0208 23:42:26.797906 2083 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 8 23:42:26.797997 kubelet[2083]: E0208 23:42:26.797986 2083 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-65dd02f9dc&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:26.798366 kubelet[2083]: I0208 23:42:26.798348 2083 server.go:1186] "Started kubelet" Feb 8 23:42:26.799741 kubelet[2083]: E0208 23:42:26.799724 2083 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:42:26.799861 kubelet[2083]: E0208 23:42:26.799850 2083 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:42:26.800086 kubelet[2083]: E0208 23:42:26.799998 2083 event.go:276] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-65dd02f9dc.17b207c3ecb7cea3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-65dd02f9dc", UID:"ci-3510.3.2-a-65dd02f9dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-65dd02f9dc"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 798325411, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 798325411, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.20:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.20:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:42:26.800929 kubelet[2083]: I0208 23:42:26.800916 2083 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:42:26.801572 kubelet[2083]: I0208 23:42:26.801557 2083 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:42:26.806210 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). 
Feb 8 23:42:26.806290 kubelet[2083]: I0208 23:42:26.806011 2083 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:42:26.808107 kubelet[2083]: I0208 23:42:26.807683 2083 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:42:26.808107 kubelet[2083]: I0208 23:42:26.807984 2083 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:42:26.808374 kubelet[2083]: W0208 23:42:26.808338 2083 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:26.808445 kubelet[2083]: E0208 23:42:26.808377 2083 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:26.808987 kubelet[2083]: E0208 23:42:26.808852 2083 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: Get "https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-65dd02f9dc?timeout=10s": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:26.886911 kubelet[2083]: I0208 23:42:26.886871 2083 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:42:26.886911 kubelet[2083]: I0208 23:42:26.886904 2083 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:42:26.887226 kubelet[2083]: I0208 23:42:26.886924 2083 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:42:26.891978 kubelet[2083]: I0208 23:42:26.891944 2083 policy_none.go:49] "None policy: Start" Feb 8 23:42:26.892709 kubelet[2083]: I0208 23:42:26.892683 2083 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:42:26.892709 kubelet[2083]: I0208 23:42:26.892710 2083 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:42:26.901508 systemd[1]: Created slice kubepods.slice. Feb 8 23:42:26.906299 systemd[1]: Created slice kubepods-burstable.slice. Feb 8 23:42:26.909946 systemd[1]: Created slice kubepods-besteffort.slice. Feb 8 23:42:26.911387 kubelet[2083]: I0208 23:42:26.911366 2083 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:26.912482 kubelet[2083]: E0208 23:42:26.912462 2083 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:26.916956 kubelet[2083]: I0208 23:42:26.916938 2083 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:42:26.917294 kubelet[2083]: I0208 23:42:26.917280 2083 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:42:26.919702 kubelet[2083]: E0208 23:42:26.919682 2083 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-65dd02f9dc\" not found" Feb 8 23:42:26.935126 kubelet[2083]: I0208 23:42:26.935109 2083 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:42:26.971036 kubelet[2083]: I0208 23:42:26.971004 2083 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:42:26.971036 kubelet[2083]: I0208 23:42:26.971033 2083 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:42:26.971382 kubelet[2083]: I0208 23:42:26.971059 2083 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:42:26.971382 kubelet[2083]: E0208 23:42:26.971111 2083 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 8 23:42:26.972466 kubelet[2083]: W0208 23:42:26.972424 2083 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:26.972630 kubelet[2083]: E0208 23:42:26.972617 2083 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:27.009928 kubelet[2083]: E0208 23:42:27.009874 2083 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: Get "https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-65dd02f9dc?timeout=10s": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:27.074080 kubelet[2083]: I0208 23:42:27.071995 2083 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:42:27.074541 kubelet[2083]: I0208 23:42:27.074518 2083 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:42:27.075904 kubelet[2083]: I0208 23:42:27.075886 2083 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:42:27.076596 kubelet[2083]: I0208 23:42:27.076479 2083 status_manager.go:698] "Failed to get status for pod" podUID=7aee2f070796f76f37b30957230af032 pod="kube-system/kube-apiserver-ci-3510.3.2-a-65dd02f9dc" err="Get \"https://10.200.8.20:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-3510.3.2-a-65dd02f9dc\": dial tcp 10.200.8.20:6443: connect: connection refused" Feb 8 23:42:27.078772 kubelet[2083]: I0208 23:42:27.078751 2083 status_manager.go:698] "Failed to get status for pod" podUID=93ace84ed38dbfadf2efcba8864c3cd9 pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" err="Get \"https://10.200.8.20:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\": dial tcp 10.200.8.20:6443: connect: connection refused" Feb 8 23:42:27.081670 systemd[1]: Created slice kubepods-burstable-pod7aee2f070796f76f37b30957230af032.slice. Feb 8 23:42:27.083452 kubelet[2083]: I0208 23:42:27.083293 2083 status_manager.go:698] "Failed to get status for pod" podUID=9f25e95d709ad36a8d366e124c5caa01 pod="kube-system/kube-scheduler-ci-3510.3.2-a-65dd02f9dc" err="Get \"https://10.200.8.20:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-3510.3.2-a-65dd02f9dc\": dial tcp 10.200.8.20:6443: connect: connection refused" Feb 8 23:42:27.092713 systemd[1]: Created slice kubepods-burstable-pod93ace84ed38dbfadf2efcba8864c3cd9.slice. Feb 8 23:42:27.101417 systemd[1]: Created slice kubepods-burstable-pod9f25e95d709ad36a8d366e124c5caa01.slice. 
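[Editor's note] The three "Topology Admit Handler" entries above correspond to the static pod manifests under /etc/kubernetes/manifests; by the kubelet's usual convention the resulting pod names carry the node name as a suffix, which is why the log shows kube-apiserver-ci-3510.3.2-a-65dd02f9dc and friends. A sketch of that naming rule (an assumption about the convention, not taken from this host's source):

```go
package main

import "fmt"

// staticPodName appends the node name to the manifest's name, matching the
// pod names that appear throughout this log. Illustrative sketch only.
func staticPodName(manifestName, nodeName string) string {
	return manifestName + "-" + nodeName
}

func main() {
	node := "ci-3510.3.2-a-65dd02f9dc"
	for _, component := range []string{"kube-apiserver", "kube-controller-manager", "kube-scheduler"} {
		fmt.Println(staticPodName(component, node))
	}
}
```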
Feb 8 23:42:27.108537 kubelet[2083]: I0208 23:42:27.108514 2083 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f25e95d709ad36a8d366e124c5caa01-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-65dd02f9dc\" (UID: \"9f25e95d709ad36a8d366e124c5caa01\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.108655 kubelet[2083]: I0208 23:42:27.108552 2083 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7aee2f070796f76f37b30957230af032-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-65dd02f9dc\" (UID: \"7aee2f070796f76f37b30957230af032\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.108655 kubelet[2083]: I0208 23:42:27.108581 2083 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7aee2f070796f76f37b30957230af032-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-65dd02f9dc\" (UID: \"7aee2f070796f76f37b30957230af032\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.108655 kubelet[2083]: I0208 23:42:27.108612 2083 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7aee2f070796f76f37b30957230af032-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-65dd02f9dc\" (UID: \"7aee2f070796f76f37b30957230af032\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.108655 kubelet[2083]: I0208 23:42:27.108640 2083 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93ace84ed38dbfadf2efcba8864c3cd9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\" (UID: \"93ace84ed38dbfadf2efcba8864c3cd9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.108825 kubelet[2083]: I0208 23:42:27.108671 2083 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/93ace84ed38dbfadf2efcba8864c3cd9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\" (UID: \"93ace84ed38dbfadf2efcba8864c3cd9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.108825 kubelet[2083]: I0208 23:42:27.108705 2083 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/93ace84ed38dbfadf2efcba8864c3cd9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\" (UID: \"93ace84ed38dbfadf2efcba8864c3cd9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.108825 kubelet[2083]: I0208 23:42:27.108735 2083 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93ace84ed38dbfadf2efcba8864c3cd9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\" (UID: \"93ace84ed38dbfadf2efcba8864c3cd9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.108825 kubelet[2083]: I0208 23:42:27.108769 2083 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93ace84ed38dbfadf2efcba8864c3cd9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\" (UID: \"93ace84ed38dbfadf2efcba8864c3cd9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.114660 kubelet[2083]: I0208 23:42:27.114632 2083 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.114928 kubelet[2083]: E0208 23:42:27.114906 2083 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.390771 env[1338]: time="2024-02-08T23:42:27.390346260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-65dd02f9dc,Uid:7aee2f070796f76f37b30957230af032,Namespace:kube-system,Attempt:0,}" Feb 8 23:42:27.396101 env[1338]: time="2024-02-08T23:42:27.395980713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-65dd02f9dc,Uid:93ace84ed38dbfadf2efcba8864c3cd9,Namespace:kube-system,Attempt:0,}" Feb 8 23:42:27.405073 env[1338]: time="2024-02-08T23:42:27.405033199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-65dd02f9dc,Uid:9f25e95d709ad36a8d366e124c5caa01,Namespace:kube-system,Attempt:0,}" Feb 8 23:42:27.410797 kubelet[2083]: E0208 23:42:27.410759 2083 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: Get "https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-65dd02f9dc?timeout=10s": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:27.517347 kubelet[2083]: I0208 23:42:27.517303 2083 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.517960 kubelet[2083]: E0208 23:42:27.517933 2083 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:27.798696 kubelet[2083]: W0208 23:42:27.798634 2083 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:27.798696 kubelet[2083]: E0208 23:42:27.798694 2083 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.20:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:27.878513 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2014608269.mount: Deactivated successfully. 
Feb 8 23:42:27.903152 env[1338]: time="2024-02-08T23:42:27.903093717Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:27.906359 env[1338]: time="2024-02-08T23:42:27.906314948Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:27.917362 env[1338]: time="2024-02-08T23:42:27.917312652Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:27.920260 env[1338]: time="2024-02-08T23:42:27.920219779Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:27.923433 env[1338]: time="2024-02-08T23:42:27.923397109Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:27.925792 env[1338]: time="2024-02-08T23:42:27.925759032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:27.932557 env[1338]: time="2024-02-08T23:42:27.932520996Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:27.936629 env[1338]: time="2024-02-08T23:42:27.936594934Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:27.939279 env[1338]: time="2024-02-08T23:42:27.939247260Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:27.942677 env[1338]: time="2024-02-08T23:42:27.942642192Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:27.945238 env[1338]: time="2024-02-08T23:42:27.945207816Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:27.956081 env[1338]: time="2024-02-08T23:42:27.956036519Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:27.966576 kubelet[2083]: W0208 23:42:27.966500 2083 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-65dd02f9dc&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:27.966712 kubelet[2083]: E0208 23:42:27.966585 2083 reflector.go:140] 
vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.20:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-65dd02f9dc&limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:28.024545 env[1338]: time="2024-02-08T23:42:28.023773956Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:42:28.024545 env[1338]: time="2024-02-08T23:42:28.023823156Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:42:28.024545 env[1338]: time="2024-02-08T23:42:28.023837956Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:42:28.024545 env[1338]: time="2024-02-08T23:42:28.023978358Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cc286e4adb46240ae8878d7e8dcd4fd82112fc116a086aec518e9ef616b92860 pid=2159 runtime=io.containerd.runc.v2 Feb 8 23:42:28.043968 env[1338]: time="2024-02-08T23:42:28.043880842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:42:28.044184 env[1338]: time="2024-02-08T23:42:28.043954043Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:42:28.044184 env[1338]: time="2024-02-08T23:42:28.043969343Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:42:28.044184 env[1338]: time="2024-02-08T23:42:28.044113344Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2a00800cccc54a8f68b5f14af16ef5bd610e5eee4f029e6d99d1a95cf73457e5 pid=2191 runtime=io.containerd.runc.v2 Feb 8 23:42:28.044845 env[1338]: time="2024-02-08T23:42:28.044784051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:42:28.045003 env[1338]: time="2024-02-08T23:42:28.044975852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:42:28.050749 systemd[1]: Started cri-containerd-cc286e4adb46240ae8878d7e8dcd4fd82112fc116a086aec518e9ef616b92860.scope. Feb 8 23:42:28.053733 env[1338]: time="2024-02-08T23:42:28.053661933Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:42:28.054189 env[1338]: time="2024-02-08T23:42:28.054132637Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cd2ad050b7e3b12074b7b2aded1f694e1ff705e4409f390e56b32f124aa94cb2 pid=2192 runtime=io.containerd.runc.v2 Feb 8 23:42:28.082240 systemd[1]: Started cri-containerd-cd2ad050b7e3b12074b7b2aded1f694e1ff705e4409f390e56b32f124aa94cb2.scope. Feb 8 23:42:28.102971 systemd[1]: Started cri-containerd-2a00800cccc54a8f68b5f14af16ef5bd610e5eee4f029e6d99d1a95cf73457e5.scope. 
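[Editor's note] Each "starting signal loop" line above names a runc v2 shim working path of the form /run/containerd/io.containerd.runtime.v2.task/&lt;namespace&gt;/&lt;sandbox-id&gt;, with namespace k8s.io and the sandbox ID later echoed in the systemd cri-containerd-*.scope units. A sketch of assembling that path from the pieces visible in the log (purely illustrative):

```go
package main

import (
	"fmt"
	"path/filepath"
)

// shimTaskPath rebuilds the task directory printed by containerd's shim.
func shimTaskPath(namespace, sandboxID string) string {
	return filepath.Join("/run/containerd/io.containerd.runtime.v2.task", namespace, sandboxID)
}

func main() {
	fmt.Println(shimTaskPath("k8s.io", "cc286e4adb46240ae8878d7e8dcd4fd82112fc116a086aec518e9ef616b92860"))
}
```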
Feb 8 23:42:28.145488 env[1338]: time="2024-02-08T23:42:28.145431885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-65dd02f9dc,Uid:7aee2f070796f76f37b30957230af032,Namespace:kube-system,Attempt:0,} returns sandbox id \"cc286e4adb46240ae8878d7e8dcd4fd82112fc116a086aec518e9ef616b92860\"" Feb 8 23:42:28.149745 env[1338]: time="2024-02-08T23:42:28.149687324Z" level=info msg="CreateContainer within sandbox \"cc286e4adb46240ae8878d7e8dcd4fd82112fc116a086aec518e9ef616b92860\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 8 23:42:28.171587 env[1338]: time="2024-02-08T23:42:28.171535727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-65dd02f9dc,Uid:93ace84ed38dbfadf2efcba8864c3cd9,Namespace:kube-system,Attempt:0,} returns sandbox id \"cd2ad050b7e3b12074b7b2aded1f694e1ff705e4409f390e56b32f124aa94cb2\"" Feb 8 23:42:28.175032 env[1338]: time="2024-02-08T23:42:28.174989259Z" level=info msg="CreateContainer within sandbox \"cd2ad050b7e3b12074b7b2aded1f694e1ff705e4409f390e56b32f124aa94cb2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 8 23:42:28.194384 env[1338]: time="2024-02-08T23:42:28.194332238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-65dd02f9dc,Uid:9f25e95d709ad36a8d366e124c5caa01,Namespace:kube-system,Attempt:0,} returns sandbox id \"2a00800cccc54a8f68b5f14af16ef5bd610e5eee4f029e6d99d1a95cf73457e5\"" Feb 8 23:42:28.197042 env[1338]: time="2024-02-08T23:42:28.196994463Z" level=info msg="CreateContainer within sandbox \"2a00800cccc54a8f68b5f14af16ef5bd610e5eee4f029e6d99d1a95cf73457e5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 8 23:42:28.201194 env[1338]: time="2024-02-08T23:42:28.201132802Z" level=info msg="CreateContainer within sandbox \"cc286e4adb46240ae8878d7e8dcd4fd82112fc116a086aec518e9ef616b92860\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23b7776490844b39fea452ecf033f0f533721b67e66322dbc07cb1a9917067b2\"" Feb 8 23:42:28.201939 env[1338]: time="2024-02-08T23:42:28.201905509Z" level=info msg="StartContainer for \"23b7776490844b39fea452ecf033f0f533721b67e66322dbc07cb1a9917067b2\"" Feb 8 23:42:28.203285 kubelet[2083]: W0208 23:42:28.203257 2083 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:28.203744 kubelet[2083]: E0208 23:42:28.203293 2083 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.20:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:28.212041 kubelet[2083]: E0208 23:42:28.211998 2083 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: Get "https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-65dd02f9dc?timeout=10s": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:28.224167 systemd[1]: Started cri-containerd-23b7776490844b39fea452ecf033f0f533721b67e66322dbc07cb1a9917067b2.scope. 
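[Editor's note] The lease controller's retry delays double on each failed attempt while the API server is still unreachable: 200ms, 400ms, 800ms, and now 1.6s. A tiny Go sketch of that doubling backoff (the real controller also caps the delay and resets it once the lease is created):

```go
package main

import (
	"fmt"
	"time"
)

func main() {
	// Reproduce the delay sequence seen in the "failed to ensure lease
	// exists" log lines: 200ms -> 400ms -> 800ms -> 1.6s.
	delay := 200 * time.Millisecond
	for attempt := 1; attempt <= 4; attempt++ {
		fmt.Printf("attempt %d failed, will retry in %s\n", attempt, delay)
		delay *= 2
	}
}
```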
Feb 8 23:42:28.241909 env[1338]: time="2024-02-08T23:42:28.241852279Z" level=info msg="CreateContainer within sandbox \"cd2ad050b7e3b12074b7b2aded1f694e1ff705e4409f390e56b32f124aa94cb2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"34e1a814aa2697ff8deb869dab79d98dd89cc4240928fe1e1222a914b42fff33\"" Feb 8 23:42:28.244750 env[1338]: time="2024-02-08T23:42:28.244708706Z" level=info msg="StartContainer for \"34e1a814aa2697ff8deb869dab79d98dd89cc4240928fe1e1222a914b42fff33\"" Feb 8 23:42:28.257452 env[1338]: time="2024-02-08T23:42:28.257400924Z" level=info msg="CreateContainer within sandbox \"2a00800cccc54a8f68b5f14af16ef5bd610e5eee4f029e6d99d1a95cf73457e5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"530bb1adcd83ee99d6e555d5c2c8d4fdb88549ba98989d9569abf74d9bc26e50\"" Feb 8 23:42:28.258054 env[1338]: time="2024-02-08T23:42:28.258012629Z" level=info msg="StartContainer for \"530bb1adcd83ee99d6e555d5c2c8d4fdb88549ba98989d9569abf74d9bc26e50\"" Feb 8 23:42:28.280561 systemd[1]: Started cri-containerd-34e1a814aa2697ff8deb869dab79d98dd89cc4240928fe1e1222a914b42fff33.scope. Feb 8 23:42:28.302084 env[1338]: time="2024-02-08T23:42:28.301955337Z" level=info msg="StartContainer for \"23b7776490844b39fea452ecf033f0f533721b67e66322dbc07cb1a9917067b2\" returns successfully" Feb 8 23:42:28.315230 kubelet[2083]: W0208 23:42:28.314960 2083 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:28.315230 kubelet[2083]: E0208 23:42:28.315061 2083 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.20:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.20:6443: connect: connection refused Feb 8 23:42:28.319711 systemd[1]: Started cri-containerd-530bb1adcd83ee99d6e555d5c2c8d4fdb88549ba98989d9569abf74d9bc26e50.scope. 
Feb 8 23:42:28.323861 kubelet[2083]: I0208 23:42:28.323477 2083 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:28.323861 kubelet[2083]: E0208 23:42:28.323839 2083 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.20:6443/api/v1/nodes\": dial tcp 10.200.8.20:6443: connect: connection refused" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:28.364068 env[1338]: time="2024-02-08T23:42:28.364011813Z" level=info msg="StartContainer for \"34e1a814aa2697ff8deb869dab79d98dd89cc4240928fe1e1222a914b42fff33\" returns successfully" Feb 8 23:42:28.487082 env[1338]: time="2024-02-08T23:42:28.487024355Z" level=info msg="StartContainer for \"530bb1adcd83ee99d6e555d5c2c8d4fdb88549ba98989d9569abf74d9bc26e50\" returns successfully" Feb 8 23:42:29.926698 kubelet[2083]: I0208 23:42:29.926671 2083 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:30.882759 kubelet[2083]: I0208 23:42:30.882708 2083 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:30.940223 kubelet[2083]: E0208 23:42:30.940075 2083 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-65dd02f9dc.17b207c3ecb7cea3", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-65dd02f9dc", UID:"ci-3510.3.2-a-65dd02f9dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-65dd02f9dc"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 798325411, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 798325411, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:42:30.994541 kubelet[2083]: E0208 23:42:30.994490 2083 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-65dd02f9dc\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:30.998857 kubelet[2083]: E0208 23:42:30.998735 2083 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-65dd02f9dc.17b207c3eccede99", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-65dd02f9dc", UID:"ci-3510.3.2-a-65dd02f9dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-65dd02f9dc"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 799836825, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 799836825, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:42:31.053054 kubelet[2083]: E0208 23:42:31.052930 2083 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-65dd02f9dc.17b207c3f1f35850", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-65dd02f9dc", UID:"ci-3510.3.2-a-65dd02f9dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-65dd02f9dc status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-65dd02f9dc"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 886113360, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 886113360, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:42:31.106828 kubelet[2083]: E0208 23:42:31.106721 2083 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-65dd02f9dc.17b207c3f1f38154", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-65dd02f9dc", UID:"ci-3510.3.2-a-65dd02f9dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-65dd02f9dc status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-65dd02f9dc"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 886123860, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 886123860, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:42:31.161020 kubelet[2083]: E0208 23:42:31.160803 2083 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-65dd02f9dc.17b207c3f1f39b80", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-65dd02f9dc", UID:"ci-3510.3.2-a-65dd02f9dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-65dd02f9dc status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-65dd02f9dc"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 886130560, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 886130560, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:42:31.217641 kubelet[2083]: E0208 23:42:31.217523 2083 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-65dd02f9dc.17b207c3f1f35850", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-65dd02f9dc", UID:"ci-3510.3.2-a-65dd02f9dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-65dd02f9dc status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-65dd02f9dc"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 886113360, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 910857399, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:42:31.273896 kubelet[2083]: E0208 23:42:31.273776 2083 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-65dd02f9dc.17b207c3f1f38154", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-65dd02f9dc", UID:"ci-3510.3.2-a-65dd02f9dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-65dd02f9dc status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-65dd02f9dc"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 886123860, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 910863599, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
Feb 8 23:42:31.329966 kubelet[2083]: E0208 23:42:31.329846 2083 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-65dd02f9dc.17b207c3f1f39b80", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-65dd02f9dc", UID:"ci-3510.3.2-a-65dd02f9dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node ci-3510.3.2-a-65dd02f9dc status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-65dd02f9dc"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 886130560, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 911225402, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:42:31.384661 kubelet[2083]: E0208 23:42:31.384532 2083 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-65dd02f9dc.17b207c3f404c4df", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-65dd02f9dc", UID:"ci-3510.3.2-a-65dd02f9dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-65dd02f9dc"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 920809695, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 920809695, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) 
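[Editor's note] The rejected events above also illustrate client-side event aggregation: repeats of the same Reason keep their FirstTimestamp, update LastTimestamp, and increment Count (Count:1, then Count:2, and Count:3 further down). A minimal sketch of that folding (illustrative; the real recorder keys on more fields than the reason alone):

```go
package main

import (
	"fmt"
	"time"
)

// event mimics the Count/FirstTimestamp/LastTimestamp fields seen in the
// rejected Event dumps above.
type event struct {
	reason      string
	count       int
	first, last time.Time
}

// record folds a repeated occurrence into the existing entry instead of
// emitting a new one.
func record(seen map[string]*event, reason string, at time.Time) *event {
	if e, ok := seen[reason]; ok {
		e.count++
		e.last = at
		return e
	}
	e := &event{reason: reason, count: 1, first: at, last: at}
	seen[reason] = e
	return e
}

func main() {
	seen := map[string]*event{}
	start := time.Now()
	for i := 0; i < 3; i++ {
		e := record(seen, "NodeHasSufficientMemory", start.Add(time.Duration(i)*time.Second))
		fmt.Printf("reason=%s count=%d first=%s last=%s\n",
			e.reason, e.count, e.first.Format(time.RFC3339), e.last.Format(time.RFC3339))
	}
}
```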
Feb 8 23:42:31.600677 kubelet[2083]: E0208 23:42:31.600562 2083 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-65dd02f9dc.17b207c3f1f35850", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-65dd02f9dc", UID:"ci-3510.3.2-a-65dd02f9dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node ci-3510.3.2-a-65dd02f9dc status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-65dd02f9dc"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 886113360, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 27, 74414467, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:42:31.798285 kubelet[2083]: I0208 23:42:31.798237 2083 apiserver.go:52] "Watching apiserver" Feb 8 23:42:31.809001 kubelet[2083]: I0208 23:42:31.808962 2083 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:42:31.839487 kubelet[2083]: I0208 23:42:31.839449 2083 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:42:31.995381 kubelet[2083]: E0208 23:42:31.995206 2083 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-65dd02f9dc.17b207c3f1f38154", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-65dd02f9dc", UID:"ci-3510.3.2-a-65dd02f9dc", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node ci-3510.3.2-a-65dd02f9dc status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-65dd02f9dc"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 42, 26, 886123860, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 42, 27, 74421567, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'namespaces "default" not found' (will not retry!) Feb 8 23:42:34.652774 systemd[1]: Reloading. 
Feb 8 23:42:34.737316 /usr/lib/systemd/system-generators/torcx-generator[2403]: time="2024-02-08T23:42:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:42:34.737770 /usr/lib/systemd/system-generators/torcx-generator[2403]: time="2024-02-08T23:42:34Z" level=info msg="torcx already run" Feb 8 23:42:34.833288 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:42:34.833308 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:42:34.850885 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:42:34.963888 kubelet[2083]: I0208 23:42:34.963671 2083 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:42:34.963855 systemd[1]: Stopping kubelet.service... Feb 8 23:42:34.977537 systemd[1]: kubelet.service: Deactivated successfully. Feb 8 23:42:34.977763 systemd[1]: Stopped kubelet.service. Feb 8 23:42:34.979718 systemd[1]: Started kubelet.service. Feb 8 23:42:35.064337 kubelet[2466]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:42:35.064337 kubelet[2466]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:42:35.064851 kubelet[2466]: I0208 23:42:35.064393 2466 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:42:35.065767 kubelet[2466]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI. Feb 8 23:42:35.065767 kubelet[2466]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:42:35.069090 kubelet[2466]: I0208 23:42:35.069068 2466 server.go:412] "Kubelet version" kubeletVersion="v1.26.5" Feb 8 23:42:35.069463 kubelet[2466]: I0208 23:42:35.069452 2466 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:42:35.069930 kubelet[2466]: I0208 23:42:35.069915 2466 server.go:836] "Client rotation is on, will bootstrap in background" Feb 8 23:42:35.071370 kubelet[2466]: I0208 23:42:35.071347 2466 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 8 23:42:35.072350 kubelet[2466]: I0208 23:42:35.072329 2466 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:42:35.076637 kubelet[2466]: I0208 23:42:35.076601 2466 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 8 23:42:35.076895 kubelet[2466]: I0208 23:42:35.076878 2466 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:42:35.076981 kubelet[2466]: I0208 23:42:35.076965 2466 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:42:35.077101 kubelet[2466]: I0208 23:42:35.076996 2466 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:42:35.077101 kubelet[2466]: I0208 23:42:35.077017 2466 container_manager_linux.go:308] "Creating device plugin manager" Feb 8 23:42:35.077101 kubelet[2466]: I0208 23:42:35.077061 2466 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:42:35.080078 kubelet[2466]: I0208 23:42:35.080055 2466 kubelet.go:398] "Attempting to sync node with API server" Feb 8 23:42:35.084349 kubelet[2466]: I0208 23:42:35.080701 2466 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:42:35.084349 kubelet[2466]: I0208 23:42:35.080735 2466 kubelet.go:297] "Adding apiserver pod source" Feb 8 23:42:35.087205 kubelet[2466]: I0208 23:42:35.087184 2466 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:42:35.096221 kubelet[2466]: I0208 23:42:35.092647 2466 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:42:35.096221 kubelet[2466]: I0208 23:42:35.093079 2466 server.go:1186] "Started kubelet" Feb 8 23:42:35.099197 kubelet[2466]: I0208 23:42:35.097767 2466 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:42:35.099197 kubelet[2466]: I0208 23:42:35.098982 2466 server.go:161] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:42:35.099809 kubelet[2466]: I0208 23:42:35.099783 2466 server.go:451] "Adding debug handlers to kubelet server" Feb 8 23:42:35.108942 kubelet[2466]: I0208 23:42:35.108877 2466 volume_manager.go:293] "Starting Kubelet Volume Manager" Feb 8 23:42:35.111381 kubelet[2466]: I0208 23:42:35.110375 2466 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 8 23:42:35.114980 kubelet[2466]: E0208 23:42:35.114961 2466 cri_stats_provider.go:455] "Failed to get 
the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:42:35.115162 kubelet[2466]: E0208 23:42:35.115139 2466 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:42:35.137331 kubelet[2466]: I0208 23:42:35.137316 2466 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:42:35.162861 kubelet[2466]: I0208 23:42:35.162834 2466 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 8 23:42:35.162861 kubelet[2466]: I0208 23:42:35.162860 2466 status_manager.go:176] "Starting to sync pod status with apiserver" Feb 8 23:42:35.163071 kubelet[2466]: I0208 23:42:35.162881 2466 kubelet.go:2113] "Starting kubelet main sync loop" Feb 8 23:42:35.163071 kubelet[2466]: E0208 23:42:35.162927 2466 kubelet.go:2137] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:42:35.190324 kubelet[2466]: I0208 23:42:35.190292 2466 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:42:35.190324 kubelet[2466]: I0208 23:42:35.190312 2466 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:42:35.190324 kubelet[2466]: I0208 23:42:35.190332 2466 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:42:35.190582 kubelet[2466]: I0208 23:42:35.190496 2466 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 8 23:42:35.190582 kubelet[2466]: I0208 23:42:35.190511 2466 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 8 23:42:35.190582 kubelet[2466]: I0208 23:42:35.190522 2466 policy_none.go:49] "None policy: Start" Feb 8 23:42:35.191141 kubelet[2466]: I0208 23:42:35.191119 2466 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:42:35.191141 kubelet[2466]: I0208 23:42:35.191143 2466 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:42:35.191313 kubelet[2466]: I0208 23:42:35.191300 2466 state_mem.go:75] "Updated machine memory state" Feb 8 23:42:35.194831 kubelet[2466]: I0208 23:42:35.194816 2466 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:42:35.197053 kubelet[2466]: I0208 23:42:35.197038 2466 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:42:35.211914 kubelet[2466]: I0208 23:42:35.211888 2466 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.224061 kubelet[2466]: I0208 23:42:35.221980 2466 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.224061 kubelet[2466]: I0208 23:42:35.222057 2466 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.263999 kubelet[2466]: I0208 23:42:35.263953 2466 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:42:35.264244 kubelet[2466]: I0208 23:42:35.264068 2466 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:42:35.264244 kubelet[2466]: I0208 23:42:35.264111 2466 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:42:35.275594 kubelet[2466]: E0208 23:42:35.275558 2466 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\" already exists" 
pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.287529 kubelet[2466]: E0208 23:42:35.287496 2466 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-65dd02f9dc\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.311120 kubelet[2466]: I0208 23:42:35.311075 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/93ace84ed38dbfadf2efcba8864c3cd9-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\" (UID: \"93ace84ed38dbfadf2efcba8864c3cd9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.311418 kubelet[2466]: I0208 23:42:35.311402 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/93ace84ed38dbfadf2efcba8864c3cd9-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\" (UID: \"93ace84ed38dbfadf2efcba8864c3cd9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.311567 kubelet[2466]: I0208 23:42:35.311553 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/93ace84ed38dbfadf2efcba8864c3cd9-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\" (UID: \"93ace84ed38dbfadf2efcba8864c3cd9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.311704 kubelet[2466]: I0208 23:42:35.311691 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/93ace84ed38dbfadf2efcba8864c3cd9-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\" (UID: \"93ace84ed38dbfadf2efcba8864c3cd9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.311831 kubelet[2466]: I0208 23:42:35.311819 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7aee2f070796f76f37b30957230af032-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-65dd02f9dc\" (UID: \"7aee2f070796f76f37b30957230af032\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.311946 kubelet[2466]: I0208 23:42:35.311935 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7aee2f070796f76f37b30957230af032-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-65dd02f9dc\" (UID: \"7aee2f070796f76f37b30957230af032\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.312086 kubelet[2466]: I0208 23:42:35.312075 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7aee2f070796f76f37b30957230af032-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-65dd02f9dc\" (UID: \"7aee2f070796f76f37b30957230af032\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.312233 kubelet[2466]: I0208 23:42:35.312219 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: 
\"kubernetes.io/host-path/93ace84ed38dbfadf2efcba8864c3cd9-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\" (UID: \"93ace84ed38dbfadf2efcba8864c3cd9\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:35.312372 kubelet[2466]: I0208 23:42:35.312358 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/9f25e95d709ad36a8d366e124c5caa01-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-65dd02f9dc\" (UID: \"9f25e95d709ad36a8d366e124c5caa01\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:36.087779 kubelet[2466]: I0208 23:42:36.087732 2466 apiserver.go:52] "Watching apiserver" Feb 8 23:42:36.111439 kubelet[2466]: I0208 23:42:36.111398 2466 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 8 23:42:36.120831 kubelet[2466]: I0208 23:42:36.120791 2466 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:42:36.318586 sudo[1676]: pam_unix(sudo:session): session closed for user root Feb 8 23:42:36.440455 sshd[1673]: pam_unix(sshd:session): session closed for user core Feb 8 23:42:36.443624 systemd[1]: sshd@4-10.200.8.20:22-10.200.12.6:38496.service: Deactivated successfully. Feb 8 23:42:36.444717 systemd[1]: session-7.scope: Deactivated successfully. Feb 8 23:42:36.444934 systemd[1]: session-7.scope: Consumed 3.165s CPU time. Feb 8 23:42:36.445549 systemd-logind[1326]: Session 7 logged out. Waiting for processes to exit. Feb 8 23:42:36.446474 systemd-logind[1326]: Removed session 7. Feb 8 23:42:36.487688 kubelet[2466]: E0208 23:42:36.487657 2466 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-scheduler-ci-3510.3.2-a-65dd02f9dc\" already exists" pod="kube-system/kube-scheduler-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:36.688263 kubelet[2466]: E0208 23:42:36.688224 2466 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-65dd02f9dc\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:36.889316 kubelet[2466]: E0208 23:42:36.889280 2466 kubelet.go:1802] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-65dd02f9dc\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-65dd02f9dc" Feb 8 23:42:37.100781 kubelet[2466]: I0208 23:42:37.100746 2466 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-65dd02f9dc" podStartSLOduration=2.100699599 pod.CreationTimestamp="2024-02-08 23:42:35 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:42:37.099798592 +0000 UTC m=+2.114104631" watchObservedRunningTime="2024-02-08 23:42:37.100699599 +0000 UTC m=+2.115005738" Feb 8 23:42:37.487543 kubelet[2466]: I0208 23:42:37.487498 2466 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-65dd02f9dc" podStartSLOduration=5.487433415 pod.CreationTimestamp="2024-02-08 23:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:42:37.487127113 +0000 UTC m=+2.501433152" watchObservedRunningTime="2024-02-08 23:42:37.487433415 +0000 UTC m=+2.501739454" Feb 8 23:42:40.295614 kubelet[2466]: I0208 23:42:40.295568 2466 
pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-65dd02f9dc" podStartSLOduration=8.295534976999999 pod.CreationTimestamp="2024-02-08 23:42:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:42:37.886972131 +0000 UTC m=+2.901278270" watchObservedRunningTime="2024-02-08 23:42:40.295534977 +0000 UTC m=+5.309841016" Feb 8 23:42:46.353900 kubelet[2466]: I0208 23:42:46.353857 2466 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 8 23:42:46.354564 env[1338]: time="2024-02-08T23:42:46.354525373Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 8 23:42:46.354951 kubelet[2466]: I0208 23:42:46.354747 2466 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 8 23:42:47.135136 kubelet[2466]: I0208 23:42:47.135080 2466 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:42:47.140579 kubelet[2466]: W0208 23:42:47.140549 2466 reflector.go:424] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-a-65dd02f9dc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-65dd02f9dc' and this object Feb 8 23:42:47.140774 kubelet[2466]: E0208 23:42:47.140759 2466 reflector.go:140] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-a-65dd02f9dc" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-65dd02f9dc' and this object Feb 8 23:42:47.141903 systemd[1]: Created slice kubepods-besteffort-pod51812593_cfdf_45cc_942d_52eb31aab670.slice. Feb 8 23:42:47.161076 kubelet[2466]: I0208 23:42:47.161036 2466 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:42:47.168343 systemd[1]: Created slice kubepods-burstable-pod370cfd22_b79f_4ac3_a2bb_96bae6d0c01e.slice. 
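The entries above show the kubelet pushing PodCIDR 192.168.0.0/24 into containerd while its watch on the kube-proxy ConfigMap is still rejected by the node authorizer; that rejection is typically transient, because the authorizer only grants a node access to a ConfigMap once a pod referencing it is bound to that node, and the retry that follows below succeeds once the cache syncs. A minimal cross-check of the CIDR from the API side, assuming kubectl access to this cluster (the node name is taken from the log; the command itself is not run anywhere in it), would be:

  kubectl get node ci-3510.3.2-a-65dd02f9dc -o jsonpath='{.spec.podCIDR}'
  # expected output: 192.168.0.0/24, matching the "Updating Pod CIDR" entry above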
Feb 8 23:42:47.193317 kubelet[2466]: I0208 23:42:47.193291 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krw9g\" (UniqueName: \"kubernetes.io/projected/370cfd22-b79f-4ac3-a2bb-96bae6d0c01e-kube-api-access-krw9g\") pod \"kube-flannel-ds-ddw26\" (UID: \"370cfd22-b79f-4ac3-a2bb-96bae6d0c01e\") " pod="kube-flannel/kube-flannel-ds-ddw26" Feb 8 23:42:47.193483 kubelet[2466]: I0208 23:42:47.193335 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/51812593-cfdf-45cc-942d-52eb31aab670-xtables-lock\") pod \"kube-proxy-vktb5\" (UID: \"51812593-cfdf-45cc-942d-52eb31aab670\") " pod="kube-system/kube-proxy-vktb5" Feb 8 23:42:47.193483 kubelet[2466]: I0208 23:42:47.193363 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/51812593-cfdf-45cc-942d-52eb31aab670-lib-modules\") pod \"kube-proxy-vktb5\" (UID: \"51812593-cfdf-45cc-942d-52eb31aab670\") " pod="kube-system/kube-proxy-vktb5" Feb 8 23:42:47.193483 kubelet[2466]: I0208 23:42:47.193391 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/370cfd22-b79f-4ac3-a2bb-96bae6d0c01e-run\") pod \"kube-flannel-ds-ddw26\" (UID: \"370cfd22-b79f-4ac3-a2bb-96bae6d0c01e\") " pod="kube-flannel/kube-flannel-ds-ddw26" Feb 8 23:42:47.193483 kubelet[2466]: I0208 23:42:47.193418 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/370cfd22-b79f-4ac3-a2bb-96bae6d0c01e-cni-plugin\") pod \"kube-flannel-ds-ddw26\" (UID: \"370cfd22-b79f-4ac3-a2bb-96bae6d0c01e\") " pod="kube-flannel/kube-flannel-ds-ddw26" Feb 8 23:42:47.193483 kubelet[2466]: I0208 23:42:47.193444 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/370cfd22-b79f-4ac3-a2bb-96bae6d0c01e-cni\") pod \"kube-flannel-ds-ddw26\" (UID: \"370cfd22-b79f-4ac3-a2bb-96bae6d0c01e\") " pod="kube-flannel/kube-flannel-ds-ddw26" Feb 8 23:42:47.193483 kubelet[2466]: I0208 23:42:47.193471 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/51812593-cfdf-45cc-942d-52eb31aab670-kube-proxy\") pod \"kube-proxy-vktb5\" (UID: \"51812593-cfdf-45cc-942d-52eb31aab670\") " pod="kube-system/kube-proxy-vktb5" Feb 8 23:42:47.193721 kubelet[2466]: I0208 23:42:47.193502 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4cw42\" (UniqueName: \"kubernetes.io/projected/51812593-cfdf-45cc-942d-52eb31aab670-kube-api-access-4cw42\") pod \"kube-proxy-vktb5\" (UID: \"51812593-cfdf-45cc-942d-52eb31aab670\") " pod="kube-system/kube-proxy-vktb5" Feb 8 23:42:47.193721 kubelet[2466]: I0208 23:42:47.193533 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/370cfd22-b79f-4ac3-a2bb-96bae6d0c01e-flannel-cfg\") pod \"kube-flannel-ds-ddw26\" (UID: \"370cfd22-b79f-4ac3-a2bb-96bae6d0c01e\") " pod="kube-flannel/kube-flannel-ds-ddw26" Feb 8 23:42:47.193721 kubelet[2466]: I0208 23:42:47.193566 2466 reconciler_common.go:253] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/370cfd22-b79f-4ac3-a2bb-96bae6d0c01e-xtables-lock\") pod \"kube-flannel-ds-ddw26\" (UID: \"370cfd22-b79f-4ac3-a2bb-96bae6d0c01e\") " pod="kube-flannel/kube-flannel-ds-ddw26" Feb 8 23:42:47.473328 env[1338]: time="2024-02-08T23:42:47.472649797Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-ddw26,Uid:370cfd22-b79f-4ac3-a2bb-96bae6d0c01e,Namespace:kube-flannel,Attempt:0,}" Feb 8 23:42:47.503695 env[1338]: time="2024-02-08T23:42:47.503622701Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:42:47.503889 env[1338]: time="2024-02-08T23:42:47.503674301Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:42:47.503889 env[1338]: time="2024-02-08T23:42:47.503688901Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:42:47.504052 env[1338]: time="2024-02-08T23:42:47.503925503Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c95ce8fa5d56ea9fc207570a69bbe354267037b80b2bb091edbcf8bed252f1ab pid=2553 runtime=io.containerd.runc.v2 Feb 8 23:42:47.519241 systemd[1]: Started cri-containerd-c95ce8fa5d56ea9fc207570a69bbe354267037b80b2bb091edbcf8bed252f1ab.scope. Feb 8 23:42:47.566833 env[1338]: time="2024-02-08T23:42:47.566789216Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-ddw26,Uid:370cfd22-b79f-4ac3-a2bb-96bae6d0c01e,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"c95ce8fa5d56ea9fc207570a69bbe354267037b80b2bb091edbcf8bed252f1ab\"" Feb 8 23:42:47.569589 env[1338]: time="2024-02-08T23:42:47.568585028Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\"" Feb 8 23:42:48.295739 kubelet[2466]: E0208 23:42:48.295681 2466 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:42:48.296539 kubelet[2466]: E0208 23:42:48.295842 2466 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/51812593-cfdf-45cc-942d-52eb31aab670-kube-proxy podName:51812593-cfdf-45cc-942d-52eb31aab670 nodeName:}" failed. No retries permitted until 2024-02-08 23:42:48.795802982 +0000 UTC m=+13.810109021 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/51812593-cfdf-45cc-942d-52eb31aab670-kube-proxy") pod "kube-proxy-vktb5" (UID: "51812593-cfdf-45cc-942d-52eb31aab670") : failed to sync configmap cache: timed out waiting for the condition Feb 8 23:42:48.953091 env[1338]: time="2024-02-08T23:42:48.953045039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vktb5,Uid:51812593-cfdf-45cc-942d-52eb31aab670,Namespace:kube-system,Attempt:0,}" Feb 8 23:42:48.985039 env[1338]: time="2024-02-08T23:42:48.984966745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:42:48.985284 env[1338]: time="2024-02-08T23:42:48.985005046Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:42:48.985284 env[1338]: time="2024-02-08T23:42:48.985018946Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:42:48.985284 env[1338]: time="2024-02-08T23:42:48.985209847Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/16a9d38f73274aa49525502dd7aff9adfc777dd811759524575e04b7456128fa pid=2592 runtime=io.containerd.runc.v2 Feb 8 23:42:49.008461 systemd[1]: run-containerd-runc-k8s.io-16a9d38f73274aa49525502dd7aff9adfc777dd811759524575e04b7456128fa-runc.jOOcAL.mount: Deactivated successfully. Feb 8 23:42:49.013809 systemd[1]: Started cri-containerd-16a9d38f73274aa49525502dd7aff9adfc777dd811759524575e04b7456128fa.scope. Feb 8 23:42:49.046099 env[1338]: time="2024-02-08T23:42:49.046050537Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-vktb5,Uid:51812593-cfdf-45cc-942d-52eb31aab670,Namespace:kube-system,Attempt:0,} returns sandbox id \"16a9d38f73274aa49525502dd7aff9adfc777dd811759524575e04b7456128fa\"" Feb 8 23:42:49.049674 env[1338]: time="2024-02-08T23:42:49.049624059Z" level=info msg="CreateContainer within sandbox \"16a9d38f73274aa49525502dd7aff9adfc777dd811759524575e04b7456128fa\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:42:49.085951 env[1338]: time="2024-02-08T23:42:49.085841190Z" level=info msg="CreateContainer within sandbox \"16a9d38f73274aa49525502dd7aff9adfc777dd811759524575e04b7456128fa\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"39bc04e8fdc854fc4b4164661b14aedfa92ddc99eb5020ce0da1faa9f4d183aa\"" Feb 8 23:42:49.088531 env[1338]: time="2024-02-08T23:42:49.086890797Z" level=info msg="StartContainer for \"39bc04e8fdc854fc4b4164661b14aedfa92ddc99eb5020ce0da1faa9f4d183aa\"" Feb 8 23:42:49.105816 systemd[1]: Started cri-containerd-39bc04e8fdc854fc4b4164661b14aedfa92ddc99eb5020ce0da1faa9f4d183aa.scope. 
Feb 8 23:42:49.144508 env[1338]: time="2024-02-08T23:42:49.144451964Z" level=info msg="StartContainer for \"39bc04e8fdc854fc4b4164661b14aedfa92ddc99eb5020ce0da1faa9f4d183aa\" returns successfully" Feb 8 23:42:49.576335 env[1338]: time="2024-02-08T23:42:49.576279718Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:49.583201 env[1338]: time="2024-02-08T23:42:49.583144762Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fcecffc7ad4af70c8b436d45688771e0562cbd20f55d98581ba22cf13aad360d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:49.586693 env[1338]: time="2024-02-08T23:42:49.586650584Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:49.589681 env[1338]: time="2024-02-08T23:42:49.589642803Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin@sha256:28d3a6be9f450282bf42e4dad143d41da23e3d91f66f19c01ee7fd21fd17cb2b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:49.589980 env[1338]: time="2024-02-08T23:42:49.589944905Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel-cni-plugin:v1.1.0\" returns image reference \"sha256:fcecffc7ad4af70c8b436d45688771e0562cbd20f55d98581ba22cf13aad360d\"" Feb 8 23:42:49.592528 env[1338]: time="2024-02-08T23:42:49.592481322Z" level=info msg="CreateContainer within sandbox \"c95ce8fa5d56ea9fc207570a69bbe354267037b80b2bb091edbcf8bed252f1ab\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Feb 8 23:42:49.619886 env[1338]: time="2024-02-08T23:42:49.619835696Z" level=info msg="CreateContainer within sandbox \"c95ce8fa5d56ea9fc207570a69bbe354267037b80b2bb091edbcf8bed252f1ab\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"949726fa30ab9fdba2441a6d1bd7d86cc8780df0bf212515aa669f208ee4a5d5\"" Feb 8 23:42:49.620772 env[1338]: time="2024-02-08T23:42:49.620741102Z" level=info msg="StartContainer for \"949726fa30ab9fdba2441a6d1bd7d86cc8780df0bf212515aa669f208ee4a5d5\"" Feb 8 23:42:49.638103 systemd[1]: Started cri-containerd-949726fa30ab9fdba2441a6d1bd7d86cc8780df0bf212515aa669f208ee4a5d5.scope. Feb 8 23:42:49.670423 systemd[1]: cri-containerd-949726fa30ab9fdba2441a6d1bd7d86cc8780df0bf212515aa669f208ee4a5d5.scope: Deactivated successfully. 
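At this point both the kube-proxy pod and the flannel DaemonSet pod have running sandboxes, and the log only records their containerd IDs. Those IDs can be correlated on the node with crictl, assuming it is installed and pointed at containerd's CRI socket (a sketch, not something actually run in this log):

  crictl pods --name kube-proxy-vktb5
  crictl ps -a --pod 16a9d38f73274aa49525502dd7aff9adfc777dd811759524575e04b7456128fa
  crictl logs 39bc04e8fdc854fc4b4164661b14aedfa92ddc99eb5020ce0da1faa9f4d183aa   # kube-proxy container started above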
Feb 8 23:42:49.673157 env[1338]: time="2024-02-08T23:42:49.673115836Z" level=info msg="StartContainer for \"949726fa30ab9fdba2441a6d1bd7d86cc8780df0bf212515aa669f208ee4a5d5\" returns successfully" Feb 8 23:42:49.801280 env[1338]: time="2024-02-08T23:42:49.801217153Z" level=info msg="shim disconnected" id=949726fa30ab9fdba2441a6d1bd7d86cc8780df0bf212515aa669f208ee4a5d5 Feb 8 23:42:49.801280 env[1338]: time="2024-02-08T23:42:49.801277253Z" level=warning msg="cleaning up after shim disconnected" id=949726fa30ab9fdba2441a6d1bd7d86cc8780df0bf212515aa669f208ee4a5d5 namespace=k8s.io Feb 8 23:42:49.801280 env[1338]: time="2024-02-08T23:42:49.801289353Z" level=info msg="cleaning up dead shim" Feb 8 23:42:49.810445 env[1338]: time="2024-02-08T23:42:49.810379411Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:42:49Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2833 runtime=io.containerd.runc.v2\n" Feb 8 23:42:50.204810 env[1338]: time="2024-02-08T23:42:50.202737994Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\"" Feb 8 23:42:50.213730 kubelet[2466]: I0208 23:42:50.213704 2466 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-vktb5" podStartSLOduration=3.213626663 pod.CreationTimestamp="2024-02-08 23:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:42:49.214746313 +0000 UTC m=+14.229052452" watchObservedRunningTime="2024-02-08 23:42:50.213626663 +0000 UTC m=+15.227932802" Feb 8 23:42:52.135773 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount617536864.mount: Deactivated successfully. Feb 8 23:42:52.984457 env[1338]: time="2024-02-08T23:42:52.984399097Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:52.990291 env[1338]: time="2024-02-08T23:42:52.990240233Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:b5c6c9203f83e9a48e9d0b0fb7a38196c8412f458953ca98a4feac3515c6abb1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:52.995190 env[1338]: time="2024-02-08T23:42:52.995142363Z" level=info msg="ImageUpdate event &ImageUpdate{Name:docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:52.999511 env[1338]: time="2024-02-08T23:42:52.999474989Z" level=info msg="ImageCreate event &ImageCreate{Name:docker.io/rancher/mirrored-flannelcni-flannel@sha256:ec0f0b7430c8370c9f33fe76eb0392c1ad2ddf4ccaf2b9f43995cca6c94d3832,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:42:53.000689 env[1338]: time="2024-02-08T23:42:53.000656196Z" level=info msg="PullImage \"docker.io/rancher/mirrored-flannelcni-flannel:v0.20.2\" returns image reference \"sha256:b5c6c9203f83e9a48e9d0b0fb7a38196c8412f458953ca98a4feac3515c6abb1\"" Feb 8 23:42:53.006806 env[1338]: time="2024-02-08T23:42:53.006766533Z" level=info msg="CreateContainer within sandbox \"c95ce8fa5d56ea9fc207570a69bbe354267037b80b2bb091edbcf8bed252f1ab\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Feb 8 23:42:53.030136 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2848113086.mount: Deactivated successfully. 
Feb 8 23:42:53.042706 env[1338]: time="2024-02-08T23:42:53.042663049Z" level=info msg="CreateContainer within sandbox \"c95ce8fa5d56ea9fc207570a69bbe354267037b80b2bb091edbcf8bed252f1ab\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"aa0817f1431f4ad33c22d80dc023d0de10fbf0515d0b5537869bdced97b4ffe5\"" Feb 8 23:42:53.044947 env[1338]: time="2024-02-08T23:42:53.043428954Z" level=info msg="StartContainer for \"aa0817f1431f4ad33c22d80dc023d0de10fbf0515d0b5537869bdced97b4ffe5\"" Feb 8 23:42:53.062169 systemd[1]: Started cri-containerd-aa0817f1431f4ad33c22d80dc023d0de10fbf0515d0b5537869bdced97b4ffe5.scope. Feb 8 23:42:53.090684 systemd[1]: cri-containerd-aa0817f1431f4ad33c22d80dc023d0de10fbf0515d0b5537869bdced97b4ffe5.scope: Deactivated successfully. Feb 8 23:42:53.096440 env[1338]: time="2024-02-08T23:42:53.096354772Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod370cfd22_b79f_4ac3_a2bb_96bae6d0c01e.slice/cri-containerd-aa0817f1431f4ad33c22d80dc023d0de10fbf0515d0b5537869bdced97b4ffe5.scope/memory.events\": no such file or directory" Feb 8 23:42:53.097956 env[1338]: time="2024-02-08T23:42:53.097916781Z" level=info msg="StartContainer for \"aa0817f1431f4ad33c22d80dc023d0de10fbf0515d0b5537869bdced97b4ffe5\" returns successfully" Feb 8 23:42:53.195687 kubelet[2466]: I0208 23:42:53.195653 2466 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:42:53.226584 kubelet[2466]: I0208 23:42:53.226543 2466 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:42:53.231252 kubelet[2466]: I0208 23:42:53.231224 2466 topology_manager.go:210] "Topology Admit Handler" Feb 8 23:42:53.234083 systemd[1]: Created slice kubepods-burstable-podca2ab598_9ad3_491c_a55f_7bff4fb306c3.slice. Feb 8 23:42:53.245400 systemd[1]: Created slice kubepods-burstable-pod5bf3bae8_85f8_4837_bcec_7dbbe44ec3e2.slice. 
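The node has just reported Ready ("Fast updating node status as it just became ready"), so the two coredns pods are admitted and their sandboxes are attempted next. A quick way to confirm the readiness transition from the API side, assuming kubectl access (illustrative, not part of this log), would be:

  kubectl get node ci-3510.3.2-a-65dd02f9dc -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
  # expected output: True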
Feb 8 23:42:53.339025 kubelet[2466]: I0208 23:42:53.338977 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ca2ab598-9ad3-491c-a55f-7bff4fb306c3-config-volume\") pod \"coredns-787d4945fb-mf4lk\" (UID: \"ca2ab598-9ad3-491c-a55f-7bff4fb306c3\") " pod="kube-system/coredns-787d4945fb-mf4lk" Feb 8 23:42:53.339284 kubelet[2466]: I0208 23:42:53.339068 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlgb7\" (UniqueName: \"kubernetes.io/projected/ca2ab598-9ad3-491c-a55f-7bff4fb306c3-kube-api-access-zlgb7\") pod \"coredns-787d4945fb-mf4lk\" (UID: \"ca2ab598-9ad3-491c-a55f-7bff4fb306c3\") " pod="kube-system/coredns-787d4945fb-mf4lk" Feb 8 23:42:53.339284 kubelet[2466]: I0208 23:42:53.339134 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5bf3bae8-85f8-4837-bcec-7dbbe44ec3e2-config-volume\") pod \"coredns-787d4945fb-l7pdz\" (UID: \"5bf3bae8-85f8-4837-bcec-7dbbe44ec3e2\") " pod="kube-system/coredns-787d4945fb-l7pdz" Feb 8 23:42:53.339284 kubelet[2466]: I0208 23:42:53.339169 2466 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v79zt\" (UniqueName: \"kubernetes.io/projected/5bf3bae8-85f8-4837-bcec-7dbbe44ec3e2-kube-api-access-v79zt\") pod \"coredns-787d4945fb-l7pdz\" (UID: \"5bf3bae8-85f8-4837-bcec-7dbbe44ec3e2\") " pod="kube-system/coredns-787d4945fb-l7pdz" Feb 8 23:42:53.543670 env[1338]: time="2024-02-08T23:42:53.543523160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-mf4lk,Uid:ca2ab598-9ad3-491c-a55f-7bff4fb306c3,Namespace:kube-system,Attempt:0,}" Feb 8 23:42:53.552104 env[1338]: time="2024-02-08T23:42:53.552060612Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-l7pdz,Uid:5bf3bae8-85f8-4837-bcec-7dbbe44ec3e2,Namespace:kube-system,Attempt:0,}" Feb 8 23:42:53.587493 env[1338]: time="2024-02-08T23:42:53.587421924Z" level=info msg="shim disconnected" id=aa0817f1431f4ad33c22d80dc023d0de10fbf0515d0b5537869bdced97b4ffe5 Feb 8 23:42:53.587493 env[1338]: time="2024-02-08T23:42:53.587477825Z" level=warning msg="cleaning up after shim disconnected" id=aa0817f1431f4ad33c22d80dc023d0de10fbf0515d0b5537869bdced97b4ffe5 namespace=k8s.io Feb 8 23:42:53.587493 env[1338]: time="2024-02-08T23:42:53.587489625Z" level=info msg="cleaning up dead shim" Feb 8 23:42:53.596083 env[1338]: time="2024-02-08T23:42:53.596039476Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:42:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2892 runtime=io.containerd.runc.v2\n" Feb 8 23:42:53.664183 env[1338]: time="2024-02-08T23:42:53.664078485Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-mf4lk,Uid:ca2ab598-9ad3-491c-a55f-7bff4fb306c3,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"93d1ca39dfd6bcd5f96e2c59aff4905ddb01712e478218f3512979791875beb2\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 8 23:42:53.664553 kubelet[2466]: E0208 23:42:53.664464 2466 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93d1ca39dfd6bcd5f96e2c59aff4905ddb01712e478218f3512979791875beb2\": plugin 
type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 8 23:42:53.664687 kubelet[2466]: E0208 23:42:53.664565 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93d1ca39dfd6bcd5f96e2c59aff4905ddb01712e478218f3512979791875beb2\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-mf4lk" Feb 8 23:42:53.664687 kubelet[2466]: E0208 23:42:53.664598 2466 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93d1ca39dfd6bcd5f96e2c59aff4905ddb01712e478218f3512979791875beb2\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-mf4lk" Feb 8 23:42:53.664687 kubelet[2466]: E0208 23:42:53.664668 2466 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-mf4lk_kube-system(ca2ab598-9ad3-491c-a55f-7bff4fb306c3)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-787d4945fb-mf4lk_kube-system(ca2ab598-9ad3-491c-a55f-7bff4fb306c3)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93d1ca39dfd6bcd5f96e2c59aff4905ddb01712e478218f3512979791875beb2\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-mf4lk" podUID=ca2ab598-9ad3-491c-a55f-7bff4fb306c3 Feb 8 23:42:53.669190 env[1338]: time="2024-02-08T23:42:53.669127316Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-l7pdz,Uid:5bf3bae8-85f8-4837-bcec-7dbbe44ec3e2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7140601aedc9b70e532b1c55dd3c4fe6fcc7f8559d2eaea5185483f637563f8c\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 8 23:42:53.669429 kubelet[2466]: E0208 23:42:53.669410 2466 remote_runtime.go:176] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7140601aedc9b70e532b1c55dd3c4fe6fcc7f8559d2eaea5185483f637563f8c\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" Feb 8 23:42:53.669518 kubelet[2466]: E0208 23:42:53.669460 2466 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7140601aedc9b70e532b1c55dd3c4fe6fcc7f8559d2eaea5185483f637563f8c\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-l7pdz" Feb 8 23:42:53.669518 kubelet[2466]: E0208 23:42:53.669490 2466 kuberuntime_manager.go:782] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7140601aedc9b70e532b1c55dd3c4fe6fcc7f8559d2eaea5185483f637563f8c\": plugin type=\"flannel\" failed (add): open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-787d4945fb-l7pdz" Feb 8 23:42:53.669610 kubelet[2466]: E0208 23:42:53.669554 2466 pod_workers.go:965] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-787d4945fb-l7pdz_kube-system(5bf3bae8-85f8-4837-bcec-7dbbe44ec3e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-787d4945fb-l7pdz_kube-system(5bf3bae8-85f8-4837-bcec-7dbbe44ec3e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7140601aedc9b70e532b1c55dd3c4fe6fcc7f8559d2eaea5185483f637563f8c\\\": plugin type=\\\"flannel\\\" failed (add): open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-787d4945fb-l7pdz" podUID=5bf3bae8-85f8-4837-bcec-7dbbe44ec3e2 Feb 8 23:42:54.031087 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aa0817f1431f4ad33c22d80dc023d0de10fbf0515d0b5537869bdced97b4ffe5-rootfs.mount: Deactivated successfully. Feb 8 23:42:54.222576 env[1338]: time="2024-02-08T23:42:54.222513625Z" level=info msg="CreateContainer within sandbox \"c95ce8fa5d56ea9fc207570a69bbe354267037b80b2bb091edbcf8bed252f1ab\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Feb 8 23:42:54.251839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount529211366.mount: Deactivated successfully. Feb 8 23:42:54.263197 env[1338]: time="2024-02-08T23:42:54.263131666Z" level=info msg="CreateContainer within sandbox \"c95ce8fa5d56ea9fc207570a69bbe354267037b80b2bb091edbcf8bed252f1ab\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"16c01f81d7e155cdfc6e38b4c31e48feb698b43e1521d03c5826de71385bf373\"" Feb 8 23:42:54.265107 env[1338]: time="2024-02-08T23:42:54.263665769Z" level=info msg="StartContainer for \"16c01f81d7e155cdfc6e38b4c31e48feb698b43e1521d03c5826de71385bf373\"" Feb 8 23:42:54.286690 systemd[1]: Started cri-containerd-16c01f81d7e155cdfc6e38b4c31e48feb698b43e1521d03c5826de71385bf373.scope. Feb 8 23:42:54.318365 env[1338]: time="2024-02-08T23:42:54.318312393Z" level=info msg="StartContainer for \"16c01f81d7e155cdfc6e38b4c31e48feb698b43e1521d03c5826de71385bf373\" returns successfully" Feb 8 23:42:55.710885 systemd-networkd[1493]: flannel.1: Link UP Feb 8 23:42:55.710895 systemd-networkd[1493]: flannel.1: Gained carrier Feb 8 23:42:57.067439 systemd-networkd[1493]: flannel.1: Gained IPv6LL Feb 8 23:43:07.164609 env[1338]: time="2024-02-08T23:43:07.164535413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-mf4lk,Uid:ca2ab598-9ad3-491c-a55f-7bff4fb306c3,Namespace:kube-system,Attempt:0,}" Feb 8 23:43:07.165959 env[1338]: time="2024-02-08T23:43:07.165909220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-l7pdz,Uid:5bf3bae8-85f8-4837-bcec-7dbbe44ec3e2,Namespace:kube-system,Attempt:0,}" Feb 8 23:43:07.226009 systemd-networkd[1493]: cni0: Link UP Feb 8 23:43:07.226020 systemd-networkd[1493]: cni0: Gained carrier Feb 8 23:43:07.230847 systemd-networkd[1493]: cni0: Lost carrier Feb 8 23:43:07.259702 systemd-networkd[1493]: vethc2c6f377: Link UP Feb 8 23:43:07.267562 kernel: cni0: port 1(vethc2c6f377) entered blocking state Feb 8 23:43:07.267690 kernel: cni0: port 1(vethc2c6f377) entered disabled state Feb 8 23:43:07.276929 kernel: device vethc2c6f377 entered promiscuous mode Feb 8 23:43:07.277031 kernel: cni0: port 1(vethc2c6f377) entered blocking state Feb 8 23:43:07.277054 kernel: cni0: port 1(vethc2c6f377) entered forwarding state Feb 8 23:43:07.284525 kernel: cni0: port 1(vethc2c6f377) entered disabled state Feb 8 23:43:07.285231 systemd-networkd[1493]: vethec780372: Link UP Feb 8 23:43:07.292182 kernel: cni0: port 2(vethec780372) entered blocking state Feb 8 23:43:07.292265 kernel: cni0: port 2(vethec780372) entered disabled state Feb 8 23:43:07.298074 kernel: device vethec780372 entered promiscuous mode Feb 8 23:43:07.298135 kernel: 
cni0: port 2(vethec780372) entered blocking state Feb 8 23:43:07.298156 kernel: cni0: port 2(vethec780372) entered forwarding state Feb 8 23:43:07.303939 kernel: cni0: port 2(vethec780372) entered disabled state Feb 8 23:43:07.314582 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethc2c6f377: link becomes ready Feb 8 23:43:07.314655 kernel: cni0: port 1(vethc2c6f377) entered blocking state Feb 8 23:43:07.314685 kernel: cni0: port 1(vethc2c6f377) entered forwarding state Feb 8 23:43:07.314616 systemd-networkd[1493]: vethc2c6f377: Gained carrier Feb 8 23:43:07.315109 systemd-networkd[1493]: cni0: Gained carrier Feb 8 23:43:07.322890 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): vethec780372: link becomes ready Feb 8 23:43:07.322954 kernel: cni0: port 2(vethec780372) entered blocking state Feb 8 23:43:07.322979 kernel: cni0: port 2(vethec780372) entered forwarding state Feb 8 23:43:07.326068 systemd-networkd[1493]: vethec780372: Gained carrier Feb 8 23:43:07.328953 env[1338]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000016928), "name":"cbr0", "type":"bridge"} Feb 8 23:43:07.329275 env[1338]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"} Feb 8 23:43:07.329275 env[1338]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc0000a08e8), "name":"cbr0", "type":"bridge"} Feb 8 23:43:07.343677 env[1338]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-02-08T23:43:07.343610016Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:43:07.343857 env[1338]: time="2024-02-08T23:43:07.343649316Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:43:07.343857 env[1338]: time="2024-02-08T23:43:07.343662716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:43:07.345020 env[1338]: time="2024-02-08T23:43:07.344964523Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/a2bb22a863e54b0acae72b0643b4f904d2fcc7a96902b22a7e3e5fd3e83e8c14 pid=3183 runtime=io.containerd.runc.v2 Feb 8 23:43:07.353857 env[1338]: time="2024-02-08T23:43:07.353775367Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:43:07.353965 env[1338]: time="2024-02-08T23:43:07.353870268Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:43:07.353965 env[1338]: time="2024-02-08T23:43:07.353914168Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:43:07.354123 env[1338]: time="2024-02-08T23:43:07.354076669Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b4f952e70c1b1870da656276a06ae79ee70e6081643a38152869d15622229b2f pid=3201 runtime=io.containerd.runc.v2 Feb 8 23:43:07.365412 systemd[1]: Started cri-containerd-a2bb22a863e54b0acae72b0643b4f904d2fcc7a96902b22a7e3e5fd3e83e8c14.scope. Feb 8 23:43:07.382217 systemd[1]: Started cri-containerd-b4f952e70c1b1870da656276a06ae79ee70e6081643a38152869d15622229b2f.scope. Feb 8 23:43:07.435855 env[1338]: time="2024-02-08T23:43:07.435718781Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-mf4lk,Uid:ca2ab598-9ad3-491c-a55f-7bff4fb306c3,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2bb22a863e54b0acae72b0643b4f904d2fcc7a96902b22a7e3e5fd3e83e8c14\"" Feb 8 23:43:07.442881 env[1338]: time="2024-02-08T23:43:07.442827617Z" level=info msg="CreateContainer within sandbox \"a2bb22a863e54b0acae72b0643b4f904d2fcc7a96902b22a7e3e5fd3e83e8c14\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:43:07.451675 env[1338]: time="2024-02-08T23:43:07.451638061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-787d4945fb-l7pdz,Uid:5bf3bae8-85f8-4837-bcec-7dbbe44ec3e2,Namespace:kube-system,Attempt:0,} returns sandbox id \"b4f952e70c1b1870da656276a06ae79ee70e6081643a38152869d15622229b2f\"" Feb 8 23:43:07.455420 env[1338]: time="2024-02-08T23:43:07.455386680Z" level=info msg="CreateContainer within sandbox \"b4f952e70c1b1870da656276a06ae79ee70e6081643a38152869d15622229b2f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:43:07.483464 env[1338]: time="2024-02-08T23:43:07.483426222Z" level=info msg="CreateContainer within sandbox \"a2bb22a863e54b0acae72b0643b4f904d2fcc7a96902b22a7e3e5fd3e83e8c14\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"82a2288737c85392c313dcd5d90571d877eec94c8823b2bc39f5acc3ef89e965\"" Feb 8 23:43:07.484162 env[1338]: time="2024-02-08T23:43:07.484113425Z" level=info msg="StartContainer for \"82a2288737c85392c313dcd5d90571d877eec94c8823b2bc39f5acc3ef89e965\"" Feb 8 23:43:07.498679 env[1338]: time="2024-02-08T23:43:07.498637098Z" level=info msg="CreateContainer within sandbox \"b4f952e70c1b1870da656276a06ae79ee70e6081643a38152869d15622229b2f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"997c61742db6484e67d3e14e7d9345f9989fcf0e7b2050b44e1c0415c46c480a\"" Feb 8 23:43:07.501671 env[1338]: time="2024-02-08T23:43:07.501618713Z" level=info msg="StartContainer 
for \"997c61742db6484e67d3e14e7d9345f9989fcf0e7b2050b44e1c0415c46c480a\"" Feb 8 23:43:07.510141 systemd[1]: Started cri-containerd-82a2288737c85392c313dcd5d90571d877eec94c8823b2bc39f5acc3ef89e965.scope. Feb 8 23:43:07.543632 systemd[1]: Started cri-containerd-997c61742db6484e67d3e14e7d9345f9989fcf0e7b2050b44e1c0415c46c480a.scope. Feb 8 23:43:07.572957 env[1338]: time="2024-02-08T23:43:07.572911473Z" level=info msg="StartContainer for \"82a2288737c85392c313dcd5d90571d877eec94c8823b2bc39f5acc3ef89e965\" returns successfully" Feb 8 23:43:07.595428 env[1338]: time="2024-02-08T23:43:07.595364086Z" level=info msg="StartContainer for \"997c61742db6484e67d3e14e7d9345f9989fcf0e7b2050b44e1c0415c46c480a\" returns successfully" Feb 8 23:43:08.262831 kubelet[2466]: I0208 23:43:08.262275 2466 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-l7pdz" podStartSLOduration=21.262219236 pod.CreationTimestamp="2024-02-08 23:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:43:08.261864235 +0000 UTC m=+33.276170274" watchObservedRunningTime="2024-02-08 23:43:08.262219236 +0000 UTC m=+33.276525375" Feb 8 23:43:08.262831 kubelet[2466]: I0208 23:43:08.262680 2466 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-ddw26" podStartSLOduration=-9.223372015592142e+09 pod.CreationTimestamp="2024-02-08 23:42:47 +0000 UTC" firstStartedPulling="2024-02-08 23:42:47.568097625 +0000 UTC m=+12.582403664" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:42:55.244861766 +0000 UTC m=+20.259167805" watchObservedRunningTime="2024-02-08 23:43:08.262633438 +0000 UTC m=+33.276939477" Feb 8 23:43:08.304788 kubelet[2466]: I0208 23:43:08.304747 2466 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-787d4945fb-mf4lk" podStartSLOduration=21.304711048 pod.CreationTimestamp="2024-02-08 23:42:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:43:08.289566473 +0000 UTC m=+33.303872512" watchObservedRunningTime="2024-02-08 23:43:08.304711048 +0000 UTC m=+33.319017087" Feb 8 23:43:08.843468 systemd-networkd[1493]: vethec780372: Gained IPv6LL Feb 8 23:43:08.907721 systemd-networkd[1493]: vethc2c6f377: Gained IPv6LL Feb 8 23:43:09.227412 systemd-networkd[1493]: cni0: Gained IPv6LL Feb 8 23:45:14.457484 systemd[1]: Started sshd@5-10.200.8.20:22-10.200.12.6:58720.service. Feb 8 23:45:15.072737 sshd[3842]: Accepted publickey for core from 10.200.12.6 port 58720 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:45:15.074401 sshd[3842]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:45:15.079622 systemd[1]: Started session-8.scope. Feb 8 23:45:15.080079 systemd-logind[1326]: New session 8 of user core. Feb 8 23:45:15.739619 sshd[3842]: pam_unix(sshd:session): session closed for user core Feb 8 23:45:15.743122 systemd[1]: sshd@5-10.200.8.20:22-10.200.12.6:58720.service: Deactivated successfully. Feb 8 23:45:15.744323 systemd[1]: session-8.scope: Deactivated successfully. Feb 8 23:45:15.745226 systemd-logind[1326]: Session 8 logged out. Waiting for processes to exit. Feb 8 23:45:15.746090 systemd-logind[1326]: Removed session 8. 
Feb 8 23:45:20.844954 systemd[1]: Started sshd@6-10.200.8.20:22-10.200.12.6:38642.service. Feb 8 23:45:21.458748 sshd[3888]: Accepted publickey for core from 10.200.12.6 port 38642 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:45:21.460441 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:45:21.466258 systemd-logind[1326]: New session 9 of user core. Feb 8 23:45:21.466444 systemd[1]: Started session-9.scope. Feb 8 23:45:21.950359 sshd[3888]: pam_unix(sshd:session): session closed for user core Feb 8 23:45:21.953270 systemd[1]: sshd@6-10.200.8.20:22-10.200.12.6:38642.service: Deactivated successfully. Feb 8 23:45:21.954594 systemd[1]: session-9.scope: Deactivated successfully. Feb 8 23:45:21.955019 systemd-logind[1326]: Session 9 logged out. Waiting for processes to exit. Feb 8 23:45:21.956069 systemd-logind[1326]: Removed session 9. Feb 8 23:45:27.057993 systemd[1]: Started sshd@7-10.200.8.20:22-10.200.12.6:55850.service. Feb 8 23:45:27.681863 sshd[3925]: Accepted publickey for core from 10.200.12.6 port 55850 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:45:27.683653 sshd[3925]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:45:27.688278 systemd-logind[1326]: New session 10 of user core. Feb 8 23:45:27.689096 systemd[1]: Started session-10.scope. Feb 8 23:45:28.182364 sshd[3925]: pam_unix(sshd:session): session closed for user core Feb 8 23:45:28.185587 systemd[1]: sshd@7-10.200.8.20:22-10.200.12.6:55850.service: Deactivated successfully. Feb 8 23:45:28.186595 systemd[1]: session-10.scope: Deactivated successfully. Feb 8 23:45:28.187332 systemd-logind[1326]: Session 10 logged out. Waiting for processes to exit. Feb 8 23:45:28.188131 systemd-logind[1326]: Removed session 10. Feb 8 23:45:33.292288 systemd[1]: Started sshd@8-10.200.8.20:22-10.200.12.6:55856.service. Feb 8 23:45:33.911608 sshd[3957]: Accepted publickey for core from 10.200.12.6 port 55856 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:45:33.913427 sshd[3957]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:45:33.919575 systemd[1]: Started session-11.scope. Feb 8 23:45:33.921236 systemd-logind[1326]: New session 11 of user core. Feb 8 23:45:34.417082 sshd[3957]: pam_unix(sshd:session): session closed for user core Feb 8 23:45:34.420582 systemd[1]: sshd@8-10.200.8.20:22-10.200.12.6:55856.service: Deactivated successfully. Feb 8 23:45:34.421685 systemd[1]: session-11.scope: Deactivated successfully. Feb 8 23:45:34.422569 systemd-logind[1326]: Session 11 logged out. Waiting for processes to exit. Feb 8 23:45:34.423648 systemd-logind[1326]: Removed session 11. Feb 8 23:45:34.526039 systemd[1]: Started sshd@9-10.200.8.20:22-10.200.12.6:55868.service. Feb 8 23:45:35.164880 sshd[3972]: Accepted publickey for core from 10.200.12.6 port 55868 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:45:35.167031 sshd[3972]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:45:35.172702 systemd[1]: Started session-12.scope. Feb 8 23:45:35.173200 systemd-logind[1326]: New session 12 of user core. Feb 8 23:45:35.775814 sshd[3972]: pam_unix(sshd:session): session closed for user core Feb 8 23:45:35.779339 systemd[1]: sshd@9-10.200.8.20:22-10.200.12.6:55868.service: Deactivated successfully. Feb 8 23:45:35.780511 systemd[1]: session-12.scope: Deactivated successfully. 
Feb 8 23:45:35.781360 systemd-logind[1326]: Session 12 logged out. Waiting for processes to exit. Feb 8 23:45:35.782225 systemd-logind[1326]: Removed session 12. Feb 8 23:45:35.882073 systemd[1]: Started sshd@10-10.200.8.20:22-10.200.12.6:55872.service. Feb 8 23:45:36.500161 sshd[3996]: Accepted publickey for core from 10.200.12.6 port 55872 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:45:36.501797 sshd[3996]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:45:36.507538 systemd[1]: Started session-13.scope. Feb 8 23:45:36.508148 systemd-logind[1326]: New session 13 of user core. Feb 8 23:45:36.996547 sshd[3996]: pam_unix(sshd:session): session closed for user core Feb 8 23:45:36.999702 systemd[1]: sshd@10-10.200.8.20:22-10.200.12.6:55872.service: Deactivated successfully. Feb 8 23:45:37.000613 systemd[1]: session-13.scope: Deactivated successfully. Feb 8 23:45:37.001677 systemd-logind[1326]: Session 13 logged out. Waiting for processes to exit. Feb 8 23:45:37.002537 systemd-logind[1326]: Removed session 13. Feb 8 23:45:42.110827 systemd[1]: Started sshd@11-10.200.8.20:22-10.200.12.6:49666.service. Feb 8 23:45:42.731681 sshd[4033]: Accepted publickey for core from 10.200.12.6 port 49666 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:45:42.733439 sshd[4033]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:45:42.738547 systemd-logind[1326]: New session 14 of user core. Feb 8 23:45:42.739064 systemd[1]: Started session-14.scope. Feb 8 23:45:43.229092 sshd[4033]: pam_unix(sshd:session): session closed for user core Feb 8 23:45:43.232703 systemd[1]: sshd@11-10.200.8.20:22-10.200.12.6:49666.service: Deactivated successfully. Feb 8 23:45:43.233713 systemd[1]: session-14.scope: Deactivated successfully. Feb 8 23:45:43.234489 systemd-logind[1326]: Session 14 logged out. Waiting for processes to exit. Feb 8 23:45:43.235341 systemd-logind[1326]: Removed session 14. Feb 8 23:45:43.334305 systemd[1]: Started sshd@12-10.200.8.20:22-10.200.12.6:49676.service. Feb 8 23:45:43.964246 sshd[4045]: Accepted publickey for core from 10.200.12.6 port 49676 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:45:43.966058 sshd[4045]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:45:43.971781 systemd[1]: Started session-15.scope. Feb 8 23:45:43.972273 systemd-logind[1326]: New session 15 of user core. Feb 8 23:45:44.515090 sshd[4045]: pam_unix(sshd:session): session closed for user core Feb 8 23:45:44.518296 systemd[1]: sshd@12-10.200.8.20:22-10.200.12.6:49676.service: Deactivated successfully. Feb 8 23:45:44.519352 systemd[1]: session-15.scope: Deactivated successfully. Feb 8 23:45:44.520042 systemd-logind[1326]: Session 15 logged out. Waiting for processes to exit. Feb 8 23:45:44.520877 systemd-logind[1326]: Removed session 15. Feb 8 23:45:44.622039 systemd[1]: Started sshd@13-10.200.8.20:22-10.200.12.6:49682.service. Feb 8 23:45:45.252507 sshd[4055]: Accepted publickey for core from 10.200.12.6 port 49682 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:45:45.255132 sshd[4055]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:45:45.260396 systemd[1]: Started session-16.scope. Feb 8 23:45:45.260899 systemd-logind[1326]: New session 16 of user core. 
Feb 8 23:45:46.750594 sshd[4055]: pam_unix(sshd:session): session closed for user core Feb 8 23:45:46.754505 systemd[1]: sshd@13-10.200.8.20:22-10.200.12.6:49682.service: Deactivated successfully. Feb 8 23:45:46.755440 systemd[1]: session-16.scope: Deactivated successfully. Feb 8 23:45:46.756162 systemd-logind[1326]: Session 16 logged out. Waiting for processes to exit. Feb 8 23:45:46.757034 systemd-logind[1326]: Removed session 16. Feb 8 23:45:46.855128 systemd[1]: Started sshd@14-10.200.8.20:22-10.200.12.6:49686.service. Feb 8 23:45:47.472102 sshd[4132]: Accepted publickey for core from 10.200.12.6 port 49686 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:45:47.473197 sshd[4132]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:45:47.478940 systemd[1]: Started session-17.scope. Feb 8 23:45:47.479452 systemd-logind[1326]: New session 17 of user core. Feb 8 23:45:48.073345 sshd[4132]: pam_unix(sshd:session): session closed for user core Feb 8 23:45:48.076192 systemd[1]: sshd@14-10.200.8.20:22-10.200.12.6:49686.service: Deactivated successfully. Feb 8 23:45:48.077296 systemd[1]: session-17.scope: Deactivated successfully. Feb 8 23:45:48.078218 systemd-logind[1326]: Session 17 logged out. Waiting for processes to exit. Feb 8 23:45:48.079045 systemd-logind[1326]: Removed session 17. Feb 8 23:45:48.177938 systemd[1]: Started sshd@15-10.200.8.20:22-10.200.12.6:53598.service. Feb 8 23:45:48.795815 sshd[4148]: Accepted publickey for core from 10.200.12.6 port 53598 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:45:48.797348 sshd[4148]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:45:48.802463 systemd[1]: Started session-18.scope. Feb 8 23:45:48.803052 systemd-logind[1326]: New session 18 of user core. Feb 8 23:45:49.291666 sshd[4148]: pam_unix(sshd:session): session closed for user core Feb 8 23:45:49.295121 systemd[1]: sshd@15-10.200.8.20:22-10.200.12.6:53598.service: Deactivated successfully. Feb 8 23:45:49.296287 systemd[1]: session-18.scope: Deactivated successfully. Feb 8 23:45:49.297135 systemd-logind[1326]: Session 18 logged out. Waiting for processes to exit. Feb 8 23:45:49.298210 systemd-logind[1326]: Removed session 18. Feb 8 23:45:54.399307 systemd[1]: Started sshd@16-10.200.8.20:22-10.200.12.6:53606.service. Feb 8 23:45:55.037105 sshd[4207]: Accepted publickey for core from 10.200.12.6 port 53606 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:45:55.039269 sshd[4207]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:45:55.045003 systemd-logind[1326]: New session 19 of user core. Feb 8 23:45:55.045585 systemd[1]: Started session-19.scope. Feb 8 23:45:55.549094 sshd[4207]: pam_unix(sshd:session): session closed for user core Feb 8 23:45:55.552545 systemd[1]: sshd@16-10.200.8.20:22-10.200.12.6:53606.service: Deactivated successfully. Feb 8 23:45:55.553730 systemd[1]: session-19.scope: Deactivated successfully. Feb 8 23:45:55.554683 systemd-logind[1326]: Session 19 logged out. Waiting for processes to exit. Feb 8 23:45:55.555724 systemd-logind[1326]: Removed session 19. Feb 8 23:46:00.656332 systemd[1]: Started sshd@17-10.200.8.20:22-10.200.12.6:39556.service. 
Feb 8 23:46:01.279324 sshd[4249]: Accepted publickey for core from 10.200.12.6 port 39556 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:46:01.280929 sshd[4249]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:46:01.286292 systemd-logind[1326]: New session 20 of user core. Feb 8 23:46:01.286785 systemd[1]: Started session-20.scope. Feb 8 23:46:01.775594 sshd[4249]: pam_unix(sshd:session): session closed for user core Feb 8 23:46:01.779112 systemd[1]: sshd@17-10.200.8.20:22-10.200.12.6:39556.service: Deactivated successfully. Feb 8 23:46:01.780350 systemd[1]: session-20.scope: Deactivated successfully. Feb 8 23:46:01.781272 systemd-logind[1326]: Session 20 logged out. Waiting for processes to exit. Feb 8 23:46:01.782371 systemd-logind[1326]: Removed session 20. Feb 8 23:46:06.880854 systemd[1]: Started sshd@18-10.200.8.20:22-10.200.12.6:39568.service. Feb 8 23:46:07.499357 sshd[4279]: Accepted publickey for core from 10.200.12.6 port 39568 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo Feb 8 23:46:07.501460 sshd[4279]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:46:07.506796 systemd-logind[1326]: New session 21 of user core. Feb 8 23:46:07.507946 systemd[1]: Started session-21.scope. Feb 8 23:46:07.991537 sshd[4279]: pam_unix(sshd:session): session closed for user core Feb 8 23:46:07.994653 systemd[1]: sshd@18-10.200.8.20:22-10.200.12.6:39568.service: Deactivated successfully. Feb 8 23:46:07.995658 systemd[1]: session-21.scope: Deactivated successfully. Feb 8 23:46:07.996402 systemd-logind[1326]: Session 21 logged out. Waiting for processes to exit. Feb 8 23:46:07.997410 systemd-logind[1326]: Removed session 21. Feb 8 23:46:24.088060 systemd[1]: cri-containerd-34e1a814aa2697ff8deb869dab79d98dd89cc4240928fe1e1222a914b42fff33.scope: Deactivated successfully. Feb 8 23:46:24.088486 systemd[1]: cri-containerd-34e1a814aa2697ff8deb869dab79d98dd89cc4240928fe1e1222a914b42fff33.scope: Consumed 3.427s CPU time. Feb 8 23:46:24.112960 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-34e1a814aa2697ff8deb869dab79d98dd89cc4240928fe1e1222a914b42fff33-rootfs.mount: Deactivated successfully. Feb 8 23:46:24.137372 env[1338]: time="2024-02-08T23:46:24.137313014Z" level=info msg="shim disconnected" id=34e1a814aa2697ff8deb869dab79d98dd89cc4240928fe1e1222a914b42fff33 Feb 8 23:46:24.137372 env[1338]: time="2024-02-08T23:46:24.137367915Z" level=warning msg="cleaning up after shim disconnected" id=34e1a814aa2697ff8deb869dab79d98dd89cc4240928fe1e1222a914b42fff33 namespace=k8s.io Feb 8 23:46:24.137372 env[1338]: time="2024-02-08T23:46:24.137380015Z" level=info msg="cleaning up dead shim" Feb 8 23:46:24.146759 env[1338]: time="2024-02-08T23:46:24.146706183Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:46:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4364 runtime=io.containerd.runc.v2\n" Feb 8 23:46:24.642959 kubelet[2466]: I0208 23:46:24.642913 2466 scope.go:115] "RemoveContainer" containerID="34e1a814aa2697ff8deb869dab79d98dd89cc4240928fe1e1222a914b42fff33" Feb 8 23:46:24.646933 env[1338]: time="2024-02-08T23:46:24.646878335Z" level=info msg="CreateContainer within sandbox \"cd2ad050b7e3b12074b7b2aded1f694e1ff705e4409f390e56b32f124aa94cb2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 8 23:46:24.674617 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount696261623.mount: Deactivated successfully. 
Feb 8 23:46:24.688784 env[1338]: time="2024-02-08T23:46:24.688689540Z" level=info msg="CreateContainer within sandbox \"cd2ad050b7e3b12074b7b2aded1f694e1ff705e4409f390e56b32f124aa94cb2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c816fb1916b8fea70be0f5031c9b472346be1cefa99a1e11289315ac9fa8b60f\"" Feb 8 23:46:24.689410 env[1338]: time="2024-02-08T23:46:24.689380245Z" level=info msg="StartContainer for \"c816fb1916b8fea70be0f5031c9b472346be1cefa99a1e11289315ac9fa8b60f\"" Feb 8 23:46:24.708558 systemd[1]: Started cri-containerd-c816fb1916b8fea70be0f5031c9b472346be1cefa99a1e11289315ac9fa8b60f.scope. Feb 8 23:46:24.761560 env[1338]: time="2024-02-08T23:46:24.761495972Z" level=info msg="StartContainer for \"c816fb1916b8fea70be0f5031c9b472346be1cefa99a1e11289315ac9fa8b60f\" returns successfully" Feb 8 23:46:26.223722 kubelet[2466]: E0208 23:46:26.223652 2466 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.20:48260->10.200.8.13:2379: read: connection timed out Feb 8 23:46:26.227072 systemd[1]: cri-containerd-530bb1adcd83ee99d6e555d5c2c8d4fdb88549ba98989d9569abf74d9bc26e50.scope: Deactivated successfully. Feb 8 23:46:26.227418 systemd[1]: cri-containerd-530bb1adcd83ee99d6e555d5c2c8d4fdb88549ba98989d9569abf74d9bc26e50.scope: Consumed 1.475s CPU time. Feb 8 23:46:26.250985 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-530bb1adcd83ee99d6e555d5c2c8d4fdb88549ba98989d9569abf74d9bc26e50-rootfs.mount: Deactivated successfully. Feb 8 23:46:26.264277 env[1338]: time="2024-02-08T23:46:26.264226405Z" level=info msg="shim disconnected" id=530bb1adcd83ee99d6e555d5c2c8d4fdb88549ba98989d9569abf74d9bc26e50 Feb 8 23:46:26.264277 env[1338]: time="2024-02-08T23:46:26.264284005Z" level=warning msg="cleaning up after shim disconnected" id=530bb1adcd83ee99d6e555d5c2c8d4fdb88549ba98989d9569abf74d9bc26e50 namespace=k8s.io Feb 8 23:46:26.264820 env[1338]: time="2024-02-08T23:46:26.264296805Z" level=info msg="cleaning up dead shim" Feb 8 23:46:26.273124 env[1338]: time="2024-02-08T23:46:26.273073869Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:46:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4435 runtime=io.containerd.runc.v2\n" Feb 8 23:46:26.650968 kubelet[2466]: I0208 23:46:26.650833 2466 scope.go:115] "RemoveContainer" containerID="530bb1adcd83ee99d6e555d5c2c8d4fdb88549ba98989d9569abf74d9bc26e50" Feb 8 23:46:26.653645 env[1338]: time="2024-02-08T23:46:26.653593728Z" level=info msg="CreateContainer within sandbox \"2a00800cccc54a8f68b5f14af16ef5bd610e5eee4f029e6d99d1a95cf73457e5\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 8 23:46:26.677147 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2546683548.mount: Deactivated successfully. Feb 8 23:46:26.690500 env[1338]: time="2024-02-08T23:46:26.690444795Z" level=info msg="CreateContainer within sandbox \"2a00800cccc54a8f68b5f14af16ef5bd610e5eee4f029e6d99d1a95cf73457e5\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"d5a0aa8ce17608b8ebb5a41284f703693df33f63cda0b2325e43e12865670434\"" Feb 8 23:46:26.691108 env[1338]: time="2024-02-08T23:46:26.691073599Z" level=info msg="StartContainer for \"d5a0aa8ce17608b8ebb5a41284f703693df33f63cda0b2325e43e12865670434\"" Feb 8 23:46:26.710268 systemd[1]: Started cri-containerd-d5a0aa8ce17608b8ebb5a41284f703693df33f63cda0b2325e43e12865670434.scope. 
Feb 8 23:46:26.768828 env[1338]: time="2024-02-08T23:46:26.768769163Z" level=info msg="StartContainer for \"d5a0aa8ce17608b8ebb5a41284f703693df33f63cda0b2325e43e12865670434\" returns successfully" Feb 8 23:46:34.220785 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.233882 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.247208 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.260816 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.274525 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.288385 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.288794 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.299871 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.305439 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.328440 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.341050 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.341229 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.341369 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.341500 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.341626 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.341753 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.353169 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.404320 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.404550 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.404694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.404828 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.404957 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.405085 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.405229 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.405362 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 
0x4 hv 0xc0000001 Feb 8 23:46:34.405490 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.417433 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.451334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.456871 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.457022 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.457164 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.457311 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.457450 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.457601 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.457726 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.457853 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.469256 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.480928 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.481075 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.481241 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.492125 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.503615 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.503955 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.504118 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.515956 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.522344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.538933 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.545005 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.545231 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.545382 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.545511 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.555838 
kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.567123 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.567294 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.567439 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.585462 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.625234 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.625530 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.625705 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.625867 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.626029 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.626206 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.626368 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.626523 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.626743 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.637271 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.637584 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.648529 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.660433 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.671681 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.682935 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.683086 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.683246 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.683381 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.683508 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.694360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.716378 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.716528 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.716640 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.733918 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.734094 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.734346 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.734479 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.734608 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.744488 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.751265 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.779255 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.796832 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.796990 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.797128 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.797283 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.797417 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.797544 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.797674 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.797803 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.808256 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.825873 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.848546 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.848800 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.848970 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.849212 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.849343 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.849468 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.849603 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.860206 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.876223 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.897277 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.903538 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.903680 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.903808 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.903935 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.904067 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.904218 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.904372 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.921022 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.937302 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.942955 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.954116 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.960037 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.960201 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.960338 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.960470 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.960600 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.960735 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.971408 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.982391 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:34.987873 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.004278 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.010131 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.010295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 8 23:46:35.010409 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.010518 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.021527 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.021730 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.021866 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.038835 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.044508 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.069435 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.075509 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.075644 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.075790 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.075923 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.076051 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.076199 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.086413 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.086739 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.098231 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.098504 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.109432 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.109670 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.115991 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.127027 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.127273 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.135938 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.144096 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.144369 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.155438 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.160965 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.161219 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.172219 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.177354 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.177597 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.193793 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.194109 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.194300 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.203559 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.208776 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.209011 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.218439 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.228303 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.228611 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.228752 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.244155 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.244510 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.244661 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.254241 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.254515 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.263824 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.264063 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.273478 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.273738 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.283196 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.288713 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.288864 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.303496 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.303771 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.303918 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.313044 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.313314 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.322984 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.328474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.328695 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.338021 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.338257 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.355289 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.371694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.388791 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.395803 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.395947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.396081 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.396227 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.396356 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.396487 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.396620 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.405141 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.405400 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.416803 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.422544 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.422763 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.433151 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.433385 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.444066 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.444335 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.455979 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.461800 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.462007 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.477797 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.478052 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.478210 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.489027 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.494732 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.494867 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.505766 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.505994 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.516925 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.517152 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.528565 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.550030 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.560915 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.561055 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.571834 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.572008 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.572123 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.572295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.577843 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 8 23:46:35.577986 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.578113 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.590059 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.608115 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.632865 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.633038 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.633185 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.633323 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.633457 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.633588 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.633715 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.641322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.641556 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.652005 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.652388 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.663052 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.663299 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.673774 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.674016 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.689524 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.689783 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.689918 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.700477 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.712479 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.712744 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.712872 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.718299 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.728884 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.729152 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.739597 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.739854 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.750281 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.750538 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.761912 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.762168 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.773014 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.773321 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.784181 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.784451 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.794992 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.795244 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.806133 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.806438 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.817542 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.817824 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.828724 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.828965 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.839539 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.839789 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.851200 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.851428 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.863194 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.863440 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.880190 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.923710 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.924133 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.924291 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.924420 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.924548 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.924676 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.924805 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.924936 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.925064 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.925198 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.935019 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.935326 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.946099 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.946334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.957083 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.957353 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.973312 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.973570 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.973713 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.984263 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:35.991614 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.034796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.034972 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.035114 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.035273 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.035405 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.035532 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.035667 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.035812 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.048210 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.059582 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.076350 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.087726 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.098929 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.099154 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.099452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.099591 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.099701 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.099823 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.099947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.100070 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.110616 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.110870 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.122298 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.155743 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.155923 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.156079 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.156231 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.156363 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.156487 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.156617 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 8 23:46:36.172699 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.211122 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.211313 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.211448 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.211578 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.211706 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.211841 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.211974 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.212100 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.212238 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.225048 kubelet[2466]: E0208 23:46:36.224771 2466 controller.go:189] failed to update lease, error: Put "https://10.200.8.20:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-65dd02f9dc?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers) Feb 8 23:46:36.235726 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.269383 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.269636 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.269779 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.269907 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.270035 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.270166 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.270315 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.270445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.270575 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.280930 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.318777 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.330386 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.330534 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 
0x4 hv 0xc0000001 Feb 8 23:46:36.330679 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.330813 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.330936 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.331058 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.331189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.331322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.331452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.341378 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.341688 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.351997 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.384255 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.384423 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.384556 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.384693 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.384822 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.384947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.385072 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.401278 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.401637 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.401805 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.407267 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.417826 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.429004 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.429136 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.434671 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.441744 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.441881 
kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.452140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.485021 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.496354 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.496518 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.496652 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.496780 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.496911 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.497030 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.497154 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.497285 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.507894 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.535853 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.552588 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.552868 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.553002 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.553151 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.553304 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.553442 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.553577 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.553699 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.569324 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.597548 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.603733 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.603897 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.604029 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.604161 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.604323 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.604458 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.604590 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.615809 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.616191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.621697 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.632603 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.632895 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.649448 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.655471 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.655619 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.655748 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.666789 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.667087 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.678341 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.678683 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.689596 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.689846 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.705576 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.705894 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.706031 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.717154 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.723475 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.734851 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.745914 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.751608 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.757213 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.757355 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.762893 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.768513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.768652 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.768782 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.780308 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.796963 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.808117 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.824944 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.825142 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.825306 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.825434 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.825563 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.825690 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.825818 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.842742 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.853981 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.854127 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.854288 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.854430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.866672 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.883766 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.883913 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.884045 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.884169 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 8 23:46:36.900869 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.922460 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.922668 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.934912 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.935060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.935206 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.935336 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.935459 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.935585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.946571 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.946974 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.957054 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.957288 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.969724 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.995075 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.995260 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.995389 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.995517 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:46:36.995638 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#284 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001