Feb 8 23:12:57.015734 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 8 23:12:57.015767 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:12:57.015781 kernel: BIOS-provided physical RAM map:
Feb 8 23:12:57.015790 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 8 23:12:57.015799 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 8 23:12:57.015808 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 8 23:12:57.015821 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 8 23:12:57.015832 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 8 23:12:57.015842 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 8 23:12:57.015851 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 8 23:12:57.015861 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 8 23:12:57.015871 kernel: printk: bootconsole [earlyser0] enabled
Feb 8 23:12:57.015880 kernel: NX (Execute Disable) protection: active
Feb 8 23:12:57.015890 kernel: efi: EFI v2.70 by Microsoft
Feb 8 23:12:57.015905 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 8 23:12:57.015916 kernel: random: crng init done
Feb 8 23:12:57.015927 kernel: SMBIOS 3.1.0 present.
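A quick sanity check on the map above, as a minimal Python sketch: summing the four "usable" e820 ranges lands within a few pages of the 8387460K total that the later "Memory:" line reports (the kernel's own accounting differs slightly).

```python
# Sum the "usable" BIOS-e820 ranges printed above.
# Ranges are inclusive, so each length is end - start + 1.
usable = [
    (0x0000000000000000, 0x000000000009ffff),
    (0x0000000000100000, 0x000000003ff40fff),
    (0x000000003ffff000, 0x000000003fffffff),
    (0x0000000100000000, 0x00000002bfffffff),
]
total_kib = sum(end - start + 1 for start, end in usable) // 1024
print(total_kib)  # 8387400 -- within a few pages of the "8387460K" reported later
```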
Feb 8 23:12:57.015938 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 8 23:12:57.015949 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 8 23:12:57.015960 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 8 23:12:57.015971 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 8 23:12:57.015982 kernel: Hyper-V: Nested features: 0x1e0101
Feb 8 23:12:57.015995 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 8 23:12:57.016005 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 8 23:12:57.016017 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 8 23:12:57.016027 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 8 23:12:57.016038 kernel: tsc: Detected 2593.905 MHz processor
Feb 8 23:12:57.016049 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 8 23:12:57.016060 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 8 23:12:57.016072 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 8 23:12:57.016083 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 8 23:12:57.016094 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 8 23:12:57.016108 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 8 23:12:57.016120 kernel: Using GB pages for direct mapping
Feb 8 23:12:57.016132 kernel: Secure boot disabled
Feb 8 23:12:57.016143 kernel: ACPI: Early table checksum verification disabled
Feb 8 23:12:57.016155 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 8 23:12:57.016167 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:12:57.016178 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:12:57.016190 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 8 23:12:57.016208 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 8 23:12:57.016220 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:12:57.016232 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:12:57.016244 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:12:57.016256 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:12:57.016268 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:12:57.016283 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:12:57.016295 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:12:57.016308 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 8 23:12:57.016320 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 8 23:12:57.016331 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 8 23:12:57.016343 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 8 23:12:57.016356 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 8 23:12:57.016368 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 8 23:12:57.016383 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 8 23:12:57.016396 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 8 23:12:57.016408 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 8 23:12:57.016420 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 8 23:12:57.016433 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 8 23:12:57.016445 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 8 23:12:57.016457 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 8 23:12:57.016469 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 8 23:12:57.016499 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 8 23:12:57.016514 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 8 23:12:57.016526 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 8 23:12:57.016538 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 8 23:12:57.016551 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 8 23:12:57.016564 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 8 23:12:57.016576 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 8 23:12:57.016589 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 8 23:12:57.016601 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 8 23:12:57.016614 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 8 23:12:57.016629 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 8 23:12:57.016642 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 8 23:12:57.016653 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 8 23:12:57.016665 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 8 23:12:57.016678 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 8 23:12:57.016689 kernel: NODE_DATA(0) allocated [mem 0x2bfff9000-0x2bfffefff]
Feb 8 23:12:57.016702 kernel: Zone ranges:
Feb 8 23:12:57.016714 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 8 23:12:57.016727 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 8 23:12:57.016742 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:12:57.016755 kernel: Movable zone start for each node
Feb 8 23:12:57.016768 kernel: Early memory node ranges
Feb 8 23:12:57.016781 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 8 23:12:57.016794 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 8 23:12:57.016806 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 8 23:12:57.016818 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:12:57.016831 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 8 23:12:57.016843 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 8 23:12:57.016858 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 8 23:12:57.016870 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 8 23:12:57.016882 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 8 23:12:57.016894 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 8 23:12:57.016906 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 8 23:12:57.016919 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:12:57.016930 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 8 23:12:57.016942 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 8 23:12:57.016954 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 8 23:12:57.016970 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 8 23:12:57.016983 kernel: Booting paravirtualized kernel on Hyper-V
Feb 8 23:12:57.016997 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 8 23:12:57.017010 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 8 23:12:57.017024 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 8 23:12:57.017036 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 8 23:12:57.017047 kernel: pcpu-alloc: [0] 0 1
Feb 8 23:12:57.017059 kernel: Hyper-V: PV spinlocks enabled
Feb 8 23:12:57.017070 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 8 23:12:57.017085 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 8 23:12:57.017097 kernel: Policy zone: Normal
Feb 8 23:12:57.017111 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:12:57.017123 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 8 23:12:57.017135 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 8 23:12:57.017147 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 8 23:12:57.017159 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 8 23:12:57.017171 kernel: Memory: 8073728K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 313472K reserved, 0K cma-reserved)
Feb 8 23:12:57.017186 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 8 23:12:57.017199 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 8 23:12:57.017220 kernel: ftrace: allocated 135 pages with 4 groups
Feb 8 23:12:57.017235 kernel: rcu: Hierarchical RCU implementation.
Feb 8 23:12:57.017249 kernel: rcu: RCU event tracing is enabled.
Feb 8 23:12:57.017262 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 8 23:12:57.017275 kernel: Rude variant of Tasks RCU enabled.
Feb 8 23:12:57.017288 kernel: Tracing variant of Tasks RCU enabled.
Feb 8 23:12:57.017301 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:12:57.017314 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 8 23:12:57.017327 kernel: Using NULL legacy PIC
Feb 8 23:12:57.017343 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 8 23:12:57.017356 kernel: Console: colour dummy device 80x25
Feb 8 23:12:57.017369 kernel: printk: console [tty1] enabled
Feb 8 23:12:57.017382 kernel: printk: console [ttyS0] enabled
Feb 8 23:12:57.017394 kernel: printk: bootconsole [earlyser0] disabled
Feb 8 23:12:57.017410 kernel: ACPI: Core revision 20210730
Feb 8 23:12:57.017423 kernel: Failed to register legacy timer interrupt
Feb 8 23:12:57.017436 kernel: APIC: Switch to symmetric I/O mode setup
Feb 8 23:12:57.017449 kernel: Hyper-V: Using IPI hypercalls
Feb 8 23:12:57.017462 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905)
Feb 8 23:12:57.017496 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 8 23:12:57.017510 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 8 23:12:57.017523 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 8 23:12:57.017536 kernel: Spectre V2 : Mitigation: Retpolines
Feb 8 23:12:57.017549 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 8 23:12:57.017564 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 8 23:12:57.017578 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 8 23:12:57.017590 kernel: RETBleed: Vulnerable
Feb 8 23:12:57.017603 kernel: Speculative Store Bypass: Vulnerable
Feb 8 23:12:57.017616 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:12:57.017629 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:12:57.017642 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 8 23:12:57.017655 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 8 23:12:57.017668 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 8 23:12:57.017680 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 8 23:12:57.017695 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 8 23:12:57.017707 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 8 23:12:57.017719 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 8 23:12:57.017733 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 8 23:12:57.017745 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 8 23:12:57.017758 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 8 23:12:57.017769 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 8 23:12:57.017783 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 8 23:12:57.017796 kernel: Freeing SMP alternatives memory: 32K
Feb 8 23:12:57.017808 kernel: pid_max: default: 32768 minimum: 301
Feb 8 23:12:57.017821 kernel: LSM: Security Framework initializing
Feb 8 23:12:57.017834 kernel: SELinux: Initializing.
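The x86/fpu lines above are internally consistent; a two-line check using only values stated in the log:

```python
# Feature bits 0x001|0x002|0x004|0x020|0x040|0x080 from the log OR together to 0xe7,
# and the compacted context size is the last offset plus its size.
assert 0x001 | 0x002 | 0x004 | 0x020 | 0x040 | 0x080 == 0xE7
assert 1408 + 1024 == 2432  # xstate_offset[7] + xstate_sizes[7] == context size
print("xstate lines consistent")
```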
Feb 8 23:12:57.017850 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:12:57.017863 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:12:57.017876 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 8 23:12:57.017889 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 8 23:12:57.017902 kernel: signal: max sigframe size: 3632
Feb 8 23:12:57.017914 kernel: rcu: Hierarchical SRCU implementation.
Feb 8 23:12:57.017927 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 8 23:12:57.017939 kernel: smp: Bringing up secondary CPUs ...
Feb 8 23:12:57.017951 kernel: x86: Booting SMP configuration:
Feb 8 23:12:57.017965 kernel: .... node #0, CPUs: #1
Feb 8 23:12:57.017981 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 8 23:12:57.017996 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 8 23:12:57.018009 kernel: smp: Brought up 1 node, 2 CPUs
Feb 8 23:12:57.018022 kernel: smpboot: Max logical packages: 1
Feb 8 23:12:57.018036 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 8 23:12:57.018048 kernel: devtmpfs: initialized
Feb 8 23:12:57.018061 kernel: x86/mm: Memory block size: 128MB
Feb 8 23:12:57.018074 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 8 23:12:57.018091 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 8 23:12:57.018104 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 8 23:12:57.018118 kernel: pinctrl core: initialized pinctrl subsystem
Feb 8 23:12:57.018130 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 8 23:12:57.018143 kernel: audit: initializing netlink subsys (disabled)
Feb 8 23:12:57.018157 kernel: audit: type=2000 audit(1707433976.023:1): state=initialized audit_enabled=0 res=1
Feb 8 23:12:57.018170 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 8 23:12:57.018184 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 8 23:12:57.018195 kernel: cpuidle: using governor menu
Feb 8 23:12:57.018212 kernel: ACPI: bus type PCI registered
Feb 8 23:12:57.018223 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 8 23:12:57.018235 kernel: dca service started, version 1.12.1
Feb 8 23:12:57.018247 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 8 23:12:57.018260 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 8 23:12:57.018274 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 8 23:12:57.018288 kernel: ACPI: Added _OSI(Module Device)
Feb 8 23:12:57.018302 kernel: ACPI: Added _OSI(Processor Device)
Feb 8 23:12:57.018314 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 8 23:12:57.018330 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 8 23:12:57.018343 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 8 23:12:57.018356 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 8 23:12:57.018369 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 8 23:12:57.018384 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 8 23:12:57.018398 kernel: ACPI: Interpreter enabled
Feb 8 23:12:57.018413 kernel: ACPI: PM: (supports S0 S5)
Feb 8 23:12:57.018426 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 8 23:12:57.018440 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 8 23:12:57.018458 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 8 23:12:57.018491 kernel: iommu: Default domain type: Translated
Feb 8 23:12:57.018509 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 8 23:12:57.018537 kernel: vgaarb: loaded
Feb 8 23:12:57.018552 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 8 23:12:57.018565 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 8 23:12:57.018576 kernel: PTP clock support registered
Feb 8 23:12:57.018587 kernel: Registered efivars operations
Feb 8 23:12:57.018598 kernel: PCI: Using ACPI for IRQ routing
Feb 8 23:12:57.018609 kernel: PCI: System does not support PCI
Feb 8 23:12:57.018624 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 8 23:12:57.018637 kernel: VFS: Disk quotas dquot_6.6.0
Feb 8 23:12:57.018651 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 8 23:12:57.018664 kernel: pnp: PnP ACPI init
Feb 8 23:12:57.018677 kernel: pnp: PnP ACPI: found 3 devices
Feb 8 23:12:57.018691 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 8 23:12:57.018705 kernel: NET: Registered PF_INET protocol family
Feb 8 23:12:57.018718 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 8 23:12:57.018735 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 8 23:12:57.018748 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 8 23:12:57.018762 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 8 23:12:57.018775 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 8 23:12:57.018788 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 8 23:12:57.018801 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:12:57.018814 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:12:57.018827 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 8 23:12:57.018841 kernel: NET: Registered PF_XDP protocol family
Feb 8 23:12:57.018856 kernel: PCI: CLS 0 bytes, default 64
Feb 8 23:12:57.018867 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 8 23:12:57.018879 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 8 23:12:57.018890 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 8 23:12:57.018901 kernel: Initialise system trusted keyrings
Feb 8 23:12:57.018912 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 8 23:12:57.018923 kernel: Key type asymmetric registered
Feb 8 23:12:57.018934 kernel: Asymmetric key parser 'x509' registered
Feb 8 23:12:57.018942 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 8 23:12:57.020629 kernel: io scheduler mq-deadline registered
Feb 8 23:12:57.020645 kernel: io scheduler kyber registered
Feb 8 23:12:57.020658 kernel: io scheduler bfq registered
Feb 8 23:12:57.020671 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 8 23:12:57.020683 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 8 23:12:57.020696 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 8 23:12:57.020708 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 8 23:12:57.020721 kernel: i8042: PNP: No PS/2 controller found.
Feb 8 23:12:57.020869 kernel: rtc_cmos 00:02: registered as rtc0
Feb 8 23:12:57.020976 kernel: rtc_cmos 00:02: setting system clock to 2024-02-08T23:12:56 UTC (1707433976)
Feb 8 23:12:57.021099 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 8 23:12:57.021116 kernel: fail to initialize ptp_kvm
Feb 8 23:12:57.021129 kernel: intel_pstate: CPU model not supported
Feb 8 23:12:57.021144 kernel: efifb: probing for efifb
Feb 8 23:12:57.021156 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 8 23:12:57.021166 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 8 23:12:57.021178 kernel: efifb: scrolling: redraw
Feb 8 23:12:57.021194 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 8 23:12:57.021206 kernel: Console: switching to colour frame buffer device 128x48
Feb 8 23:12:57.021218 kernel: fb0: EFI VGA frame buffer device
Feb 8 23:12:57.021230 kernel: pstore: Registered efi as persistent store backend
Feb 8 23:12:57.021243 kernel: NET: Registered PF_INET6 protocol family
Feb 8 23:12:57.021255 kernel: Segment Routing with IPv6
Feb 8 23:12:57.021267 kernel: In-situ OAM (IOAM) with IPv6
Feb 8 23:12:57.021280 kernel: NET: Registered PF_PACKET protocol family
Feb 8 23:12:57.021293 kernel: Key type dns_resolver registered
Feb 8 23:12:57.021308 kernel: IPI shorthand broadcast: enabled
Feb 8 23:12:57.021322 kernel: sched_clock: Marking stable (699040100, 21210800)->(895993400, -175742500)
Feb 8 23:12:57.021336 kernel: registered taskstats version 1
Feb 8 23:12:57.021350 kernel: Loading compiled-in X.509 certificates
Feb 8 23:12:57.021364 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 8 23:12:57.021377 kernel: Key type .fscrypt registered
Feb 8 23:12:57.021391 kernel: Key type fscrypt-provisioning registered
Feb 8 23:12:57.021404 kernel: pstore: Using crash dump compression: deflate
Feb 8 23:12:57.021418 kernel: ima: No TPM chip found, activating TPM-bypass!
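The rtc_cmos line pairs a human-readable time with a Unix epoch value, and the audit records in this log (e.g. audit(1707433976.023:1) above) use the same clock; a quick conversion confirms they agree:

```python
from datetime import datetime, timezone

# 1707433976 is the epoch value from "setting system clock to
# 2024-02-08T23:12:56 UTC (1707433976)" and from the audit(1707433976.023:1) record.
print(datetime.fromtimestamp(1707433976, tz=timezone.utc).isoformat())
# -> 2024-02-08T23:12:56+00:00
```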
Feb 8 23:12:57.021430 kernel: ima: Allocated hash algorithm: sha1
Feb 8 23:12:57.021442 kernel: ima: No architecture policies found
Feb 8 23:12:57.021455 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 8 23:12:57.021468 kernel: Write protecting the kernel read-only data: 28672k
Feb 8 23:12:57.021492 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 8 23:12:57.021505 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 8 23:12:57.021518 kernel: Run /init as init process
Feb 8 23:12:57.021532 kernel: with arguments:
Feb 8 23:12:57.021546 kernel: /init
Feb 8 23:12:57.021562 kernel: with environment:
Feb 8 23:12:57.021576 kernel: HOME=/
Feb 8 23:12:57.021589 kernel: TERM=linux
Feb 8 23:12:57.021603 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 8 23:12:57.021620 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:12:57.021638 systemd[1]: Detected virtualization microsoft.
Feb 8 23:12:57.021653 systemd[1]: Detected architecture x86-64.
Feb 8 23:12:57.021671 systemd[1]: Running in initrd.
Feb 8 23:12:57.021684 systemd[1]: No hostname configured, using default hostname.
Feb 8 23:12:57.021697 systemd[1]: Hostname set to <localhost>.
Feb 8 23:12:57.021713 systemd[1]: Initializing machine ID from random generator.
Feb 8 23:12:57.021727 systemd[1]: Queued start job for default target initrd.target.
Feb 8 23:12:57.021740 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:12:57.021752 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:12:57.021766 systemd[1]: Reached target paths.target.
Feb 8 23:12:57.021779 systemd[1]: Reached target slices.target.
Feb 8 23:12:57.021794 systemd[1]: Reached target swap.target.
Feb 8 23:12:57.021807 systemd[1]: Reached target timers.target.
Feb 8 23:12:57.021819 systemd[1]: Listening on iscsid.socket.
Feb 8 23:12:57.021831 systemd[1]: Listening on iscsiuio.socket.
Feb 8 23:12:57.021844 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:12:57.021857 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:12:57.021871 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:12:57.021888 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:12:57.021902 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:12:57.021916 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:12:57.021930 systemd[1]: Reached target sockets.target.
Feb 8 23:12:57.021944 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:12:57.021958 systemd[1]: Finished network-cleanup.service.
Feb 8 23:12:57.021972 systemd[1]: Starting systemd-fsck-usr.service...
Feb 8 23:12:57.021986 systemd[1]: Starting systemd-journald.service...
Feb 8 23:12:57.022000 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:12:57.022017 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:12:57.022031 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 8 23:12:57.022048 systemd-journald[183]: Journal started
Feb 8 23:12:57.022111 systemd-journald[183]: Runtime Journal (/run/log/journal/17c9d342405c4e99a32acc921d8bd01e) is 8.0M, max 159.0M, 151.0M free.
Feb 8 23:12:57.019774 systemd-modules-load[184]: Inserted module 'overlay'
Feb 8 23:12:57.037591 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:12:57.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.056469 kernel: audit: type=1130 audit(1707433977.042:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.056505 systemd[1]: Started systemd-journald.service.
Feb 8 23:12:57.068505 systemd-resolved[185]: Positive Trust Anchors:
Feb 8 23:12:57.070862 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:12:57.075932 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:12:57.109939 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 8 23:12:57.120139 kernel: audit: type=1130 audit(1707433977.084:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.120164 kernel: Bridge firewalling registered
Feb 8 23:12:57.120174 kernel: audit: type=1130 audit(1707433977.109:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.084000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.084527 systemd[1]: Finished systemd-fsck-usr.service.
Feb 8 23:12:57.106702 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 8 23:12:57.106896 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 8 23:12:57.110074 systemd[1]: Started systemd-resolved.service.
Feb 8 23:12:57.122535 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 8 23:12:57.126370 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:12:57.166364 kernel: audit: type=1130 audit(1707433977.121:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.166395 kernel: audit: type=1130 audit(1707433977.125:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.121000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.125000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.132243 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 8 23:12:57.168294 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:12:57.175871 kernel: SCSI subsystem initialized
Feb 8 23:12:57.180839 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 8 23:12:57.185584 systemd[1]: Starting dracut-cmdline.service...
Feb 8 23:12:57.184000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.205867 kernel: audit: type=1130 audit(1707433977.184:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.204925 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:12:57.216838 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 8 23:12:57.216881 kernel: audit: type=1130 audit(1707433977.206:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.227562 kernel: device-mapper: uevent: version 1.0.3
Feb 8 23:12:57.229385 dracut-cmdline[200]: dracut-dracut-053
Feb 8 23:12:57.236623 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 8 23:12:57.237467 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:12:57.268052 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 8 23:12:57.270000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.287491 kernel: audit: type=1130 audit(1707433977.270:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.268920 systemd[1]: Finished systemd-modules-load.service.
Feb 8 23:12:57.272230 systemd[1]: Starting systemd-sysctl.service...
Feb 8 23:12:57.299287 systemd[1]: Finished systemd-sysctl.service.
Feb 8 23:12:57.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.314555 kernel: audit: type=1130 audit(1707433977.300:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.340495 kernel: Loading iSCSI transport class v2.0-870.
Feb 8 23:12:57.353496 kernel: iscsi: registered transport (tcp)
Feb 8 23:12:57.377492 kernel: iscsi: registered transport (qla4xxx)
Feb 8 23:12:57.377556 kernel: QLogic iSCSI HBA Driver
Feb 8 23:12:57.406895 systemd[1]: Finished dracut-cmdline.service.
Feb 8 23:12:57.410000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.411302 systemd[1]: Starting dracut-pre-udev.service...
Feb 8 23:12:57.461499 kernel: raid6: avx512x4 gen() 18442 MB/s
Feb 8 23:12:57.480491 kernel: raid6: avx512x4 xor() 8593 MB/s
Feb 8 23:12:57.499487 kernel: raid6: avx512x2 gen() 18429 MB/s
Feb 8 23:12:57.519492 kernel: raid6: avx512x2 xor() 29970 MB/s
Feb 8 23:12:57.539486 kernel: raid6: avx512x1 gen() 18483 MB/s
Feb 8 23:12:57.558492 kernel: raid6: avx512x1 xor() 27009 MB/s
Feb 8 23:12:57.578489 kernel: raid6: avx2x4 gen() 18300 MB/s
Feb 8 23:12:57.599488 kernel: raid6: avx2x4 xor() 7590 MB/s
Feb 8 23:12:57.618486 kernel: raid6: avx2x2 gen() 18269 MB/s
Feb 8 23:12:57.638492 kernel: raid6: avx2x2 xor() 22338 MB/s
Feb 8 23:12:57.657486 kernel: raid6: avx2x1 gen() 14081 MB/s
Feb 8 23:12:57.676486 kernel: raid6: avx2x1 xor() 19550 MB/s
Feb 8 23:12:57.696488 kernel: raid6: sse2x4 gen() 11758 MB/s
Feb 8 23:12:57.715486 kernel: raid6: sse2x4 xor() 7289 MB/s
Feb 8 23:12:57.734485 kernel: raid6: sse2x2 gen() 12890 MB/s
Feb 8 23:12:57.754486 kernel: raid6: sse2x2 xor() 7549 MB/s
Feb 8 23:12:57.774487 kernel: raid6: sse2x1 gen() 11676 MB/s
Feb 8 23:12:57.796999 kernel: raid6: sse2x1 xor() 5947 MB/s
Feb 8 23:12:57.797017 kernel: raid6: using algorithm avx512x1 gen() 18483 MB/s
Feb 8 23:12:57.797029 kernel: raid6: .... xor() 27009 MB/s, rmw enabled
Feb 8 23:12:57.800688 kernel: raid6: using avx512x2 recovery algorithm
Feb 8 23:12:57.819497 kernel: xor: automatically using best checksumming function avx
Feb 8 23:12:57.915501 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 8 23:12:57.923414 systemd[1]: Finished dracut-pre-udev.service.
Feb 8 23:12:57.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.926000 audit: BPF prog-id=7 op=LOAD
Feb 8 23:12:57.926000 audit: BPF prog-id=8 op=LOAD
Feb 8 23:12:57.927919 systemd[1]: Starting systemd-udevd.service...
Feb 8 23:12:57.942689 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Feb 8 23:12:57.949316 systemd[1]: Started systemd-udevd.service.
Feb 8 23:12:57.949000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:57.951389 systemd[1]: Starting dracut-pre-trigger.service...
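The raid6 lines above benchmark each gen() implementation and the kernel keeps the fastest one; reproducing the selection from the measured throughputs:

```python
# gen() throughputs (MB/s) as measured in the raid6 benchmark lines above.
gen_mbps = {
    "avx512x4": 18442, "avx512x2": 18429, "avx512x1": 18483,
    "avx2x4": 18300, "avx2x2": 18269, "avx2x1": 14081,
    "sse2x4": 11758, "sse2x2": 12890, "sse2x1": 11676,
}
print(max(gen_mbps, key=gen_mbps.get))  # -> avx512x1, matching "using algorithm avx512x1"
```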
Feb 8 23:12:57.969517 dracut-pre-trigger[385]: rd.md=0: removing MD RAID activation
Feb 8 23:12:57.998485 systemd[1]: Finished dracut-pre-trigger.service.
Feb 8 23:12:58.001137 systemd[1]: Starting systemd-udev-trigger.service...
Feb 8 23:12:57.999000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:58.038594 systemd[1]: Finished systemd-udev-trigger.service.
Feb 8 23:12:58.042000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:12:58.082495 kernel: cryptd: max_cpu_qlen set to 1000
Feb 8 23:12:58.102500 kernel: hv_vmbus: Vmbus version:5.2
Feb 8 23:12:58.113499 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 8 23:12:58.129491 kernel: hv_vmbus: registering driver hv_storvsc
Feb 8 23:12:58.131782 kernel: scsi host0: storvsc_host_t
Feb 8 23:12:58.131985 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 8 23:12:58.132013 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 8 23:12:58.137494 kernel: scsi host1: storvsc_host_t
Feb 8 23:12:58.152492 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 8 23:12:58.152729 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 8 23:12:58.152883 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 8 23:12:58.153031 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 8 23:12:58.153485 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 8 23:12:58.156498 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 8 23:12:58.156549 kernel: hv_vmbus: registering driver hv_netvsc
Feb 8 23:12:58.158489 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 8 23:12:58.160494 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 8 23:12:58.161487 kernel: AVX2 version of gcm_enc/dec engaged.
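The two size figures in the storvsc disk line are just unit conversions of the logical block count:

```python
# "sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)"
blocks, block_size = 63737856, 512
size = blocks * block_size
print(round(size / 10**9, 1), "GB")   # 32.6 GB (decimal)
print(round(size / 2**30, 1), "GiB")  # 30.4 GiB (binary)
```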
Feb 8 23:12:58.161517 kernel: AES CTR mode by8 optimization enabled
Feb 8 23:12:58.177484 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 8 23:12:58.177665 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 8 23:12:58.199491 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 8 23:12:58.199541 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 8 23:12:58.229504 kernel: hv_vmbus: registering driver hid_hyperv
Feb 8 23:12:58.249870 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 8 23:12:58.249934 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 8 23:12:58.367901 kernel: hv_netvsc 000d3ab9-af5b-000d-3ab9-af5b000d3ab9 eth0: VF slot 1 added
Feb 8 23:12:58.376495 kernel: hv_vmbus: registering driver hv_pci
Feb 8 23:12:58.381491 kernel: hv_pci 2013036c-9020-4504-9e02-31771484fffb: PCI VMBus probing: Using version 0x10004
Feb 8 23:12:58.393877 kernel: hv_pci 2013036c-9020-4504-9e02-31771484fffb: PCI host bridge to bus 9020:00
Feb 8 23:12:58.394074 kernel: pci_bus 9020:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 8 23:12:58.394222 kernel: pci_bus 9020:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 8 23:12:58.403669 kernel: pci 9020:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 8 23:12:58.413514 kernel: pci 9020:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 8 23:12:58.438562 kernel: pci 9020:00:02.0: enabling Extended Tags
Feb 8 23:12:58.451571 kernel: pci 9020:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 9020:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 8 23:12:58.459762 kernel: pci_bus 9020:00: busn_res: [bus 00-ff] end is updated to 00
Feb 8 23:12:58.459975 kernel: pci 9020:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 8 23:12:58.552500 kernel: mlx5_core 9020:00:02.0: firmware version: 14.30.1350
Feb 8 23:12:58.662910 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 8 23:12:58.710496 kernel: mlx5_core 9020:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 8 23:12:58.718501 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (440)
Feb 8 23:12:58.731406 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 8 23:12:58.873439 kernel: mlx5_core 9020:00:02.0: Supported tc offload range - chains: 1, prios: 1
Feb 8 23:12:58.873691 kernel: mlx5_core 9020:00:02.0: mlx5e_tc_post_act_init:40:(pid 357): firmware level support is missing
Feb 8 23:12:58.876187 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 8 23:12:58.891616 kernel: hv_netvsc 000d3ab9-af5b-000d-3ab9-af5b000d3ab9 eth0: VF registering: eth1
Feb 8 23:12:58.891813 kernel: mlx5_core 9020:00:02.0 eth1: joined to eth0
Feb 8 23:12:58.903498 kernel: mlx5_core 9020:00:02.0 enP36896s1: renamed from eth1
Feb 8 23:12:58.911643 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 8 23:12:58.916796 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 8 23:12:58.928805 systemd[1]: Starting disk-uuid.service...
Feb 8 23:12:59.948146 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 8 23:12:59.948214 disk-uuid[550]: The operation has completed successfully.
Feb 8 23:13:00.020178 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 8 23:13:00.020280 systemd[1]: Finished disk-uuid.service.
Feb 8 23:13:00.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:00.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:00.027891 systemd[1]: Starting verity-setup.service...
Feb 8 23:13:00.066504 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 8 23:13:00.423547 systemd[1]: Found device dev-mapper-usr.device.
Feb 8 23:13:00.429908 systemd[1]: Finished verity-setup.service.
Feb 8 23:13:00.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:00.434371 systemd[1]: Mounting sysusr-usr.mount...
Feb 8 23:13:00.506504 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 8 23:13:00.506989 systemd[1]: Mounted sysusr-usr.mount.
Feb 8 23:13:00.509330 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 8 23:13:00.510129 systemd[1]: Starting ignition-setup.service...
Feb 8 23:13:00.549727 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 8 23:13:00.549761 kernel: BTRFS info (device sda6): using free space tree
Feb 8 23:13:00.549780 kernel: BTRFS info (device sda6): has skinny extents
Feb 8 23:13:00.521385 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 8 23:13:00.592705 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 8 23:13:00.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:00.598000 audit: BPF prog-id=9 op=LOAD
Feb 8 23:13:00.599503 systemd[1]: Starting systemd-networkd.service...
Feb 8 23:13:00.622155 systemd-networkd[817]: lo: Link UP
Feb 8 23:13:00.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:00.622165 systemd-networkd[817]: lo: Gained carrier
Feb 8 23:13:00.623098 systemd-networkd[817]: Enumeration completed
Feb 8 23:13:00.623183 systemd[1]: Started systemd-networkd.service.
Feb 8 23:13:00.626637 systemd[1]: Reached target network.target.
Feb 8 23:13:00.628069 systemd-networkd[817]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 8 23:13:00.642000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:00.632066 systemd[1]: Starting iscsiuio.service...
Feb 8 23:13:00.639783 systemd[1]: Started iscsiuio.service.
Feb 8 23:13:00.646252 systemd[1]: Starting iscsid.service...
Feb 8 23:13:00.649666 iscsid[825]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 8 23:13:00.652705 iscsid[825]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 8 23:13:00.652705 iscsid[825]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 8 23:13:00.652705 iscsid[825]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 8 23:13:00.652705 iscsid[825]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 8 23:13:00.652705 iscsid[825]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 8 23:13:00.666446 systemd[1]: Started iscsid.service.
Feb 8 23:13:00.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:00.678202 systemd[1]: Starting dracut-initqueue.service...
Feb 8 23:13:00.690334 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 8 23:13:00.694111 systemd[1]: Finished dracut-initqueue.service.
Feb 8 23:13:00.695000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:00.696096 systemd[1]: Reached target remote-fs-pre.target.
Feb 8 23:13:00.710164 kernel: mlx5_core 9020:00:02.0 enP36896s1: Link up
Feb 8 23:13:00.702718 systemd[1]: Reached target remote-cryptsetup.target.
Feb 8 23:13:00.710151 systemd[1]: Reached target remote-fs.target.
Feb 8 23:13:00.716267 systemd[1]: Starting dracut-pre-mount.service...
Feb 8 23:13:00.724438 systemd[1]: Finished dracut-pre-mount.service.
Feb 8 23:13:00.727000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:00.792518 kernel: hv_netvsc 000d3ab9-af5b-000d-3ab9-af5b000d3ab9 eth0: Data path switched to VF: enP36896s1
Feb 8 23:13:00.792846 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 8 23:13:00.794000 systemd-networkd[817]: enP36896s1: Link UP
Feb 8 23:13:00.794133 systemd-networkd[817]: eth0: Link UP
Feb 8 23:13:00.794331 systemd-networkd[817]: eth0: Gained carrier
Feb 8 23:13:00.798671 systemd-networkd[817]: enP36896s1: Gained carrier
Feb 8 23:13:00.829574 systemd-networkd[817]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 8 23:13:00.894907 systemd[1]: Finished ignition-setup.service.
Feb 8 23:13:00.896000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:00.897907 systemd[1]: Starting ignition-fetch-offline.service...
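The DHCPv4 lease above (10.200.8.40/24 via gateway 10.200.8.1, handed out by the Azure platform address 168.63.129.16) can be sanity-checked with the standard library:

```python
import ipaddress

# Lease from the log: 10.200.8.40/24, gateway 10.200.8.1.
iface = ipaddress.ip_interface("10.200.8.40/24")
assert ipaddress.ip_address("10.200.8.1") in iface.network
print(iface.network)  # 10.200.8.0/24 -- gateway and address share the subnet
```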
Feb 8 23:13:02.257752 systemd-networkd[817]: eth0: Gained IPv6LL
Feb 8 23:13:04.431046 ignition[844]: Ignition 2.14.0
Feb 8 23:13:04.431065 ignition[844]: Stage: fetch-offline
Feb 8 23:13:04.431151 ignition[844]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:13:04.431220 ignition[844]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:13:04.571508 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:13:04.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:04.573025 systemd[1]: Finished ignition-fetch-offline.service.
Feb 8 23:13:04.595344 kernel: kauditd_printk_skb: 18 callbacks suppressed
Feb 8 23:13:04.595375 kernel: audit: type=1130 audit(1707433984.576:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:04.571688 ignition[844]: parsed url from cmdline: ""
Feb 8 23:13:04.578567 systemd[1]: Starting ignition-fetch.service...
Feb 8 23:13:04.571692 ignition[844]: no config URL provided
Feb 8 23:13:04.571697 ignition[844]: reading system config file "/usr/lib/ignition/user.ign"
Feb 8 23:13:04.571706 ignition[844]: no config at "/usr/lib/ignition/user.ign"
Feb 8 23:13:04.571711 ignition[844]: failed to fetch config: resource requires networking
Feb 8 23:13:04.571949 ignition[844]: Ignition finished successfully
Feb 8 23:13:04.586924 ignition[850]: Ignition 2.14.0
Feb 8 23:13:04.586932 ignition[850]: Stage: fetch
Feb 8 23:13:04.587032 ignition[850]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:13:04.587057 ignition[850]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:13:04.590380 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:13:04.593063 ignition[850]: parsed url from cmdline: ""
Feb 8 23:13:04.593383 ignition[850]: no config URL provided
Feb 8 23:13:04.593390 ignition[850]: reading system config file "/usr/lib/ignition/user.ign"
Feb 8 23:13:04.593404 ignition[850]: no config at "/usr/lib/ignition/user.ign"
Feb 8 23:13:04.593435 ignition[850]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 8 23:13:04.698739 ignition[850]: GET result: OK
Feb 8 23:13:04.698866 ignition[850]: config has been read from IMDS userdata
Feb 8 23:13:04.698921 ignition[850]: parsing config with SHA512: f872c9567ca67eb6b28a2f7f2ee5f2696c6507354e5bdb7d7bbf8d64653b1c0774b62fad5af0171635ecbf72807c597a33635bb3f222168c3b1222c616634197
Feb 8 23:13:04.731844 unknown[850]: fetched base config from "system"
Feb 8 23:13:04.731857 unknown[850]: fetched base config from "system"
Feb 8 23:13:04.731865 unknown[850]: fetched user config from "azure"
Feb 8 23:13:04.737637 ignition[850]: fetch: fetch complete
Feb 8 23:13:04.737647 ignition[850]: fetch: fetch passed
Feb 8 23:13:04.737705 ignition[850]: Ignition finished successfully
Feb 8 23:13:04.742486 systemd[1]: Finished ignition-fetch.service.
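The fetch stage above reads userdata from the Azure Instance Metadata Service. A minimal sketch of an equivalent request, assuming the standard IMDS contract (link-local endpoint, a `Metadata: true` header, and base64-encoded userData); only the URL itself comes from the log:

```python
import base64
import urllib.request

# URL exactly as logged by Ignition's GET; the header and the base64 decode
# are assumptions from the Azure IMDS contract, not shown in this log.
url = ("http://169.254.169.254/metadata/instance/compute/userData"
       "?api-version=2021-01-01&format=text")
req = urllib.request.Request(url, headers={"Metadata": "true"})
with urllib.request.urlopen(req) as resp:
    user_data = base64.b64decode(resp.read())
```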
Feb 8 23:13:04.745000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:04.746849 systemd[1]: Starting ignition-kargs.service...
Feb 8 23:13:04.760190 kernel: audit: type=1130 audit(1707433984.745:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:04.766446 ignition[856]: Ignition 2.14.0
Feb 8 23:13:04.766457 ignition[856]: Stage: kargs
Feb 8 23:13:04.766616 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:13:04.766648 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:13:04.774862 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:13:04.779395 ignition[856]: kargs: kargs passed
Feb 8 23:13:04.779451 ignition[856]: Ignition finished successfully
Feb 8 23:13:04.783265 systemd[1]: Finished ignition-kargs.service.
Feb 8 23:13:04.786000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:04.787723 systemd[1]: Starting ignition-disks.service...
Feb 8 23:13:04.801738 kernel: audit: type=1130 audit(1707433984.786:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:04.803203 ignition[862]: Ignition 2.14.0
Feb 8 23:13:04.803212 ignition[862]: Stage: disks
Feb 8 23:13:04.803339 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:13:04.803373 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:13:04.808524 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:13:04.810051 ignition[862]: disks: disks passed
Feb 8 23:13:04.810091 ignition[862]: Ignition finished successfully
Feb 8 23:13:04.815122 systemd[1]: Finished ignition-disks.service.
Feb 8 23:13:04.840504 kernel: audit: type=1130 audit(1707433984.816:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:04.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:04.817029 systemd[1]: Reached target initrd-root-device.target.
Feb 8 23:13:04.829223 systemd[1]: Reached target local-fs-pre.target.
Feb 8 23:13:04.829602 systemd[1]: Reached target local-fs.target.
Feb 8 23:13:04.829992 systemd[1]: Reached target sysinit.target.
Feb 8 23:13:04.830376 systemd[1]: Reached target basic.target.
Feb 8 23:13:04.831715 systemd[1]: Starting systemd-fsck-root.service...
Feb 8 23:13:04.910388 systemd-fsck[870]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks
Feb 8 23:13:04.914993 systemd[1]: Finished systemd-fsck-root.service.
Feb 8 23:13:04.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:04.919124 systemd[1]: Mounting sysroot.mount...
Feb 8 23:13:04.933799 kernel: audit: type=1130 audit(1707433984.917:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:04.941497 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 8 23:13:04.942299 systemd[1]: Mounted sysroot.mount.
Feb 8 23:13:04.945871 systemd[1]: Reached target initrd-root-fs.target.
Feb 8 23:13:04.979068 systemd[1]: Mounting sysroot-usr.mount...
Feb 8 23:13:04.985212 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 8 23:13:04.990178 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 8 23:13:04.990222 systemd[1]: Reached target ignition-diskful.target.
Feb 8 23:13:04.998669 systemd[1]: Mounted sysroot-usr.mount.
Feb 8 23:13:05.056337 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 8 23:13:05.062184 systemd[1]: Starting initrd-setup-root.service...
Feb 8 23:13:05.069872 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (881)
Feb 8 23:13:05.081321 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 8 23:13:05.081383 kernel: BTRFS info (device sda6): using free space tree
Feb 8 23:13:05.081403 kernel: BTRFS info (device sda6): has skinny extents
Feb 8 23:13:05.082445 initrd-setup-root[886]: cut: /sysroot/etc/passwd: No such file or directory
Feb 8 23:13:05.089932 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 8 23:13:05.107330 initrd-setup-root[912]: cut: /sysroot/etc/group: No such file or directory
Feb 8 23:13:05.111895 initrd-setup-root[920]: cut: /sysroot/etc/shadow: No such file or directory
Feb 8 23:13:05.133728 initrd-setup-root[928]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 8 23:13:05.635033 systemd[1]: Finished initrd-setup-root.service.
Feb 8 23:13:05.637000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:05.638456 systemd[1]: Starting ignition-mount.service...
Feb 8 23:13:05.655673 kernel: audit: type=1130 audit(1707433985.637:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:05.656140 systemd[1]: Starting sysroot-boot.service...
Feb 8 23:13:05.660341 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 8 23:13:05.662719 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
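The systemd-fsck summary above ("ROOT: clean, 602/7326000 files, 481070/7359488 blocks") reports inode and block usage of the root filesystem; the fractions it implies work out as follows:

    # Usage implied by the fsck summary above.
    files_used, files_total = 602, 7_326_000
    blocks_used, blocks_total = 481_070, 7_359_488

    print(f"inodes: {100 * files_used / files_total:.4f}% used")    # ~0.0082%
    print(f"blocks: {100 * blocks_used / blocks_total:.2f}% used")  # ~6.54%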
Feb 8 23:13:05.677950 ignition[946]: INFO : Ignition 2.14.0
Feb 8 23:13:05.679958 ignition[946]: INFO : Stage: mount
Feb 8 23:13:05.679958 ignition[946]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:13:05.679958 ignition[946]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:13:05.689285 ignition[946]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:13:05.691871 ignition[946]: INFO : mount: mount passed
Feb 8 23:13:05.691871 ignition[946]: INFO : Ignition finished successfully
Feb 8 23:13:05.693186 systemd[1]: Finished ignition-mount.service.
Feb 8 23:13:05.712706 kernel: audit: type=1130 audit(1707433985.697:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:05.697000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:05.721138 systemd[1]: Finished sysroot-boot.service.
Feb 8 23:13:05.724000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:05.736502 kernel: audit: type=1130 audit(1707433985.724:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:06.684692 coreos-metadata[880]: Feb 08 23:13:06.684 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 8 23:13:06.702179 coreos-metadata[880]: Feb 08 23:13:06.702 INFO Fetch successful
Feb 8 23:13:06.736889 coreos-metadata[880]: Feb 08 23:13:06.736 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 8 23:13:06.749913 coreos-metadata[880]: Feb 08 23:13:06.749 INFO Fetch successful
Feb 8 23:13:06.767227 coreos-metadata[880]: Feb 08 23:13:06.767 INFO wrote hostname ci-3510.3.2-a-56a09d6613 to /sysroot/etc/hostname
Feb 8 23:13:06.772448 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 8 23:13:06.775383 systemd[1]: Starting ignition-files.service...
Feb 8 23:13:06.774000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:06.791489 kernel: audit: type=1130 audit(1707433986.774:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:13:06.794082 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 8 23:13:06.805492 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (960)
Feb 8 23:13:06.805526 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 8 23:13:06.812881 kernel: BTRFS info (device sda6): using free space tree
Feb 8 23:13:06.812906 kernel: BTRFS info (device sda6): has skinny extents
Feb 8 23:13:06.821739 systemd[1]: Mounted sysroot-usr-share-oem.mount.
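The flatcar-metadata-hostname.service entries above fetch the instance's compute name from IMDS and write it into /sysroot/etc/hostname. A rough Python equivalent of what the log shows (URL and target path taken from the entries; the "Metadata: true" header is the standard IMDS requirement and an assumption here, since coreos-metadata does not log its headers):

    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/name"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    name = urllib.request.urlopen(req, timeout=5).read().decode().strip()

    # The initrd writes into the not-yet-pivoted root at /sysroot, as logged above.
    with open("/sysroot/etc/hostname", "w") as f:
        f.write(name + "\n")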
Feb 8 23:13:06.835047 ignition[979]: INFO : Ignition 2.14.0
Feb 8 23:13:06.835047 ignition[979]: INFO : Stage: files
Feb 8 23:13:06.839012 ignition[979]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:13:06.839012 ignition[979]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:13:06.846897 ignition[979]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:13:06.860244 ignition[979]: DEBUG : files: compiled without relabeling support, skipping
Feb 8 23:13:06.863088 ignition[979]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 8 23:13:06.863088 ignition[979]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 8 23:13:06.947037 ignition[979]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 8 23:13:06.950973 ignition[979]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 8 23:13:06.977861 unknown[979]: wrote ssh authorized keys file for user: core
Feb 8 23:13:06.980494 ignition[979]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 8 23:13:07.002737 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 8 23:13:07.007561 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 8 23:13:12.645513 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 8 23:13:12.780361 ignition[979]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 8 23:13:12.787992 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 8 23:13:12.787992 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 8 23:13:12.787992 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Feb 8 23:13:12.877894 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 8 23:13:12.981628 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Feb 8 23:13:12.981628 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 8 23:13:12.991663 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 8 23:13:13.542566 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 8 23:13:13.737143 ignition[979]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 8 23:13:13.748494 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 8 23:13:13.748494 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl"
Feb 8 23:13:13.748494 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1
Feb 8 23:13:13.950407 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 8 23:13:14.185411 ignition[979]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83
Feb 8 23:13:14.193513 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl"
Feb 8 23:13:14.193513 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:13:14.193513 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1
Feb 8 23:13:14.991202 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 8 23:13:38.901596 ignition[979]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836
Feb 8 23:13:38.908820 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:13:38.908820 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:13:38.908820 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1
Feb 8 23:13:39.468094 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK
Feb 8 23:14:28.278264 ignition[979]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560
Feb 8 23:14:28.278264 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:14:28.293754 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:14:28.293754 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:14:28.293754 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 8 23:14:28.293754 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1
Feb 8 23:14:28.904402 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Feb 8 23:14:29.393916 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz"
Feb 8 23:14:29.398363 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh"
Feb 8 23:14:29.398363 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh"
Feb 8 23:14:29.398363 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml"
Feb 8 23:14:29.398363 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml"
Feb 8 23:14:29.398363 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 8 23:14:29.419550 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Feb 8 23:14:29.419550 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 8 23:14:29.419550 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Feb 8 23:14:29.419550 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:14:29.419550 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:14:29.438879 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 8 23:14:29.438879 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem config not found in "/usr/share/oem", looking on oem partition
Feb 8 23:14:29.452192 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (982)
Feb 8 23:14:29.452222 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2588133218"
Feb 8 23:14:29.452222 ignition[979]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2588133218": device or resource busy
Feb 8 23:14:29.452222 ignition[979]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2588133218", trying btrfs: device or resource busy
Feb 8 23:14:29.452222 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2588133218"
Feb 8 23:14:29.471917 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2588133218"
Feb 8 23:14:29.471917 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem2588133218"
Feb 8 23:14:29.480215 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem2588133218"
Feb 8 23:14:29.480215 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 8 23:14:29.480215 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 8 23:14:29.480215 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition
Feb 8 23:14:29.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.472990 systemd[1]: mnt-oem2588133218.mount: Deactivated successfully.
Feb 8 23:14:29.514253 kernel: audit: type=1130 audit(1707434069.494:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.514286 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2074670996"
Feb 8 23:14:29.514286 ignition[979]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2074670996": device or resource busy
Feb 8 23:14:29.514286 ignition[979]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2074670996", trying btrfs: device or resource busy
Feb 8 23:14:29.514286 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2074670996"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2074670996"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem2074670996"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem2074670996"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: op(18): [started] processing unit "nvidia.service"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: op(18): [finished] processing unit "nvidia.service"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: op(19): [started] processing unit "waagent.service"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: op(19): [finished] processing unit "waagent.service"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: op(1a): [started] processing unit "prepare-helm.service"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: op(1a): [finished] processing unit "prepare-helm.service"
Feb 8 23:14:29.514286 ignition[979]: INFO : files: op(1c): [started] processing unit "prepare-cni-plugins.service"
Feb 8 23:14:29.489260 systemd[1]: mnt-oem2074670996.mount: Deactivated successfully.
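The "file matches expected sum of:" entries above are Ignition verifying each downloaded artifact against the SHA512 digest given in its config. A minimal stand-in for that check, reusing the kubectl digest logged above:

    import hashlib

    def sha512_matches(path: str, expected_hex: str) -> bool:
        # Stream the file in 1 MiB chunks and compare the hex digest.
        h = hashlib.sha512()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest() == expected_hex

    expected = ("857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c90"
                "8caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83")
    print(sha512_matches("/sysroot/opt/bin/kubectl", expected))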
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(1c): [finished] processing unit "prepare-cni-plugins.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(1e): [started] processing unit "prepare-critools.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(1e): [finished] processing unit "prepare-critools.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(20): [started] setting preset to enabled for "prepare-helm.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(20): [finished] setting preset to enabled for "prepare-helm.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(21): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(21): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(22): [started] setting preset to enabled for "prepare-critools.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-critools.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(23): [started] setting preset to enabled for "nvidia.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(23): [finished] setting preset to enabled for "nvidia.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(24): [started] setting preset to enabled for "waagent.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: op(24): [finished] setting preset to enabled for "waagent.service"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:14:29.579809 ignition[979]: INFO : files: files passed
Feb 8 23:14:29.579809 ignition[979]: INFO : Ignition finished successfully
Feb 8 23:14:29.692762 kernel: audit: type=1130 audit(1707434069.609:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.692801 kernel: audit: type=1131 audit(1707434069.609:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.692818 kernel: audit: type=1130 audit(1707434069.634:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.609000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.492256 systemd[1]: Finished ignition-files.service.
Feb 8 23:14:29.695033 initrd-setup-root-after-ignition[1002]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 8 23:14:29.497749 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 8 23:14:29.514231 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 8 23:14:29.599556 systemd[1]: Starting ignition-quench.service...
Feb 8 23:14:29.603659 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 8 23:14:29.603766 systemd[1]: Finished ignition-quench.service.
Feb 8 23:14:29.609746 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 8 23:14:29.716000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.635137 systemd[1]: Reached target ignition-complete.target.
Feb 8 23:14:29.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.694682 systemd[1]: Starting initrd-parse-etc.service...
Feb 8 23:14:29.742177 kernel: audit: type=1130 audit(1707434069.716:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.743752 kernel: audit: type=1131 audit(1707434069.716:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.713285 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 8 23:14:29.713380 systemd[1]: Finished initrd-parse-etc.service.
Feb 8 23:14:29.717100 systemd[1]: Reached target initrd-fs.target.
Feb 8 23:14:29.742156 systemd[1]: Reached target initrd.target.
Feb 8 23:14:29.743810 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 8 23:14:29.744613 systemd[1]: Starting dracut-pre-pivot.service...
Feb 8 23:14:29.758229 systemd[1]: Finished dracut-pre-pivot.service.
Feb 8 23:14:29.762000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.763654 systemd[1]: Starting initrd-cleanup.service...
Feb 8 23:14:29.776287 kernel: audit: type=1130 audit(1707434069.762:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.782421 systemd[1]: Stopped target nss-lookup.target.
Feb 8 23:14:29.836045 kernel: audit: type=1131 audit(1707434069.783:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.836079 kernel: audit: type=1131 audit(1707434069.799:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.783000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.799000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.783367 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 8 23:14:29.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.783732 systemd[1]: Stopped target timers.target.
Feb 8 23:14:29.849614 kernel: audit: type=1131 audit(1707434069.800:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.800000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.801000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.784069 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 8 23:14:29.784195 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 8 23:14:29.795220 systemd[1]: Stopped target initrd.target.
Feb 8 23:14:29.795843 systemd[1]: Stopped target basic.target.
Feb 8 23:14:29.796199 systemd[1]: Stopped target ignition-complete.target.
Feb 8 23:14:29.796578 systemd[1]: Stopped target ignition-diskful.target.
Feb 8 23:14:29.797005 systemd[1]: Stopped target initrd-root-device.target.
Feb 8 23:14:29.797376 systemd[1]: Stopped target remote-fs.target.
Feb 8 23:14:29.797855 systemd[1]: Stopped target remote-fs-pre.target.
Feb 8 23:14:29.798214 systemd[1]: Stopped target sysinit.target.
Feb 8 23:14:29.798634 systemd[1]: Stopped target local-fs.target.
Feb 8 23:14:29.799044 systemd[1]: Stopped target local-fs-pre.target.
Feb 8 23:14:29.799417 systemd[1]: Stopped target swap.target.
Feb 8 23:14:29.799757 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 8 23:14:29.799879 systemd[1]: Stopped dracut-pre-mount.service.
Feb 8 23:14:29.800242 systemd[1]: Stopped target cryptsetup.target.
Feb 8 23:14:29.800504 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 8 23:14:29.800620 systemd[1]: Stopped dracut-initqueue.service.
Feb 8 23:14:29.800968 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 8 23:14:29.801085 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 8 23:14:29.801290 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 8 23:14:29.863000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.865000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.894722 iscsid[825]: iscsid shutting down.
Feb 8 23:14:29.801398 systemd[1]: Stopped ignition-files.service.
Feb 8 23:14:29.801675 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 8 23:14:29.902000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.801779 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 8 23:14:29.850159 systemd[1]: Stopping ignition-mount.service...
Feb 8 23:14:29.863178 systemd[1]: Stopping iscsid.service...
Feb 8 23:14:29.863496 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 8 23:14:29.863639 systemd[1]: Stopped kmod-static-nodes.service.
Feb 8 23:14:29.917903 ignition[1017]: INFO : Ignition 2.14.0
Feb 8 23:14:29.917903 ignition[1017]: INFO : Stage: umount
Feb 8 23:14:29.917903 ignition[1017]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:14:29.917903 ignition[1017]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:14:29.918000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.864858 systemd[1]: Stopping sysroot-boot.service...
Feb 8 23:14:29.935000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.939445 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:14:29.865013 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 8 23:14:29.946004 ignition[1017]: INFO : umount: umount passed
Feb 8 23:14:29.946004 ignition[1017]: INFO : Ignition finished successfully
Feb 8 23:14:29.865180 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 8 23:14:29.954000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.865951 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 8 23:14:29.959000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.866086 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 8 23:14:29.963000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.906608 systemd[1]: iscsid.service: Deactivated successfully.
Feb 8 23:14:29.966000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.907991 systemd[1]: Stopped iscsid.service.
Feb 8 23:14:29.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.920236 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 8 23:14:29.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.920729 systemd[1]: Finished initrd-cleanup.service.
Feb 8 23:14:29.979000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.938794 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 8 23:14:29.944528 systemd[1]: Stopping iscsiuio.service...
Feb 8 23:14:29.951375 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 8 23:14:29.951501 systemd[1]: Stopped iscsiuio.service.
Feb 8 23:14:29.954803 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 8 23:14:29.954888 systemd[1]: Stopped ignition-mount.service.
Feb 8 23:14:30.004000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.960430 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 8 23:14:30.008000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.960550 systemd[1]: Stopped sysroot-boot.service.
Feb 8 23:14:29.963827 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 8 23:14:29.963872 systemd[1]: Stopped ignition-disks.service.
Feb 8 23:14:29.967444 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 8 23:14:29.967541 systemd[1]: Stopped ignition-kargs.service.
Feb 8 23:14:30.021000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.970676 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 8 23:14:29.970723 systemd[1]: Stopped ignition-fetch.service.
Feb 8 23:14:30.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.974821 systemd[1]: Stopped target network.target.
Feb 8 23:14:29.976625 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 8 23:14:30.030000 audit: BPF prog-id=6 op=UNLOAD
Feb 8 23:14:29.976670 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 8 23:14:29.980076 systemd[1]: Stopped target paths.target.
Feb 8 23:14:29.981643 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 8 23:14:30.043000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.985525 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 8 23:14:30.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.991342 systemd[1]: Stopped target slices.target.
Feb 8 23:14:30.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.993071 systemd[1]: Stopped target sockets.target.
Feb 8 23:14:29.996266 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 8 23:14:30.058000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:29.996307 systemd[1]: Closed iscsid.socket.
Feb 8 23:14:29.997780 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 8 23:14:29.997819 systemd[1]: Closed iscsiuio.socket.
Feb 8 23:14:30.001522 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 8 23:14:30.001573 systemd[1]: Stopped ignition-setup.service.
Feb 8 23:14:30.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:30.005123 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 8 23:14:30.074000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:30.078000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:30.005172 systemd[1]: Stopped initrd-setup-root.service.
Feb 8 23:14:30.009078 systemd[1]: Stopping systemd-networkd.service...
Feb 8 23:14:30.084000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:30.012290 systemd[1]: Stopping systemd-resolved.service...
Feb 8 23:14:30.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:30.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:30.016553 systemd-networkd[817]: eth0: DHCPv6 lease lost
Feb 8 23:14:30.091000 audit: BPF prog-id=9 op=UNLOAD
Feb 8 23:14:30.018590 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 8 23:14:30.018686 systemd[1]: Stopped systemd-networkd.service.
Feb 8 23:14:30.023547 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 8 23:14:30.023636 systemd[1]: Stopped systemd-resolved.service.
Feb 8 23:14:30.030965 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 8 23:14:30.031001 systemd[1]: Closed systemd-networkd.socket.
Feb 8 23:14:30.035528 systemd[1]: Stopping network-cleanup.service...
Feb 8 23:14:30.039839 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 8 23:14:30.039903 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 8 23:14:30.044181 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:14:30.044235 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:14:30.047986 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 8 23:14:30.048035 systemd[1]: Stopped systemd-modules-load.service.
Feb 8 23:14:30.049983 systemd[1]: Stopping systemd-udevd.service...
Feb 8 23:14:30.055353 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 8 23:14:30.055492 systemd[1]: Stopped systemd-udevd.service.
Feb 8 23:14:30.062967 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 8 23:14:30.063031 systemd[1]: Closed systemd-udevd-control.socket.
Feb 8 23:14:30.066003 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 8 23:14:30.066053 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 8 23:14:30.069998 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 8 23:14:30.144827 kernel: hv_netvsc 000d3ab9-af5b-000d-3ab9-af5b000d3ab9 eth0: Data path switched from VF: enP36896s1
Feb 8 23:14:30.070049 systemd[1]: Stopped dracut-pre-udev.service.
Feb 8 23:14:30.073442 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 8 23:14:30.073506 systemd[1]: Stopped dracut-cmdline.service.
Feb 8 23:14:30.075347 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 8 23:14:30.075385 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 8 23:14:30.080021 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 8 23:14:30.083000 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 8 23:14:30.083063 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 8 23:14:30.088650 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 8 23:14:30.088740 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 8 23:14:30.166903 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 8 23:14:30.166995 systemd[1]: Stopped network-cleanup.service.
Feb 8 23:14:30.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:14:30.172569 systemd[1]: Reached target initrd-switch-root.target.
Feb 8 23:14:30.177043 systemd[1]: Starting initrd-switch-root.service...
Feb 8 23:14:30.188038 systemd[1]: Switching root.
Feb 8 23:14:30.212605 systemd-journald[183]: Journal stopped
Feb 8 23:14:44.586662 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 8 23:14:44.586699 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 8 23:14:44.586713 kernel: SELinux: Class anon_inode not defined in policy.
Feb 8 23:14:44.586721 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 8 23:14:44.586729 kernel: SELinux: policy capability network_peer_controls=1
Feb 8 23:14:44.586738 kernel: SELinux: policy capability open_perms=1
Feb 8 23:14:44.586750 kernel: SELinux: policy capability extended_socket_class=1
Feb 8 23:14:44.586759 kernel: SELinux: policy capability always_check_network=0
Feb 8 23:14:44.586767 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 8 23:14:44.586775 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 8 23:14:44.586783 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 8 23:14:44.586794 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 8 23:14:44.586803 systemd[1]: Successfully loaded SELinux policy in 316.015ms.
Feb 8 23:14:44.586816 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.533ms.
Feb 8 23:14:44.586831 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:14:44.586843 systemd[1]: Detected virtualization microsoft.
Feb 8 23:14:44.586854 systemd[1]: Detected architecture x86-64.
Feb 8 23:14:44.586863 systemd[1]: Detected first boot.
Feb 8 23:14:44.586878 systemd[1]: Hostname set to <ci-3510.3.2-a-56a09d6613>.
Feb 8 23:14:44.586887 systemd[1]: Initializing machine ID from random generator.
Feb 8 23:14:44.586899 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 8 23:14:44.586908 kernel: kauditd_printk_skb: 41 callbacks suppressed
Feb 8 23:14:44.586920 kernel: audit: type=1400 audit(1707434075.199:89): avc: denied { associate } for pid=1050 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 8 23:14:44.586931 kernel: audit: type=1300 audit(1707434075.199:89): arch=c000003e syscall=188 success=yes exit=0 a0=c00018e7d2 a1=c00018aa80 a2=c00019ccc0 a3=32 items=0 ppid=1033 pid=1050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:14:44.586945 kernel: audit: type=1327 audit(1707434075.199:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 8 23:14:44.586957 kernel: audit: type=1400 audit(1707434075.207:90): avc: denied { associate } for pid=1050 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 8 23:14:44.586968 kernel: audit: type=1300 audit(1707434075.207:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018e8a9 a2=1ed a3=0 items=2 ppid=1033 pid=1050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:14:44.586978 kernel: audit: type=1307 audit(1707434075.207:90): cwd="/"
Feb 8 23:14:44.586990 kernel: audit: type=1302 audit(1707434075.207:90): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:14:44.587002 kernel: audit: type=1302 audit(1707434075.207:90): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:14:44.587015 kernel: audit: type=1327 audit(1707434075.207:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 8 23:14:44.587027 systemd[1]: Populated /etc with preset unit settings.
Feb 8 23:14:44.587037 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 8 23:14:44.587050 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 8 23:14:44.587062 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 8 23:14:44.587072 kernel: audit: type=1334 audit(1707434084.095:91): prog-id=12 op=LOAD
Feb 8 23:14:44.587082 kernel: audit: type=1334 audit(1707434084.095:92): prog-id=3 op=UNLOAD
Feb 8 23:14:44.587092 kernel: audit: type=1334 audit(1707434084.101:93): prog-id=13 op=LOAD
Feb 8 23:14:44.587106 kernel: audit: type=1334 audit(1707434084.112:94): prog-id=14 op=LOAD
Feb 8 23:14:44.587115 kernel: audit: type=1334 audit(1707434084.112:95): prog-id=4 op=UNLOAD
Feb 8 23:14:44.587130 kernel: audit: type=1334 audit(1707434084.112:96): prog-id=5 op=UNLOAD
Feb 8 23:14:44.587139 kernel: audit: type=1334 audit(1707434084.122:97): prog-id=15 op=LOAD
Feb 8 23:14:44.587150 kernel: audit: type=1334 audit(1707434084.122:98): prog-id=12 op=UNLOAD
Feb 8 23:14:44.587159 kernel: audit: type=1334 audit(1707434084.127:99): prog-id=16 op=LOAD
Feb 8 23:14:44.587171 kernel: audit: type=1334 audit(1707434084.132:100): prog-id=17 op=LOAD
Feb 8 23:14:44.587180 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 8 23:14:44.587192 systemd[1]: Stopped initrd-switch-root.service.
Feb 8 23:14:44.587207 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 8 23:14:44.587217 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 8 23:14:44.587230 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 8 23:14:44.587240 systemd[1]: Created slice system-getty.slice.
Feb 8 23:14:44.587252 systemd[1]: Created slice system-modprobe.slice.
Feb 8 23:14:44.587263 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 8 23:14:44.587275 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 8 23:14:44.587289 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 8 23:14:44.587299 systemd[1]: Created slice user.slice.
Feb 8 23:14:44.587311 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:14:44.587321 systemd[1]: Started systemd-ask-password-wall.path.
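The proctitle= values in the type=1327 records above are the audited process's command line, hex-encoded with NUL bytes separating the argv entries (and apparently cut off at the audit subsystem's 128-byte limit). Decoding the one logged for torcx-generator:

    # Hex string copied from the PROCTITLE records above.
    hexstr = (
        "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72"
        "732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F6765"
        "6E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561"
        "726C79002F72756E2F73797374656D642F67656E657261746F722E6C61"
    )

    argv = [a.decode() for a in bytes.fromhex(hexstr).split(b"\x00")]
    print(argv)
    # ['/usr/lib/systemd/system-generators/torcx-generator',
    #  '/run/systemd/generator', '/run/systemd/generator.early',
    #  '/run/systemd/generator.la']   <- last argument truncated in the record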
Feb 8 23:14:44.587333 systemd[1]: Set up automount boot.automount.
Feb 8 23:14:44.587346 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 8 23:14:44.587358 systemd[1]: Stopped target initrd-switch-root.target.
Feb 8 23:14:44.587370 systemd[1]: Stopped target initrd-fs.target.
Feb 8 23:14:44.587381 systemd[1]: Stopped target initrd-root-fs.target.
Feb 8 23:14:44.587396 systemd[1]: Reached target integritysetup.target.
Feb 8 23:14:44.587406 systemd[1]: Reached target remote-cryptsetup.target.
Feb 8 23:14:44.587419 systemd[1]: Reached target remote-fs.target.
Feb 8 23:14:44.587428 systemd[1]: Reached target slices.target.
Feb 8 23:14:44.587440 systemd[1]: Reached target swap.target.
Feb 8 23:14:44.587451 systemd[1]: Reached target torcx.target.
Feb 8 23:14:44.587462 systemd[1]: Reached target veritysetup.target.
Feb 8 23:14:44.587564 systemd[1]: Listening on systemd-coredump.socket.
Feb 8 23:14:44.587578 systemd[1]: Listening on systemd-initctl.socket.
Feb 8 23:14:44.587588 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:14:44.587601 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:14:44.587613 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:14:44.587626 systemd[1]: Listening on systemd-userdbd.socket.
Feb 8 23:14:44.587639 systemd[1]: Mounting dev-hugepages.mount...
Feb 8 23:14:44.587649 systemd[1]: Mounting dev-mqueue.mount...
Feb 8 23:14:44.587662 systemd[1]: Mounting media.mount...
Feb 8 23:14:44.587673 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 8 23:14:44.587687 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 8 23:14:44.587701 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 8 23:14:44.587713 systemd[1]: Mounting tmp.mount...
Feb 8 23:14:44.587726 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 8 23:14:44.587741 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 8 23:14:44.587752 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:14:44.587765 systemd[1]: Starting modprobe@configfs.service...
Feb 8 23:14:44.587775 systemd[1]: Starting modprobe@dm_mod.service...
Feb 8 23:14:44.587787 systemd[1]: Starting modprobe@drm.service...
Feb 8 23:14:44.587797 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 8 23:14:44.587810 systemd[1]: Starting modprobe@fuse.service...
Feb 8 23:14:44.587821 systemd[1]: Starting modprobe@loop.service...
Feb 8 23:14:44.587832 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 8 23:14:44.587847 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 8 23:14:44.587857 systemd[1]: Stopped systemd-fsck-root.service.
Feb 8 23:14:44.587870 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 8 23:14:44.587880 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 8 23:14:44.587892 systemd[1]: Stopped systemd-journald.service.
Feb 8 23:14:44.587902 systemd[1]: Starting systemd-journald.service...
Feb 8 23:14:44.587912 kernel: loop: module loaded
Feb 8 23:14:44.587923 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:14:44.587934 systemd[1]: Starting systemd-network-generator.service...
Feb 8 23:14:44.587948 systemd[1]: Starting systemd-remount-fs.service...
Feb 8 23:14:44.587961 systemd[1]: Starting systemd-udev-trigger.service...
Feb 8 23:14:44.587971 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 8 23:14:44.587984 systemd[1]: Stopped verity-setup.service. Feb 8 23:14:44.587994 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:14:44.588006 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:14:44.588018 kernel: fuse: init (API version 7.34) Feb 8 23:14:44.588028 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:14:44.588040 systemd[1]: Mounted media.mount. Feb 8 23:14:44.588052 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:14:44.588064 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:14:44.588074 systemd[1]: Mounted tmp.mount. Feb 8 23:14:44.588086 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:14:44.588100 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:14:44.588118 systemd-journald[1147]: Journal started Feb 8 23:14:44.588171 systemd-journald[1147]: Runtime Journal (/run/log/journal/4682fcfaba5e4e739ea103468ffccddf) is 8.0M, max 159.0M, 151.0M free. Feb 8 23:14:32.850000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:14:33.669000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:14:33.693000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:14:33.693000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:14:33.693000 audit: BPF prog-id=10 op=LOAD Feb 8 23:14:33.693000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:14:33.693000 audit: BPF prog-id=11 op=LOAD Feb 8 23:14:33.693000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:14:35.199000 audit[1050]: AVC avc: denied { associate } for pid=1050 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:14:35.199000 audit[1050]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00018e7d2 a1=c00018aa80 a2=c00019ccc0 a3=32 items=0 ppid=1033 pid=1050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:14:35.199000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:14:35.207000 audit[1050]: AVC avc: denied { associate } for pid=1050 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:14:35.207000 audit[1050]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00018e8a9 a2=1ed a3=0 items=2 ppid=1033 pid=1050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 
23:14:35.207000 audit: CWD cwd="/" Feb 8 23:14:35.207000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:35.207000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:35.207000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:14:44.095000 audit: BPF prog-id=12 op=LOAD Feb 8 23:14:44.095000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:14:44.101000 audit: BPF prog-id=13 op=LOAD Feb 8 23:14:44.112000 audit: BPF prog-id=14 op=LOAD Feb 8 23:14:44.112000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:14:44.112000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:14:44.122000 audit: BPF prog-id=15 op=LOAD Feb 8 23:14:44.122000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:14:44.127000 audit: BPF prog-id=16 op=LOAD Feb 8 23:14:44.132000 audit: BPF prog-id=17 op=LOAD Feb 8 23:14:44.132000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:14:44.132000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:14:44.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.150000 audit: BPF prog-id=15 op=UNLOAD Feb 8 23:14:44.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.157000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.462000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.477000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:14:44.478000 audit: BPF prog-id=18 op=LOAD Feb 8 23:14:44.478000 audit: BPF prog-id=19 op=LOAD Feb 8 23:14:44.478000 audit: BPF prog-id=20 op=LOAD Feb 8 23:14:44.478000 audit: BPF prog-id=16 op=UNLOAD Feb 8 23:14:44.478000 audit: BPF prog-id=17 op=UNLOAD Feb 8 23:14:44.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.582000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:14:44.582000 audit[1147]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffd1861f130 a2=4000 a3=7ffd1861f1cc items=0 ppid=1 pid=1147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:14:44.582000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:14:44.588000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:35.112204 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:14:44.095007 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:14:35.139427 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:14:44.133396 systemd[1]: systemd-journald.service: Deactivated successfully. 
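The PROCTITLE fields in the audit records above are the process command line, hex-encoded with NUL bytes separating the arguments, and the kernel truncates long ones (which is why the last argument below looks cut off). Decoding the torcx-generator record verbatim, a quick sketch:

```python
# Hex string copied from the PROCTITLE audit record above (split for readability).
raw = bytes.fromhex(
    "2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F7273"
    "2F746F7263782D67656E657261746F7200"
    "2F72756E2F73797374656D642F67656E657261746F7200"
    "2F72756E2F73797374656D642F67656E657261746F722E6561726C7900"
    "2F72756E2F73797374656D642F67656E657261746F722E6C61"
)
for arg in raw.split(b"\x00"):  # arguments are NUL-separated
    print(arg.decode())
# /usr/lib/systemd/system-generators/torcx-generator
# /run/systemd/generator
# /run/systemd/generator.early
# /run/systemd/generator.la   (truncated "generator.late")
```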
Feb 8 23:14:35.139458 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:14:35.139530 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:14:35.139543 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:14:35.139595 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:14:35.139610 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:14:35.139853 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:14:35.139892 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:14:35.139909 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:14:35.182069 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:14:35.182125 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:14:35.182162 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:14:35.182189 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:14:35.182213 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:14:35.182229 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:35Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:14:42.924433 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:42Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:14:42.924683 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:42Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy 
/bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:14:42.924780 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:42Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:14:42.924940 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:42Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:14:42.924987 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:42Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:14:42.925039 /usr/lib/systemd/system-generators/torcx-generator[1050]: time="2024-02-08T23:14:42Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:14:44.597593 systemd[1]: Started systemd-journald.service. Feb 8 23:14:44.593000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.596000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.596000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.594805 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:14:44.594984 systemd[1]: Finished modprobe@configfs.service. Feb 8 23:14:44.597320 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:14:44.597469 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:14:44.599000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.599799 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:14:44.599937 systemd[1]: Finished modprobe@drm.service. Feb 8 23:14:44.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:14:44.601000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.602155 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:14:44.602296 systemd[1]: Finished modprobe@efi_pstore.service. Feb 8 23:14:44.604000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.604680 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:14:44.604819 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:14:44.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.606000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.606987 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:14:44.607120 systemd[1]: Finished modprobe@loop.service. Feb 8 23:14:44.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.609345 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:14:44.611000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.611979 systemd[1]: Finished systemd-network-generator.service. Feb 8 23:14:44.613000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.614363 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:14:44.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.616996 systemd[1]: Reached target network-pre.target. Feb 8 23:14:44.620120 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:14:44.623288 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:14:44.625240 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:14:44.627079 systemd[1]: Starting systemd-hwdb-update.service... 
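The torcx-generator trace above walks a fixed list of store directories, skips the ones that do not exist ("store skipped"), and caches every <name>:<reference>.torcx.tgz archive it finds (e.g. docker:com.coreos.cl.torcx.tgz). A rough Python sketch of that discovery pass, using the store paths from the trace; I am assuming earlier stores take precedence, which this boot never exercises since only the vendor store exists:

```python
from pathlib import Path

# Search order copied from the "common configuration parsed" line above.
STORES = [
    Path("/usr/share/torcx/store"),
    Path("/usr/share/oem/torcx/store/3510.3.2"),
    Path("/usr/share/oem/torcx/store"),
    Path("/var/lib/torcx/store/3510.3.2"),
    Path("/var/lib/torcx/store"),
]

def find_archives() -> dict[tuple[str, str], Path]:
    cache: dict[tuple[str, str], Path] = {}
    for store in STORES:
        if not store.is_dir():
            continue  # corresponds to the "store skipped" log lines
        for tgz in store.glob("*.torcx.tgz"):
            name, _, reference = tgz.name[: -len(".torcx.tgz")].partition(":")
            cache.setdefault((name, reference), tgz)  # first hit wins (assumed)
    return cache
```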
Feb 8 23:14:44.629885 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:14:44.631862 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:14:44.633233 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:14:44.635448 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:14:44.636890 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:14:44.641042 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:14:44.647043 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:14:44.649692 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:14:44.674279 systemd[1]: Finished systemd-random-seed.service. Feb 8 23:14:44.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.676887 systemd[1]: Reached target first-boot-complete.target. Feb 8 23:14:44.682350 systemd-journald[1147]: Time spent on flushing to /var/log/journal/4682fcfaba5e4e739ea103468ffccddf is 20.062ms for 1197 entries. Feb 8 23:14:44.682350 systemd-journald[1147]: System Journal (/var/log/journal/4682fcfaba5e4e739ea103468ffccddf) is 8.0M, max 2.6G, 2.6G free. Feb 8 23:14:44.762746 systemd-journald[1147]: Received client request to flush runtime journal. Feb 8 23:14:44.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.713000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.766000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:44.699141 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:14:44.702709 systemd[1]: Starting systemd-udev-settle.service... Feb 8 23:14:44.767274 udevadm[1174]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Feb 8 23:14:44.711933 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:14:44.763875 systemd[1]: Finished systemd-journal-flush.service. Feb 8 23:14:45.156033 systemd[1]: Finished systemd-sysusers.service. Feb 8 23:14:45.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:46.105399 systemd[1]: Finished systemd-hwdb-update.service. Feb 8 23:14:46.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:46.107000 audit: BPF prog-id=21 op=LOAD Feb 8 23:14:46.107000 audit: BPF prog-id=22 op=LOAD Feb 8 23:14:46.107000 audit: BPF prog-id=7 op=UNLOAD Feb 8 23:14:46.108000 audit: BPF prog-id=8 op=UNLOAD Feb 8 23:14:46.109210 systemd[1]: Starting systemd-udevd.service... 
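The journald status lines above report 20.062ms spent flushing 1197 runtime-journal entries to /var/log/journal; that works out to roughly 17 microseconds per entry. A tiny parser for that line, as a sanity check:

```python
import re

line = ("Time spent on flushing to /var/log/journal/"
        "4682fcfaba5e4e739ea103468ffccddf is 20.062ms for 1197 entries.")
m = re.search(r"is ([\d.]+)ms for (\d+) entries", line)
ms, entries = float(m.group(1)), int(m.group(2))
print(f"{ms / entries * 1000:.1f} us/entry")  # -> 16.8 us/entry
```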
Feb 8 23:14:46.127962 systemd-udevd[1176]: Using default interface naming scheme 'v252'. Feb 8 23:14:46.459654 systemd[1]: Started systemd-udevd.service. Feb 8 23:14:46.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:46.463000 audit: BPF prog-id=23 op=LOAD Feb 8 23:14:46.464859 systemd[1]: Starting systemd-networkd.service... Feb 8 23:14:46.499292 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 8 23:14:46.573504 kernel: mousedev: PS/2 mouse device common for all mice Feb 8 23:14:46.593267 kernel: hv_utils: Registering HyperV Utility Driver Feb 8 23:14:46.593375 kernel: hv_vmbus: registering driver hv_utils Feb 8 23:14:46.595573 systemd[1]: Starting systemd-userdbd.service... Feb 8 23:14:46.594000 audit: BPF prog-id=24 op=LOAD Feb 8 23:14:46.594000 audit: BPF prog-id=25 op=LOAD Feb 8 23:14:46.594000 audit: BPF prog-id=26 op=LOAD Feb 8 23:14:46.627599 kernel: hv_utils: Heartbeat IC version 3.0 Feb 8 23:14:46.627717 kernel: hv_utils: Shutdown IC version 3.2 Feb 8 23:14:46.584000 audit[1187]: AVC avc: denied { confidentiality } for pid=1187 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:14:46.630794 kernel: hv_utils: TimeSync IC version 4.0 Feb 8 23:14:47.440963 kernel: hv_vmbus: registering driver hyperv_fb Feb 8 23:14:47.455592 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 8 23:14:47.455707 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 8 23:14:47.462977 kernel: Console: switching to colour dummy device 80x25 Feb 8 23:14:47.471079 kernel: Console: switching to colour frame buffer device 128x48 Feb 8 23:14:47.474580 systemd[1]: Started systemd-userdbd.service. Feb 8 23:14:47.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:14:47.485021 kernel: hv_vmbus: registering driver hv_balloon Feb 8 23:14:47.489965 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 8 23:14:46.584000 audit[1187]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=563b2b171950 a1=f884 a2=7f477dc54bc5 a3=5 items=12 ppid=1176 pid=1187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:14:46.584000 audit: CWD cwd="/" Feb 8 23:14:46.584000 audit: PATH item=0 name=(null) inode=1237 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:46.584000 audit: PATH item=1 name=(null) inode=15999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:46.584000 audit: PATH item=2 name=(null) inode=15999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:46.584000 audit: PATH item=3 name=(null) inode=16000 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:46.584000 audit: PATH item=4 name=(null) inode=15999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:46.584000 audit: PATH item=5 name=(null) inode=16001 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:46.584000 audit: PATH item=6 name=(null) inode=15999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:46.584000 audit: PATH item=7 name=(null) inode=16002 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:46.584000 audit: PATH item=8 name=(null) inode=15999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:46.584000 audit: PATH item=9 name=(null) inode=16003 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:46.584000 audit: PATH item=10 name=(null) inode=15999 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:46.584000 audit: PATH item=11 name=(null) inode=16004 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:14:46.584000 audit: PROCTITLE proctitle="(udev-worker)" Feb 8 23:14:47.654971 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 8 23:14:47.671031 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1185) Feb 8 23:14:47.716745 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
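The odd unit name dev-disk-by\x2dlabel-OEM.device above comes from systemd's path escaping: slashes become dashes, so any literal '-' in the path ('by-label') must be hex-escaped to stay unambiguous. A simplified Python rendering of the rule (ASCII paths assumed; see systemd-escape(1) for the full behavior):

```python
def escape_component(s: str) -> str:
    out = []
    for i, ch in enumerate(s):
        if ch.isalnum() or ch in "_:" or (ch == "." and i > 0):
            out.append(ch)
        else:  # '-' and anything else unusual becomes \xNN
            out.extend("\\x%02x" % b for b in ch.encode())
    return "".join(out)

def path_to_unit(path: str, suffix: str = ".device") -> str:
    components = path.strip("/").split("/")
    return "-".join(escape_component(c) for c in components) + suffix

assert path_to_unit("/dev/disk/by-label/OEM") == "dev-disk-by\\x2dlabel-OEM.device"
```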
Feb 8 23:14:47.799326 systemd[1]: Finished systemd-udev-settle.service. Feb 8 23:14:47.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:47.802901 systemd[1]: Starting lvm2-activation-early.service... Feb 8 23:14:47.830111 systemd-networkd[1182]: lo: Link UP Feb 8 23:14:47.830121 systemd-networkd[1182]: lo: Gained carrier Feb 8 23:14:47.830707 systemd-networkd[1182]: Enumeration completed Feb 8 23:14:47.830852 systemd[1]: Started systemd-networkd.service. Feb 8 23:14:47.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:47.835048 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:14:47.863103 systemd-networkd[1182]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:14:47.917965 kernel: mlx5_core 9020:00:02.0 enP36896s1: Link up Feb 8 23:14:47.966986 kernel: hv_netvsc 000d3ab9-af5b-000d-3ab9-af5b000d3ab9 eth0: Data path switched to VF: enP36896s1 Feb 8 23:14:47.967869 systemd-networkd[1182]: enP36896s1: Link UP Feb 8 23:14:47.968174 systemd-networkd[1182]: eth0: Link UP Feb 8 23:14:47.968266 systemd-networkd[1182]: eth0: Gained carrier Feb 8 23:14:47.973220 systemd-networkd[1182]: enP36896s1: Gained carrier Feb 8 23:14:48.001084 systemd-networkd[1182]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:14:48.198411 lvm[1252]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:14:48.225037 systemd[1]: Finished lvm2-activation-early.service. Feb 8 23:14:48.227000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:48.227754 systemd[1]: Reached target cryptsetup.target. Feb 8 23:14:48.231095 systemd[1]: Starting lvm2-activation.service... Feb 8 23:14:48.235644 lvm[1254]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 8 23:14:48.259876 systemd[1]: Finished lvm2-activation.service. Feb 8 23:14:48.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:48.262286 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:14:48.264550 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 8 23:14:48.264584 systemd[1]: Reached target local-fs.target. Feb 8 23:14:48.266804 systemd[1]: Reached target machines.target. Feb 8 23:14:48.270056 systemd[1]: Starting ldconfig.service... Feb 8 23:14:48.272029 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 8 23:14:48.272136 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:14:48.273259 systemd[1]: Starting systemd-boot-update.service... 
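The DHCPv4 lease above (10.200.8.40/24, gateway 10.200.8.1, served by 168.63.129.16, the fixed Azure platform address) pins down the on-link network. Checking the arithmetic with the stdlib ipaddress module:

```python
import ipaddress

iface = ipaddress.ip_interface("10.200.8.40/24")   # lease from the log above
gateway = ipaddress.ip_address("10.200.8.1")

print(iface.network)                # 10.200.8.0/24
print(iface.network.num_addresses)  # 256
assert gateway in iface.network     # gateway is on-link, no extra route needed
```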
Feb 8 23:14:48.276336 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 8 23:14:48.279704 systemd[1]: Starting systemd-machine-id-commit.service... Feb 8 23:14:48.281985 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:14:48.282077 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 8 23:14:48.283161 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 8 23:14:48.330160 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1256 (bootctl) Feb 8 23:14:48.331445 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 8 23:14:48.406668 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 8 23:14:48.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:48.566931 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 8 23:14:48.567588 systemd[1]: Finished systemd-machine-id-commit.service. Feb 8 23:14:48.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:48.647739 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 8 23:14:48.708754 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 8 23:14:48.799574 systemd-tmpfiles[1259]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 8 23:14:49.358789 systemd-fsck[1265]: fsck.fat 4.2 (2021-01-31) Feb 8 23:14:49.358789 systemd-fsck[1265]: /dev/sda1: 789 files, 115332/258078 clusters Feb 8 23:14:49.360756 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 8 23:14:49.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:49.365304 systemd[1]: Mounting boot.mount... Feb 8 23:14:49.392518 systemd[1]: Mounted boot.mount. Feb 8 23:14:49.406051 systemd[1]: Finished systemd-boot-update.service. Feb 8 23:14:49.407000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:49.416033 systemd-networkd[1182]: eth0: Gained IPv6LL Feb 8 23:14:49.420677 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:14:49.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.155417 systemd[1]: Finished systemd-tmpfiles-setup.service. Feb 8 23:14:51.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:14:51.159229 systemd[1]: Starting audit-rules.service... Feb 8 23:14:51.160250 kernel: kauditd_printk_skb: 78 callbacks suppressed Feb 8 23:14:51.160309 kernel: audit: type=1130 audit(1707434091.157:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.173566 systemd[1]: Starting clean-ca-certificates.service... Feb 8 23:14:51.177019 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 8 23:14:51.181480 systemd[1]: Starting systemd-resolved.service... Feb 8 23:14:51.191156 kernel: audit: type=1334 audit(1707434091.179:163): prog-id=27 op=LOAD Feb 8 23:14:51.191233 kernel: audit: type=1334 audit(1707434091.186:164): prog-id=28 op=LOAD Feb 8 23:14:51.179000 audit: BPF prog-id=27 op=LOAD Feb 8 23:14:51.186000 audit: BPF prog-id=28 op=LOAD Feb 8 23:14:51.191518 systemd[1]: Starting systemd-timesyncd.service... Feb 8 23:14:51.194353 systemd[1]: Starting systemd-update-utmp.service... Feb 8 23:14:51.218000 audit[1277]: SYSTEM_BOOT pid=1277 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.230144 kernel: audit: type=1127 audit(1707434091.218:165): pid=1277 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.233088 systemd[1]: Finished systemd-update-utmp.service. Feb 8 23:14:51.246091 kernel: audit: type=1130 audit(1707434091.234:166): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.298115 kernel: audit: type=1130 audit(1707434091.285:167): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.283546 systemd[1]: Finished clean-ca-certificates.service. Feb 8 23:14:51.286180 systemd[1]: Started systemd-timesyncd.service. Feb 8 23:14:51.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.299684 systemd[1]: Reached target time-set.target. Feb 8 23:14:51.311401 kernel: audit: type=1130 audit(1707434091.298:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:14:51.311823 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 8 23:14:51.381695 systemd-resolved[1275]: Positive Trust Anchors: Feb 8 23:14:51.381715 systemd-resolved[1275]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 8 23:14:51.381777 systemd-resolved[1275]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 8 23:14:51.438778 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 8 23:14:51.441000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.454969 kernel: audit: type=1130 audit(1707434091.441:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.509269 systemd-resolved[1275]: Using system hostname 'ci-3510.3.2-a-56a09d6613'. Feb 8 23:14:51.510882 systemd[1]: Started systemd-resolved.service. Feb 8 23:14:51.512000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.513548 systemd[1]: Reached target network.target. Feb 8 23:14:51.526009 kernel: audit: type=1130 audit(1707434091.512:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:14:51.527639 systemd[1]: Reached target network-online.target. Feb 8 23:14:51.529821 systemd[1]: Reached target nss-lookup.target. Feb 8 23:14:51.570000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:14:51.572089 systemd[1]: Finished audit-rules.service. Feb 8 23:14:51.574223 augenrules[1292]: No rules Feb 8 23:14:51.581975 kernel: audit: type=1305 audit(1707434091.570:171): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 8 23:14:51.570000 audit[1292]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffcfa16b250 a2=420 a3=0 items=0 ppid=1271 pid=1292 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:14:51.570000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 8 23:14:56.617886 ldconfig[1255]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 8 23:14:56.625693 systemd[1]: Finished ldconfig.service. Feb 8 23:14:56.629492 systemd[1]: Starting systemd-update-done.service... 
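The positive trust anchor that systemd-resolved prints above is the DNSSEC root key (KSK-2017) in DS form: owner, class, record type, key tag, algorithm, digest type, digest. Pulling it apart:

```python
ds = (". IN DS 20326 8 2 "
      "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

owner, klass, rtype, key_tag, alg, digest_type, digest = ds.split()
assert (klass, rtype) == ("IN", "DS")
print(f"key tag {key_tag}, algorithm {alg} (RSASHA256), "
      f"digest type {digest_type} (SHA-256), {len(digest) * 4}-bit digest")
```

The negative trust anchors listed after it (10.in-addr.arpa, home.arpa, .local and so on) are the usual set of private and special-use zones for which DNSSEC validation is skipped.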
Feb 8 23:14:56.651464 systemd[1]: Finished systemd-update-done.service. Feb 8 23:14:56.654148 systemd[1]: Reached target sysinit.target. Feb 8 23:14:56.656355 systemd[1]: Started motdgen.path. Feb 8 23:14:56.658297 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 8 23:14:56.661204 systemd[1]: Started logrotate.timer. Feb 8 23:14:56.662890 systemd[1]: Started mdadm.timer. Feb 8 23:14:56.664369 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 8 23:14:56.666257 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 8 23:14:56.666299 systemd[1]: Reached target paths.target. Feb 8 23:14:56.668070 systemd[1]: Reached target timers.target. Feb 8 23:14:56.670985 systemd[1]: Listening on dbus.socket. Feb 8 23:14:56.673671 systemd[1]: Starting docker.socket... Feb 8 23:14:56.677430 systemd[1]: Listening on sshd.socket. Feb 8 23:14:56.679342 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:14:56.679787 systemd[1]: Listening on docker.socket. Feb 8 23:14:56.682925 systemd[1]: Reached target sockets.target. Feb 8 23:14:56.685096 systemd[1]: Reached target basic.target. Feb 8 23:14:56.686971 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:14:56.687004 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 8 23:14:56.688021 systemd[1]: Starting containerd.service... Feb 8 23:14:56.690850 systemd[1]: Starting dbus.service... Feb 8 23:14:56.693372 systemd[1]: Starting enable-oem-cloudinit.service... Feb 8 23:14:56.696331 systemd[1]: Starting extend-filesystems.service... Feb 8 23:14:56.698514 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 8 23:14:56.699807 systemd[1]: Starting motdgen.service... Feb 8 23:14:56.703729 systemd[1]: Started nvidia.service. Feb 8 23:14:56.707675 systemd[1]: Starting prepare-cni-plugins.service... Feb 8 23:14:56.711458 systemd[1]: Starting prepare-critools.service... Feb 8 23:14:56.714238 systemd[1]: Starting prepare-helm.service... Feb 8 23:14:56.717319 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 8 23:14:56.720998 systemd[1]: Starting sshd-keygen.service... Feb 8 23:14:56.727442 systemd[1]: Starting systemd-logind.service... Feb 8 23:14:56.730134 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 8 23:14:56.730213 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 8 23:14:56.730732 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 8 23:14:56.731570 systemd[1]: Starting update-engine.service... Feb 8 23:14:56.734491 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 8 23:14:56.744370 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 8 23:14:56.744819 systemd[1]: Finished ssh-key-proc-cmdline.service. 
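The dbus.socket, docker.socket and sshd.socket units above are socket activation: systemd binds the sockets itself and hands them to the service at startup via the sd_listen_fds(3) protocol (LISTEN_PID/LISTEN_FDS in the environment, descriptors starting at fd 3). A minimal Python sketch of the receiving side, not any particular daemon's actual code:

```python
import os
import socket

SD_LISTEN_FDS_START = 3  # first descriptor systemd passes to the service

def listen_fds() -> list[socket.socket]:
    # Refuse descriptors addressed to another process (stale environment).
    if os.environ.get("LISTEN_PID") != str(os.getpid()):
        return []
    count = int(os.environ.get("LISTEN_FDS", "0"))
    # The sockets arrive already bound (and listening, for stream sockets).
    return [socket.socket(fileno=SD_LISTEN_FDS_START + i) for i in range(count)]
```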
Feb 8 23:14:56.801092 jq[1302]: false Feb 8 23:14:56.804504 jq[1319]: true Feb 8 23:14:56.806985 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 8 23:14:56.807260 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 8 23:14:56.811537 extend-filesystems[1303]: Found sda Feb 8 23:14:56.813632 extend-filesystems[1303]: Found sda1 Feb 8 23:14:56.813632 extend-filesystems[1303]: Found sda2 Feb 8 23:14:56.813632 extend-filesystems[1303]: Found sda3 Feb 8 23:14:56.813632 extend-filesystems[1303]: Found usr Feb 8 23:14:56.813632 extend-filesystems[1303]: Found sda4 Feb 8 23:14:56.813632 extend-filesystems[1303]: Found sda6 Feb 8 23:14:56.813632 extend-filesystems[1303]: Found sda7 Feb 8 23:14:56.813632 extend-filesystems[1303]: Found sda9 Feb 8 23:14:56.813632 extend-filesystems[1303]: Checking size of /dev/sda9 Feb 8 23:14:56.838250 jq[1332]: true Feb 8 23:14:56.815094 systemd[1]: motdgen.service: Deactivated successfully. Feb 8 23:14:56.815257 systemd[1]: Finished motdgen.service. Feb 8 23:14:56.866438 env[1331]: time="2024-02-08T23:14:56.866395200Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 8 23:14:56.911811 env[1331]: time="2024-02-08T23:14:56.911717400Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 8 23:14:56.912094 env[1331]: time="2024-02-08T23:14:56.912073400Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:14:56.916090 tar[1322]: ./ Feb 8 23:14:56.916090 tar[1322]: ./loopback Feb 8 23:14:56.917665 env[1331]: time="2024-02-08T23:14:56.917619900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:14:56.917788 env[1331]: time="2024-02-08T23:14:56.917769000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:14:56.918447 env[1331]: time="2024-02-08T23:14:56.918419500Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:14:56.918563 env[1331]: time="2024-02-08T23:14:56.918545300Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 8 23:14:56.918644 env[1331]: time="2024-02-08T23:14:56.918626100Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 8 23:14:56.918716 env[1331]: time="2024-02-08T23:14:56.918701000Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 8 23:14:56.918886 env[1331]: time="2024-02-08T23:14:56.918866100Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 8 23:14:56.920520 env[1331]: time="2024-02-08T23:14:56.920498500Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
type=io.containerd.snapshotter.v1 Feb 8 23:14:56.920995 tar[1324]: linux-amd64/helm Feb 8 23:14:56.925756 tar[1323]: crictl Feb 8 23:14:56.938791 extend-filesystems[1303]: Old size kept for /dev/sda9 Feb 8 23:14:56.944790 extend-filesystems[1303]: Found sr0 Feb 8 23:14:56.947592 systemd-logind[1316]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 8 23:14:56.948707 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 8 23:14:56.948898 systemd[1]: Finished extend-filesystems.service. Feb 8 23:14:56.951722 systemd-logind[1316]: New seat seat0. Feb 8 23:14:56.957306 env[1331]: time="2024-02-08T23:14:56.957264900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 8 23:14:56.959015 env[1331]: time="2024-02-08T23:14:56.958979600Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 8 23:14:56.959290 env[1331]: time="2024-02-08T23:14:56.959267700Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 8 23:14:56.959417 env[1331]: time="2024-02-08T23:14:56.959398300Z" level=info msg="metadata content store policy set" policy=shared Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.970688200Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.970731100Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.970750200Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.970806200Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.970827700Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.970861500Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.970879100Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.970897600Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.970915600Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.970934000Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.970967000Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.970984100Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.971102000Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 8 23:14:56.972918 env[1331]: time="2024-02-08T23:14:56.971200900Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971504500Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971540400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971559800Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971613200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971630800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971647500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971665400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971683400Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971698500Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971712000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971725700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971741700Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971878100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971897100Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.973455 env[1331]: time="2024-02-08T23:14:56.971914200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.973991 env[1331]: time="2024-02-08T23:14:56.971931900Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:14:56.973991 env[1331]: time="2024-02-08T23:14:56.971964500Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:14:56.973991 env[1331]: time="2024-02-08T23:14:56.971980000Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." 
type=io.containerd.internal.v1 Feb 8 23:14:56.973991 env[1331]: time="2024-02-08T23:14:56.972004500Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:14:56.973991 env[1331]: time="2024-02-08T23:14:56.972051800Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Feb 8 23:14:56.974176 env[1331]: time="2024-02-08T23:14:56.972314200Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:14:56.974176 env[1331]: time="2024-02-08T23:14:56.972387900Z" level=info msg="Connect containerd service" Feb 8 23:14:56.974176 env[1331]: time="2024-02-08T23:14:56.972437200Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:14:57.011138 env[1331]: time="2024-02-08T23:14:56.974627700Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:14:57.011138 env[1331]: time="2024-02-08T23:14:56.975553100Z" level=info msg="Start subscribing containerd event" Feb 8 23:14:57.011138 env[1331]: time="2024-02-08T23:14:56.975632300Z" level=info msg="Start recovering state" Feb 8 23:14:57.011138 env[1331]: time="2024-02-08T23:14:56.975700200Z" level=info msg="Start event monitor" Feb 8 23:14:57.011138 env[1331]: time="2024-02-08T23:14:56.975713400Z" level=info msg="Start snapshots syncer" Feb 8 
23:14:57.011138 env[1331]: time="2024-02-08T23:14:56.975727000Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:14:57.011138 env[1331]: time="2024-02-08T23:14:56.975738500Z" level=info msg="Start streaming server" Feb 8 23:14:57.011138 env[1331]: time="2024-02-08T23:14:56.976238400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:14:57.011138 env[1331]: time="2024-02-08T23:14:56.976339100Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 8 23:14:57.011138 env[1331]: time="2024-02-08T23:14:57.000410100Z" level=info msg="containerd successfully booted in 0.137504s" Feb 8 23:14:57.011494 bash[1351]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:14:56.976481 systemd[1]: Started containerd.service. Feb 8 23:14:56.994986 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:14:57.042287 systemd[1]: nvidia.service: Deactivated successfully. Feb 8 23:14:57.050894 dbus-daemon[1301]: [system] SELinux support is enabled Feb 8 23:14:57.051096 systemd[1]: Started dbus.service. Feb 8 23:14:57.055576 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:14:57.055614 systemd[1]: Reached target system-config.target. Feb 8 23:14:57.057788 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:14:57.057810 systemd[1]: Reached target user-config.target. Feb 8 23:14:57.060388 systemd[1]: Started systemd-logind.service. Feb 8 23:14:57.061670 dbus-daemon[1301]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 8 23:14:57.081089 tar[1322]: ./bandwidth Feb 8 23:14:57.204958 tar[1322]: ./ptp Feb 8 23:14:57.339071 tar[1322]: ./vlan Feb 8 23:14:57.440339 tar[1322]: ./host-device Feb 8 23:14:57.548063 tar[1322]: ./tuning Feb 8 23:14:57.594250 tar[1322]: ./vrf Feb 8 23:14:57.669799 tar[1322]: ./sbr Feb 8 23:14:57.752932 tar[1322]: ./tap Feb 8 23:14:57.825958 update_engine[1318]: I0208 23:14:57.825176 1318 main.cc:92] Flatcar Update Engine starting Feb 8 23:14:57.847153 tar[1322]: ./dhcp Feb 8 23:14:57.882687 systemd[1]: Started update-engine.service. Feb 8 23:14:57.884522 update_engine[1318]: I0208 23:14:57.884418 1318 update_check_scheduler.cc:74] Next update check in 4m23s Feb 8 23:14:57.887577 systemd[1]: Started locksmithd.service. Feb 8 23:14:58.078588 tar[1324]: linux-amd64/LICENSE Feb 8 23:14:58.078977 tar[1324]: linux-amd64/README.md Feb 8 23:14:58.087742 tar[1322]: ./static Feb 8 23:14:58.087788 systemd[1]: Finished prepare-helm.service. Feb 8 23:14:58.126517 tar[1322]: ./firewall Feb 8 23:14:58.127399 systemd[1]: Finished prepare-critools.service. Feb 8 23:14:58.178809 tar[1322]: ./macvlan Feb 8 23:14:58.223786 tar[1322]: ./dummy Feb 8 23:14:58.267525 tar[1322]: ./bridge Feb 8 23:14:58.315412 tar[1322]: ./ipvlan Feb 8 23:14:58.359759 tar[1322]: ./portmap Feb 8 23:14:58.401555 tar[1322]: ./host-local Feb 8 23:14:58.494894 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:14:59.129715 sshd_keygen[1325]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:14:59.149933 systemd[1]: Finished sshd-keygen.service. Feb 8 23:14:59.154160 systemd[1]: Starting issuegen.service... Feb 8 23:14:59.157528 systemd[1]: Started waagent.service. Feb 8 23:14:59.161119 systemd[1]: issuegen.service: Deactivated successfully. 
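The long "Start cri plugin with config {...}" dump above is containerd printing its effective CRI configuration. A few of those fields (the runc runtime with SystemdCgroup:true, the registry.k8s.io/pause:3.6 sandbox image, the /opt/cni/bin and /etc/cni/net.d CNI paths) map onto the config.toml keys shown in this hypothetical fragment; the actual file on this host is not in the log, so treat it as an illustration of the schema only (parsed with tomllib, stdlib since Python 3.11):

```python
import tomllib  # stdlib since Python 3.11

CONFIG_TOML = """
version = 2
[plugins."io.containerd.grpc.v1.cri"]
  sandbox_image = "registry.k8s.io/pause:3.6"
  [plugins."io.containerd.grpc.v1.cri".cni]
    bin_dir = "/opt/cni/bin"
    conf_dir = "/etc/cni/net.d"
  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
    runtime_type = "io.containerd.runc.v2"
    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
      SystemdCgroup = true
"""

cri = tomllib.loads(CONFIG_TOML)["plugins"]["io.containerd.grpc.v1.cri"]
assert cri["containerd"]["runtimes"]["runc"]["options"]["SystemdCgroup"] is True
assert cri["sandbox_image"] == "registry.k8s.io/pause:3.6"
```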
Feb 8 23:14:59.161310 systemd[1]: Finished issuegen.service. Feb 8 23:14:59.164859 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:14:59.172125 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:14:59.175904 systemd[1]: Started getty@tty1.service. Feb 8 23:14:59.179330 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:14:59.181620 systemd[1]: Reached target getty.target. Feb 8 23:14:59.183472 systemd[1]: Reached target multi-user.target. Feb 8 23:14:59.186758 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:14:59.195767 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:14:59.195961 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:14:59.198410 systemd[1]: Startup finished in 1.042s (firmware) + 29.573s (loader) + 868ms (kernel) + 1min 35.531s (initrd) + 26.157s (userspace) = 2min 33.173s. Feb 8 23:14:59.614131 login[1431]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:14:59.615428 login[1432]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:14:59.639028 systemd[1]: Created slice user-500.slice. Feb 8 23:14:59.641489 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:14:59.645115 systemd-logind[1316]: New session 2 of user core. Feb 8 23:14:59.648729 systemd-logind[1316]: New session 1 of user core. Feb 8 23:14:59.652520 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:14:59.654170 systemd[1]: Starting user@500.service... Feb 8 23:14:59.657508 (systemd)[1436]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:14:59.780648 locksmithd[1409]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:14:59.849880 systemd[1436]: Queued start job for default target default.target. Feb 8 23:14:59.850600 systemd[1436]: Reached target paths.target. Feb 8 23:14:59.850636 systemd[1436]: Reached target sockets.target. Feb 8 23:14:59.850658 systemd[1436]: Reached target timers.target. Feb 8 23:14:59.850677 systemd[1436]: Reached target basic.target. Feb 8 23:14:59.850746 systemd[1436]: Reached target default.target. Feb 8 23:14:59.850792 systemd[1436]: Startup finished in 186ms. Feb 8 23:14:59.850831 systemd[1]: Started user@500.service. Feb 8 23:14:59.852247 systemd[1]: Started session-1.scope. Feb 8 23:14:59.853067 systemd[1]: Started session-2.scope. Feb 8 23:15:01.591779 systemd-timesyncd[1276]: Timed out waiting for reply from 162.159.200.1:123 (0.flatcar.pool.ntp.org). Feb 8 23:15:01.614212 systemd-timesyncd[1276]: Contacted time server 77.68.25.145:123 (0.flatcar.pool.ntp.org). Feb 8 23:15:01.614459 systemd-timesyncd[1276]: Initial clock synchronization to Thu 2024-02-08 23:15:01.618867 UTC. 
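systemd-timesyncd above gives up on one pool server after a timeout and succeeds against another before stepping the clock. A minimal SNTP (RFC 4330) query, roughly what the daemon sends on UDP port 123; illustrative stdlib Python:

    # Minimal SNTP client query, similar in spirit to systemd-timesyncd's
    # probe of 0.flatcar.pool.ntp.org above. Illustrative only.
    import socket, struct, time

    NTP_EPOCH_DELTA = 2208988800  # seconds between 1900-01-01 and 1970-01-01

    def sntp_time(server, timeout=5.0):
        pkt = b"\x1b" + 47 * b"\0"              # LI=0, VN=3, Mode=3 (client)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(timeout)               # the timeout seen firing above
            s.sendto(pkt, (server, 123))
            data, _ = s.recvfrom(512)
        secs = struct.unpack("!I", data[40:44])[0]  # transmit timestamp, seconds
        return secs - NTP_EPOCH_DELTA

    print(time.ctime(sntp_time("0.flatcar.pool.ntp.org")))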
Feb 8 23:15:05.957925 waagent[1426]: 2024-02-08T23:15:05.957808Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 8 23:15:05.970892 waagent[1426]: 2024-02-08T23:15:05.960104Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 8 23:15:05.970892 waagent[1426]: 2024-02-08T23:15:05.961079Z INFO Daemon Daemon Python: 3.9.16 Feb 8 23:15:05.970892 waagent[1426]: 2024-02-08T23:15:05.962287Z INFO Daemon Daemon Run daemon Feb 8 23:15:05.970892 waagent[1426]: 2024-02-08T23:15:05.963240Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 8 23:15:05.974746 waagent[1426]: 2024-02-08T23:15:05.974628Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 8 23:15:05.982634 waagent[1426]: 2024-02-08T23:15:05.982524Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:15:05.987127 waagent[1426]: 2024-02-08T23:15:05.987062Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:15:06.026798 waagent[1426]: 2024-02-08T23:15:05.988047Z INFO Daemon Daemon Using waagent for provisioning Feb 8 23:15:06.026798 waagent[1426]: 2024-02-08T23:15:05.989393Z INFO Daemon Daemon Activate resource disk Feb 8 23:15:06.026798 waagent[1426]: 2024-02-08T23:15:05.990559Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 8 23:15:06.026798 waagent[1426]: 2024-02-08T23:15:05.998785Z INFO Daemon Daemon Found device: None Feb 8 23:15:06.026798 waagent[1426]: 2024-02-08T23:15:06.000009Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 8 23:15:06.026798 waagent[1426]: 2024-02-08T23:15:06.000736Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 8 23:15:06.026798 waagent[1426]: 2024-02-08T23:15:06.002363Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:15:06.026798 waagent[1426]: 2024-02-08T23:15:06.003206Z INFO Daemon Daemon Running default provisioning handler Feb 8 23:15:06.026798 waagent[1426]: 2024-02-08T23:15:06.012881Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 8 23:15:06.026798 waagent[1426]: 2024-02-08T23:15:06.016094Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:15:06.026798 waagent[1426]: 2024-02-08T23:15:06.017147Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:15:06.026798 waagent[1426]: 2024-02-08T23:15:06.017842Z INFO Daemon Daemon Copying ovf-env.xml Feb 8 23:15:06.107278 waagent[1426]: 2024-02-08T23:15:06.105703Z INFO Daemon Daemon Successfully mounted dvd Feb 8 23:15:06.226061 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 8 23:15:06.230674 waagent[1426]: 2024-02-08T23:15:06.230550Z INFO Daemon Daemon Detect protocol endpoint Feb 8 23:15:06.233586 waagent[1426]: 2024-02-08T23:15:06.233513Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:15:06.236333 waagent[1426]: 2024-02-08T23:15:06.236271Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 8 23:15:06.239337 waagent[1426]: 2024-02-08T23:15:06.239278Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 8 23:15:06.241955 waagent[1426]: 2024-02-08T23:15:06.241887Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 8 23:15:06.244355 waagent[1426]: 2024-02-08T23:15:06.244296Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 8 23:15:06.380296 waagent[1426]: 2024-02-08T23:15:06.380217Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 8 23:15:06.384010 waagent[1426]: 2024-02-08T23:15:06.383938Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 8 23:15:06.386957 waagent[1426]: 2024-02-08T23:15:06.386870Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 8 23:15:06.814972 waagent[1426]: 2024-02-08T23:15:06.814813Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 8 23:15:06.826691 waagent[1426]: 2024-02-08T23:15:06.826608Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 8 23:15:06.830990 waagent[1426]: 2024-02-08T23:15:06.830906Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 8 23:15:06.920935 waagent[1426]: 2024-02-08T23:15:06.920803Z INFO Daemon Daemon Found private key matching thumbprint 0253D575FAF0B6170285ED3B700DFEE372FBE2CF Feb 8 23:15:06.925700 waagent[1426]: 2024-02-08T23:15:06.925620Z INFO Daemon Daemon Certificate with thumbprint CFA693E162E815DB8619DF401E741ADE386F7E06 has no matching private key. Feb 8 23:15:06.930217 waagent[1426]: 2024-02-08T23:15:06.930151Z INFO Daemon Daemon Fetch goal state completed Feb 8 23:15:06.984306 waagent[1426]: 2024-02-08T23:15:06.984218Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: db25285e-4a00-4d57-a3f2-3ac612fb193f New eTag: 18136309256200228815] Feb 8 23:15:06.990492 waagent[1426]: 2024-02-08T23:15:06.990411Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:15:07.003093 waagent[1426]: 2024-02-08T23:15:07.003028Z INFO Daemon Daemon Starting provisioning Feb 8 23:15:07.005448 waagent[1426]: 2024-02-08T23:15:07.005380Z INFO Daemon Daemon Handle ovf-env.xml. Feb 8 23:15:07.007666 waagent[1426]: 2024-02-08T23:15:07.007607Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-56a09d6613] Feb 8 23:15:07.032000 waagent[1426]: 2024-02-08T23:15:07.031854Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-56a09d6613] Feb 8 23:15:07.035440 waagent[1426]: 2024-02-08T23:15:07.035345Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 8 23:15:07.038759 waagent[1426]: 2024-02-08T23:15:07.038675Z INFO Daemon Daemon Primary interface is [eth0] Feb 8 23:15:07.053146 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 8 23:15:07.053415 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 8 23:15:07.053490 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 8 23:15:07.053856 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:15:07.057988 systemd-networkd[1182]: eth0: DHCPv6 lease lost Feb 8 23:15:07.231463 waagent[1426]: 2024-02-08T23:15:07.098557Z INFO Daemon Daemon Create user account if not exists Feb 8 23:15:07.059305 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:15:07.059459 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:15:07.061749 systemd[1]: Starting systemd-networkd.service... 
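The daemon's "Test for route to 168.63.129.16" consults /proc/net/route, where addresses are little-endian hex: the wireserver 168.63.129.16 appears as 10813FA8, exactly as the routing-table dumps later in this log show. A sketch of that test under the same assumptions:

    # Sketch: check that a route to the Azure wireserver exists, using the
    # same /proc/net/route format dumped later in this log.
    import socket

    WIRESERVER = "168.63.129.16"

    def hex_le(ip):
        # /proc/net/route stores addresses little-endian, so 168.63.129.16
        # shows up as 10813FA8.
        return socket.inet_aton(ip)[::-1].hex().upper()

    def route_exists(ip=WIRESERVER):
        want = hex_le(ip)
        with open("/proc/net/route") as f:
            next(f)                       # header: Iface Destination Gateway ...
            for line in f:
                dest = line.split()[1]
                if dest in (want, "00000000"):   # host route or default route
                    return True
        return False

    print(route_exists())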
Feb 8 23:15:07.092777 systemd-networkd[1484]: enP36896s1: Link UP Feb 8 23:15:07.092781 systemd-networkd[1484]: enP36896s1: Gained carrier Feb 8 23:15:07.094042 systemd-networkd[1484]: eth0: Link UP Feb 8 23:15:07.094047 systemd-networkd[1484]: eth0: Gained carrier Feb 8 23:15:07.094425 systemd-networkd[1484]: lo: Link UP Feb 8 23:15:07.094429 systemd-networkd[1484]: lo: Gained carrier Feb 8 23:15:07.094686 systemd-networkd[1484]: eth0: Gained IPv6LL Feb 8 23:15:07.095288 systemd-networkd[1484]: Enumeration completed Feb 8 23:15:07.095402 systemd[1]: Started systemd-networkd.service. Feb 8 23:15:07.097785 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:15:07.100371 systemd-networkd[1484]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:15:07.137026 systemd-networkd[1484]: eth0: DHCPv4 address 10.200.8.40/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:15:07.141220 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 8 23:15:07.234204 waagent[1426]: 2024-02-08T23:15:07.234092Z INFO Daemon Daemon User core already exists, skip useradd Feb 8 23:15:07.237139 waagent[1426]: 2024-02-08T23:15:07.237064Z INFO Daemon Daemon Configure sudoer Feb 8 23:15:07.337983 waagent[1426]: 2024-02-08T23:15:07.337801Z INFO Daemon Daemon Configure sshd Feb 8 23:15:07.340660 waagent[1426]: 2024-02-08T23:15:07.340568Z INFO Daemon Daemon Deploy ssh public key. Feb 8 23:15:08.626867 waagent[1426]: 2024-02-08T23:15:08.626772Z INFO Daemon Daemon Provisioning complete Feb 8 23:15:08.643418 waagent[1426]: 2024-02-08T23:15:08.643336Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 8 23:15:08.650597 waagent[1426]: 2024-02-08T23:15:08.644646Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 8 23:15:08.650597 waagent[1426]: 2024-02-08T23:15:08.646277Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 8 23:15:08.911997 waagent[1493]: 2024-02-08T23:15:08.911813Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 8 23:15:08.912708 waagent[1493]: 2024-02-08T23:15:08.912640Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:15:08.912855 waagent[1493]: 2024-02-08T23:15:08.912800Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:15:08.923746 waagent[1493]: 2024-02-08T23:15:08.923670Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 8 23:15:08.923915 waagent[1493]: 2024-02-08T23:15:08.923858Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 8 23:15:08.984491 waagent[1493]: 2024-02-08T23:15:08.984362Z INFO ExtHandler ExtHandler Found private key matching thumbprint 0253D575FAF0B6170285ED3B700DFEE372FBE2CF Feb 8 23:15:08.984725 waagent[1493]: 2024-02-08T23:15:08.984658Z INFO ExtHandler ExtHandler Certificate with thumbprint CFA693E162E815DB8619DF401E741ADE386F7E06 has no matching private key. 
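With DHCP answered by 168.63.129.16 (which doubles as the platform's DHCP server), the agent can fetch its goal state from the wireserver. A hedged sketch of that request: the x-ms-version header carries the "Wire protocol version:2012-11-30" negotiated above, and the exact URL layout should be treated as illustrative rather than authoritative:

    # Sketch: fetch the WireServer goal state the way the agent's protocol
    # layer does. Endpoint and header values come from this log; the URL
    # path is an assumption based on the wire protocol, not from the log.
    import urllib.request

    WIRESERVER = "168.63.129.16"

    req = urllib.request.Request(
        "http://%s/machine/?comp=goalstate" % WIRESERVER,
        headers={"x-ms-version": "2012-11-30"},  # version negotiated above
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        xml = resp.read().decode()
    print(xml[:200])   # XML carrying the incarnation, certificates, extensions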
Feb 8 23:15:08.984977 waagent[1493]: 2024-02-08T23:15:08.984912Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 8 23:15:08.998563 waagent[1493]: 2024-02-08T23:15:08.998499Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 84343443-54bd-4eed-b651-bbab81a63355 New eTag: 18136309256200228815] Feb 8 23:15:08.999159 waagent[1493]: 2024-02-08T23:15:08.999100Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:15:09.085145 waagent[1493]: 2024-02-08T23:15:09.084980Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:15:09.112817 waagent[1493]: 2024-02-08T23:15:09.112705Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1493 Feb 8 23:15:09.116527 waagent[1493]: 2024-02-08T23:15:09.116459Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:15:09.117760 waagent[1493]: 2024-02-08T23:15:09.117699Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:15:09.226954 waagent[1493]: 2024-02-08T23:15:09.226814Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:15:09.227324 waagent[1493]: 2024-02-08T23:15:09.227254Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:15:09.235272 waagent[1493]: 2024-02-08T23:15:09.235213Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 8 23:15:09.235744 waagent[1493]: 2024-02-08T23:15:09.235686Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:15:09.236824 waagent[1493]: 2024-02-08T23:15:09.236760Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 8 23:15:09.238130 waagent[1493]: 2024-02-08T23:15:09.238069Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:15:09.238733 waagent[1493]: 2024-02-08T23:15:09.238663Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:15:09.239132 waagent[1493]: 2024-02-08T23:15:09.239076Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:15:09.239614 waagent[1493]: 2024-02-08T23:15:09.239554Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:15:09.239822 waagent[1493]: 2024-02-08T23:15:09.239770Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:15:09.240087 waagent[1493]: 2024-02-08T23:15:09.240036Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:15:09.240226 waagent[1493]: 2024-02-08T23:15:09.240173Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:15:09.241030 waagent[1493]: 2024-02-08T23:15:09.240932Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:15:09.241177 waagent[1493]: 2024-02-08T23:15:09.241125Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:15:09.241432 waagent[1493]: 2024-02-08T23:15:09.241378Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. 
This indicates how often the agent checks for new goal states and reports status. Feb 8 23:15:09.241841 waagent[1493]: 2024-02-08T23:15:09.241785Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:15:09.242472 waagent[1493]: 2024-02-08T23:15:09.242416Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 8 23:15:09.242612 waagent[1493]: 2024-02-08T23:15:09.242542Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:15:09.242920 waagent[1493]: 2024-02-08T23:15:09.242854Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:15:09.244631 waagent[1493]: 2024-02-08T23:15:09.244573Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:15:09.244631 waagent[1493]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:15:09.244631 waagent[1493]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:15:09.244631 waagent[1493]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:15:09.244631 waagent[1493]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:15:09.244631 waagent[1493]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:15:09.244631 waagent[1493]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:15:09.245153 waagent[1493]: 2024-02-08T23:15:09.245056Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:15:09.259261 waagent[1493]: 2024-02-08T23:15:09.259174Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 8 23:15:09.260974 waagent[1493]: 2024-02-08T23:15:09.260904Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:15:09.263182 waagent[1493]: 2024-02-08T23:15:09.263119Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 8 23:15:09.306805 waagent[1493]: 2024-02-08T23:15:09.306710Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
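The routing table the MonitorHandler prints is raw /proc/net/route content, so the destination and gateway columns are little-endian hex. Decoding the rows dumped above:

    # Decode the /proc/net/route rows printed by the MonitorHandler above.
    import socket

    def de_hex(h):
        # Reverse the little-endian bytes back into network order.
        return socket.inet_ntoa(bytes.fromhex(h)[::-1])

    for row in ["eth0 00000000 0108C80A",    # default via 10.200.8.1
                "eth0 0008C80A 00000000",    # 10.200.8.0/24 on-link
                "eth0 10813FA8 0108C80A"]:   # wireserver 168.63.129.16
        iface, dest, gw = row.split()
        print(iface, de_hex(dest), "via", de_hex(gw))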
Feb 8 23:15:09.325753 waagent[1493]: 2024-02-08T23:15:09.325637Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1484' Feb 8 23:15:09.433412 waagent[1493]: 2024-02-08T23:15:09.433332Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:15:09.433412 waagent[1493]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:15:09.433412 waagent[1493]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:15:09.433412 waagent[1493]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:af:5b brd ff:ff:ff:ff:ff:ff Feb 8 23:15:09.433412 waagent[1493]: 3: enP36896s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:af:5b brd ff:ff:ff:ff:ff:ff\ altname enP36896p0s2 Feb 8 23:15:09.433412 waagent[1493]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:15:09.433412 waagent[1493]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:15:09.433412 waagent[1493]: 2: eth0 inet 10.200.8.40/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:15:09.433412 waagent[1493]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:15:09.433412 waagent[1493]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:15:09.433412 waagent[1493]: 2: eth0 inet6 fe80::20d:3aff:feb9:af5b/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:15:09.601785 waagent[1493]: 2024-02-08T23:15:09.601718Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 8 23:15:09.650065 waagent[1426]: 2024-02-08T23:15:09.649925Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 8 23:15:09.655074 waagent[1426]: 2024-02-08T23:15:09.655015Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 8 23:15:10.671430 waagent[1524]: 2024-02-08T23:15:10.671310Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 8 23:15:10.672167 waagent[1524]: 2024-02-08T23:15:10.672093Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 8 23:15:10.672316 waagent[1524]: 2024-02-08T23:15:10.672259Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 8 23:15:10.681863 waagent[1524]: 2024-02-08T23:15:10.681753Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:15:10.682264 waagent[1524]: 2024-02-08T23:15:10.682204Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:15:10.682431 waagent[1524]: 2024-02-08T23:15:10.682380Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:15:10.693699 waagent[1524]: 2024-02-08T23:15:10.693626Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 8 23:15:10.702250 waagent[1524]: 2024-02-08T23:15:10.702189Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 8 23:15:10.703167 waagent[1524]: 2024-02-08T23:15:10.703108Z INFO ExtHandler Feb 8 23:15:10.703319 waagent[1524]: 2024-02-08T23:15:10.703269Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 38194836-3576-4366-8f08-b3eda8fe94dd 
eTag: 18136309256200228815 source: Fabric] Feb 8 23:15:10.704037 waagent[1524]: 2024-02-08T23:15:10.703981Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 8 23:15:10.705116 waagent[1524]: 2024-02-08T23:15:10.705056Z INFO ExtHandler Feb 8 23:15:10.705251 waagent[1524]: 2024-02-08T23:15:10.705200Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 8 23:15:10.711750 waagent[1524]: 2024-02-08T23:15:10.711698Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 8 23:15:10.712199 waagent[1524]: 2024-02-08T23:15:10.712147Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:15:10.732162 waagent[1524]: 2024-02-08T23:15:10.732103Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 8 23:15:10.797343 waagent[1524]: 2024-02-08T23:15:10.797211Z INFO ExtHandler Downloaded certificate {'thumbprint': 'CFA693E162E815DB8619DF401E741ADE386F7E06', 'hasPrivateKey': False} Feb 8 23:15:10.798352 waagent[1524]: 2024-02-08T23:15:10.798282Z INFO ExtHandler Downloaded certificate {'thumbprint': '0253D575FAF0B6170285ED3B700DFEE372FBE2CF', 'hasPrivateKey': True} Feb 8 23:15:10.799346 waagent[1524]: 2024-02-08T23:15:10.799285Z INFO ExtHandler Fetch goal state completed Feb 8 23:15:10.821782 waagent[1524]: 2024-02-08T23:15:10.821705Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1524 Feb 8 23:15:10.825040 waagent[1524]: 2024-02-08T23:15:10.824972Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:15:10.826474 waagent[1524]: 2024-02-08T23:15:10.826417Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:15:10.831485 waagent[1524]: 2024-02-08T23:15:10.831428Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:15:10.831849 waagent[1524]: 2024-02-08T23:15:10.831792Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:15:10.840003 waagent[1524]: 2024-02-08T23:15:10.839933Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 8 23:15:10.840471 waagent[1524]: 2024-02-08T23:15:10.840413Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:15:10.846389 waagent[1524]: 2024-02-08T23:15:10.846287Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 8 23:15:10.851002 waagent[1524]: 2024-02-08T23:15:10.850928Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 8 23:15:10.852389 waagent[1524]: 2024-02-08T23:15:10.852329Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:15:10.852826 waagent[1524]: 2024-02-08T23:15:10.852769Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:15:10.853004 waagent[1524]: 2024-02-08T23:15:10.852937Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:15:10.853530 waagent[1524]: 2024-02-08T23:15:10.853471Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
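The earlier EnvHandler error "invalid literal for int() with base 10: 'MainPID=1484'" is a parse slip: systemctl show -p MainPID prints the key=value pair, not a bare integer, and PID 1484 is systemd-networkd itself (matching the systemd-networkd[1484] messages above). A tolerant version of that lookup; the unit name here is an assumption:

    # Sketch: obtain the DHCP client's PID as the agent tries to, but parse
    # systemctl's "MainPID=1484" form instead of expecting a bare integer.
    import subprocess

    def dhcp_client_pid(unit="systemd-networkd.service"):   # assumed unit
        out = subprocess.check_output(
            ["systemctl", "show", "-p", "MainPID", unit], text=True).strip()
        # systemctl prints "MainPID=1484"; split off the key before int().
        return int(out.partition("=")[2])

    print(dhcp_client_pid())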
Feb 8 23:15:10.853803 waagent[1524]: 2024-02-08T23:15:10.853749Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:15:10.853803 waagent[1524]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:15:10.853803 waagent[1524]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:15:10.853803 waagent[1524]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:15:10.853803 waagent[1524]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:15:10.853803 waagent[1524]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:15:10.853803 waagent[1524]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:15:10.856085 waagent[1524]: 2024-02-08T23:15:10.855937Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:15:10.856828 waagent[1524]: 2024-02-08T23:15:10.856765Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:15:10.857151 waagent[1524]: 2024-02-08T23:15:10.857093Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:15:10.857357 waagent[1524]: 2024-02-08T23:15:10.857292Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:15:10.860394 waagent[1524]: 2024-02-08T23:15:10.860147Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:15:10.861145 waagent[1524]: 2024-02-08T23:15:10.861071Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:15:10.861494 waagent[1524]: 2024-02-08T23:15:10.861431Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:15:10.861660 waagent[1524]: 2024-02-08T23:15:10.861602Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:15:10.864771 waagent[1524]: 2024-02-08T23:15:10.864640Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:15:10.865072 waagent[1524]: 2024-02-08T23:15:10.865007Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:15:10.869564 waagent[1524]: 2024-02-08T23:15:10.869484Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
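Both agent generations fail with "[Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'": on Flatcar, /lib/systemd/system sits inside the read-only /usr image, so locally added units belong under /etc/systemd/system. A minimal sketch of picking a writable unit directory first:

    # Sketch: choose a writable systemd unit directory before installing a
    # service, instead of assuming /lib/systemd/system (read-only here).
    import os

    CANDIDATES = ["/etc/systemd/system", "/lib/systemd/system"]

    def writable_unit_dir():
        for d in CANDIDATES:
            if os.path.isdir(d) and os.access(d, os.W_OK):
                return d
        raise OSError("no writable systemd unit directory found")

    print(writable_unit_dir())   # "/etc/systemd/system" on this image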
Feb 8 23:15:10.880772 waagent[1524]: 2024-02-08T23:15:10.880706Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:15:10.880772 waagent[1524]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:15:10.880772 waagent[1524]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:15:10.880772 waagent[1524]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:af:5b brd ff:ff:ff:ff:ff:ff Feb 8 23:15:10.880772 waagent[1524]: 3: enP36896s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:b9:af:5b brd ff:ff:ff:ff:ff:ff\ altname enP36896p0s2 Feb 8 23:15:10.880772 waagent[1524]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:15:10.880772 waagent[1524]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:15:10.880772 waagent[1524]: 2: eth0 inet 10.200.8.40/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:15:10.880772 waagent[1524]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:15:10.880772 waagent[1524]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:15:10.880772 waagent[1524]: 2: eth0 inet6 fe80::20d:3aff:feb9:af5b/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:15:10.886721 waagent[1524]: 2024-02-08T23:15:10.886639Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 8 23:15:10.890557 waagent[1524]: 2024-02-08T23:15:10.890444Z INFO ExtHandler ExtHandler Downloading manifest Feb 8 23:15:10.958351 waagent[1524]: 2024-02-08T23:15:10.958232Z INFO ExtHandler ExtHandler Feb 8 23:15:10.958494 waagent[1524]: 2024-02-08T23:15:10.958404Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 55c7c481-fede-458e-a6ba-534d7b530738 correlation ae373f88-d68c-4cd8-a10f-618f9b9de740 created: 2024-02-08T23:12:16.491470Z] Feb 8 23:15:10.959350 waagent[1524]: 2024-02-08T23:15:10.959284Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 8 23:15:10.961108 waagent[1524]: 2024-02-08T23:15:10.961050Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Feb 8 23:15:10.981921 waagent[1524]: 2024-02-08T23:15:10.981849Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 8 23:15:10.994304 waagent[1524]: 2024-02-08T23:15:10.994212Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 233C63C7-0317-4579-8075-3C02F9AAC39D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 8 23:15:11.049071 waagent[1524]: 2024-02-08T23:15:11.048926Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. 
Current Firewall rules: Feb 8 23:15:11.049071 waagent[1524]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:15:11.049071 waagent[1524]: pkts bytes target prot opt in out source destination Feb 8 23:15:11.049071 waagent[1524]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:15:11.049071 waagent[1524]: pkts bytes target prot opt in out source destination Feb 8 23:15:11.049071 waagent[1524]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:15:11.049071 waagent[1524]: pkts bytes target prot opt in out source destination Feb 8 23:15:11.049071 waagent[1524]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:15:11.049071 waagent[1524]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:15:11.049071 waagent[1524]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:15:11.056292 waagent[1524]: 2024-02-08T23:15:11.056178Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 8 23:15:11.056292 waagent[1524]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:15:11.056292 waagent[1524]: pkts bytes target prot opt in out source destination Feb 8 23:15:11.056292 waagent[1524]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:15:11.056292 waagent[1524]: pkts bytes target prot opt in out source destination Feb 8 23:15:11.056292 waagent[1524]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 8 23:15:11.056292 waagent[1524]: pkts bytes target prot opt in out source destination Feb 8 23:15:11.056292 waagent[1524]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 8 23:15:11.056292 waagent[1524]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 8 23:15:11.056292 waagent[1524]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 8 23:15:11.056851 waagent[1524]: 2024-02-08T23:15:11.056794Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 8 23:15:35.602982 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 8 23:15:42.665181 update_engine[1318]: I0208 23:15:42.665117 1318 update_attempter.cc:509] Updating boot flags... Feb 8 23:15:54.235169 systemd[1]: Created slice system-sshd.slice. Feb 8 23:15:54.236998 systemd[1]: Started sshd@0-10.200.8.40:22-10.200.12.6:48214.service. Feb 8 23:15:55.080598 sshd[1641]: Accepted publickey for core from 10.200.12.6 port 48214 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:15:55.082325 sshd[1641]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:15:55.086980 systemd-logind[1316]: New session 3 of user core. Feb 8 23:15:55.087905 systemd[1]: Started session-3.scope. Feb 8 23:15:55.617659 systemd[1]: Started sshd@1-10.200.8.40:22-10.200.12.6:48218.service. Feb 8 23:15:56.238287 sshd[1646]: Accepted publickey for core from 10.200.12.6 port 48218 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:15:56.239964 sshd[1646]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:15:56.245557 systemd-logind[1316]: New session 4 of user core. Feb 8 23:15:56.246138 systemd[1]: Started session-4.scope. Feb 8 23:15:56.677698 sshd[1646]: pam_unix(sshd:session): session closed for user core Feb 8 23:15:56.681027 systemd[1]: sshd@1-10.200.8.40:22-10.200.12.6:48218.service: Deactivated successfully. Feb 8 23:15:56.682047 systemd[1]: session-4.scope: Deactivated successfully. Feb 8 23:15:56.682821 systemd-logind[1316]: Session 4 logged out. Waiting for processes to exit. 
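The firewall rules dumped above restrict wireserver traffic to what the platform needs: TCP/53 to 168.63.129.16 is allowed, root-owned (UID 0) connections to it are allowed, and any other new connection to that address is dropped. Roughly the equivalent iptables invocations; table and chain placement is an assumption, while the match options mirror the dump:

    # Roughly the rules shown above, as the agent might issue them.
    # Illustrative: real agent versions differ in table/ordering details.
    import subprocess

    WS = "168.63.129.16"
    RULES = [
        ["-A", "OUTPUT", "-d", WS, "-p", "tcp", "--dport", "53",
         "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WS, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WS, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in RULES:
        subprocess.check_call(["iptables", "-w"] + rule)

The adjacent "Set block dev timeout: sda with timeout: 300" corresponds to writing 300 into /sys/block/sda/device/timeout.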
Feb 8 23:15:56.683740 systemd-logind[1316]: Removed session 4. Feb 8 23:15:56.781175 systemd[1]: Started sshd@2-10.200.8.40:22-10.200.12.6:48228.service. Feb 8 23:15:57.398224 sshd[1652]: Accepted publickey for core from 10.200.12.6 port 48228 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:15:57.399879 sshd[1652]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:15:57.405064 systemd[1]: Started session-5.scope. Feb 8 23:15:57.405722 systemd-logind[1316]: New session 5 of user core. Feb 8 23:15:57.830818 sshd[1652]: pam_unix(sshd:session): session closed for user core Feb 8 23:15:57.834312 systemd[1]: sshd@2-10.200.8.40:22-10.200.12.6:48228.service: Deactivated successfully. Feb 8 23:15:57.835163 systemd[1]: session-5.scope: Deactivated successfully. Feb 8 23:15:57.835752 systemd-logind[1316]: Session 5 logged out. Waiting for processes to exit. Feb 8 23:15:57.836511 systemd-logind[1316]: Removed session 5. Feb 8 23:15:57.936006 systemd[1]: Started sshd@3-10.200.8.40:22-10.200.12.6:54890.service. Feb 8 23:15:58.555546 sshd[1658]: Accepted publickey for core from 10.200.12.6 port 54890 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:15:58.559491 sshd[1658]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:15:58.564204 systemd[1]: Started session-6.scope. Feb 8 23:15:58.564643 systemd-logind[1316]: New session 6 of user core. Feb 8 23:15:58.995562 sshd[1658]: pam_unix(sshd:session): session closed for user core Feb 8 23:15:58.998612 systemd[1]: sshd@3-10.200.8.40:22-10.200.12.6:54890.service: Deactivated successfully. Feb 8 23:15:58.999451 systemd[1]: session-6.scope: Deactivated successfully. Feb 8 23:15:59.000112 systemd-logind[1316]: Session 6 logged out. Waiting for processes to exit. Feb 8 23:15:59.000849 systemd-logind[1316]: Removed session 6. Feb 8 23:15:59.100986 systemd[1]: Started sshd@4-10.200.8.40:22-10.200.12.6:54894.service. Feb 8 23:15:59.734439 sshd[1664]: Accepted publickey for core from 10.200.12.6 port 54894 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:15:59.736146 sshd[1664]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:15:59.741818 systemd[1]: Started session-7.scope. Feb 8 23:15:59.742582 systemd-logind[1316]: New session 7 of user core. Feb 8 23:16:00.386625 sudo[1667]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 8 23:16:00.386996 sudo[1667]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 8 23:16:01.465323 systemd[1]: Starting docker.service... 
Feb 8 23:16:01.521241 env[1682]: time="2024-02-08T23:16:01.521176794Z" level=info msg="Starting up" Feb 8 23:16:01.522580 env[1682]: time="2024-02-08T23:16:01.522555303Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:16:01.522710 env[1682]: time="2024-02-08T23:16:01.522697404Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:16:01.522774 env[1682]: time="2024-02-08T23:16:01.522762105Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:16:01.522816 env[1682]: time="2024-02-08T23:16:01.522808005Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:16:01.524683 env[1682]: time="2024-02-08T23:16:01.524654317Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 8 23:16:01.524771 env[1682]: time="2024-02-08T23:16:01.524761318Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 8 23:16:01.524825 env[1682]: time="2024-02-08T23:16:01.524814318Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 8 23:16:01.524869 env[1682]: time="2024-02-08T23:16:01.524861019Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 8 23:16:01.617625 env[1682]: time="2024-02-08T23:16:01.617573245Z" level=info msg="Loading containers: start." Feb 8 23:16:01.735968 kernel: Initializing XFRM netlink socket Feb 8 23:16:01.761819 env[1682]: time="2024-02-08T23:16:01.761777919Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 8 23:16:01.877600 systemd-networkd[1484]: docker0: Link UP Feb 8 23:16:01.893801 env[1682]: time="2024-02-08T23:16:01.893760810Z" level=info msg="Loading containers: done." Feb 8 23:16:01.904967 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck2672365957-merged.mount: Deactivated successfully. Feb 8 23:16:01.908418 env[1682]: time="2024-02-08T23:16:01.908380009Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 8 23:16:01.908605 env[1682]: time="2024-02-08T23:16:01.908580311Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 8 23:16:01.908708 env[1682]: time="2024-02-08T23:16:01.908688511Z" level=info msg="Daemon has completed initialization" Feb 8 23:16:01.935609 systemd[1]: Started docker.service. Feb 8 23:16:01.945095 env[1682]: time="2024-02-08T23:16:01.945044357Z" level=info msg="API listen on /run/docker.sock" Feb 8 23:16:01.961927 systemd[1]: Reloading. Feb 8 23:16:02.037068 /usr/lib/systemd/system-generators/torcx-generator[1811]: time="2024-02-08T23:16:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:16:02.037109 /usr/lib/systemd/system-generators/torcx-generator[1811]: time="2024-02-08T23:16:02Z" level=info msg="torcx already run" Feb 8 23:16:02.136088 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. 
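Once dockerd logs "API listen on /run/docker.sock", the daemon answers plain HTTP over that Unix socket, and /_ping is the cheapest liveness check. A raw-socket sketch:

    # Probe the docker API socket logged above with a raw HTTP request;
    # /_ping returns "OK" when the daemon is up.
    import socket

    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect("/run/docker.sock")
    s.sendall(b"GET /_ping HTTP/1.0\r\nHost: docker\r\n\r\n")
    reply = s.recv(4096).decode()
    s.close()
    print(reply.splitlines()[0])            # e.g. "HTTP/1.0 200 OK"
    print(reply.rsplit("\r\n\r\n", 1)[-1])  # body: "OK"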
Feb 8 23:16:02.136107 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:16:02.152168 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:16:02.239837 systemd[1]: Started kubelet.service. Feb 8 23:16:02.311203 kubelet[1873]: E0208 23:16:02.311082 1873 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 8 23:16:02.312978 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:16:02.313137 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:16:06.502225 env[1331]: time="2024-02-08T23:16:06.502159597Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 8 23:16:07.336494 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4188310337.mount: Deactivated successfully. Feb 8 23:16:09.550038 env[1331]: time="2024-02-08T23:16:09.549968061Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:09.556127 env[1331]: time="2024-02-08T23:16:09.556063858Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:09.560622 env[1331]: time="2024-02-08T23:16:09.560585430Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:09.564598 env[1331]: time="2024-02-08T23:16:09.564561693Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:09.565167 env[1331]: time="2024-02-08T23:16:09.565131502Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\"" Feb 8 23:16:09.575224 env[1331]: time="2024-02-08T23:16:09.575182862Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 8 23:16:11.960718 env[1331]: time="2024-02-08T23:16:11.960657754Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:11.972253 env[1331]: time="2024-02-08T23:16:11.972203227Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:11.978372 env[1331]: time="2024-02-08T23:16:11.978329320Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 8 23:16:11.982043 env[1331]: time="2024-02-08T23:16:11.982005175Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:11.982632 env[1331]: time="2024-02-08T23:16:11.982599084Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\"" Feb 8 23:16:11.995664 env[1331]: time="2024-02-08T23:16:11.995627180Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 8 23:16:12.342615 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 8 23:16:12.342930 systemd[1]: Stopped kubelet.service. Feb 8 23:16:12.345085 systemd[1]: Started kubelet.service. Feb 8 23:16:12.397103 kubelet[1901]: E0208 23:16:12.397047 1901 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 8 23:16:12.400107 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:16:12.400266 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:16:13.713168 env[1331]: time="2024-02-08T23:16:13.713108043Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:13.719298 env[1331]: time="2024-02-08T23:16:13.719253730Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:13.724425 env[1331]: time="2024-02-08T23:16:13.724383103Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:13.729537 env[1331]: time="2024-02-08T23:16:13.729498276Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:13.730111 env[1331]: time="2024-02-08T23:16:13.730077084Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\"" Feb 8 23:16:13.739784 env[1331]: time="2024-02-08T23:16:13.739748222Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 8 23:16:14.774623 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount793298047.mount: Deactivated successfully. 
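The kubelet crash loop above ("failed to read kubelet config file /var/lib/kubelet/config.yaml ... no such file or directory") is the expected state before kubeadm init or join writes that file; the unit's restart policy keeps retrying roughly every ten seconds in the meantime. For orientation only, a minimal KubeletConfiguration written as JSON (a YAML subset the kubelet parses), with field values matching this log; in practice kubeadm generates this file:

    # Illustrative only: kubeadm normally writes /var/lib/kubelet/config.yaml
    # during init/join. JSON is a YAML subset, so this parses as-is.
    import json

    config = {
        "apiVersion": "kubelet.config.k8s.io/v1beta1",
        "kind": "KubeletConfiguration",
        "cgroupDriver": "systemd",            # matches SystemdCgroup:true above
        "staticPodPath": "/etc/kubernetes/manifests",
    }
    with open("/var/lib/kubelet/config.yaml", "w") as f:
        json.dump(config, f, indent=2)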
Feb 8 23:16:15.315016 env[1331]: time="2024-02-08T23:16:15.314961839Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:15.320469 env[1331]: time="2024-02-08T23:16:15.320422912Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:15.324029 env[1331]: time="2024-02-08T23:16:15.323989160Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:15.326487 env[1331]: time="2024-02-08T23:16:15.326447494Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:15.326902 env[1331]: time="2024-02-08T23:16:15.326869099Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 8 23:16:15.337153 env[1331]: time="2024-02-08T23:16:15.337119937Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 8 23:16:15.783174 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3387238823.mount: Deactivated successfully. Feb 8 23:16:15.803953 env[1331]: time="2024-02-08T23:16:15.803892732Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:15.812512 env[1331]: time="2024-02-08T23:16:15.812467147Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:15.816013 env[1331]: time="2024-02-08T23:16:15.815974595Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:15.822013 env[1331]: time="2024-02-08T23:16:15.821975176Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:15.822501 env[1331]: time="2024-02-08T23:16:15.822471482Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 8 23:16:15.832315 env[1331]: time="2024-02-08T23:16:15.832284815Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 8 23:16:16.548763 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1880449044.mount: Deactivated successfully. 
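Each pull above emits events for the three names containerd records for one image: the tag, the image ID (the sha256 of the config blob, which is the "returns image reference" value), and the registry digest (repo@sha256:...). A hypothetical helper that tells the three forms apart:

    # Hypothetical helper: classify the three reference forms seen in the
    # pull events above. A sketch; registries with ports would need more care.
    def classify(ref):
        if ref.startswith("sha256:"):
            return ("image-id", ref)
        if "@sha256:" in ref:
            repo, _, digest = ref.partition("@")
            return ("digest", repo, digest)
        repo, _, tag = ref.rpartition(":")
        return ("tag", repo, tag)

    for r in ["registry.k8s.io/pause:3.9",
              "sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
              "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097"]:
        print(classify(r))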
Feb 8 23:16:20.651790 env[1331]: time="2024-02-08T23:16:20.651734297Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:20.659700 env[1331]: time="2024-02-08T23:16:20.659654090Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:20.663772 env[1331]: time="2024-02-08T23:16:20.663734938Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:20.669027 env[1331]: time="2024-02-08T23:16:20.668994800Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:20.669616 env[1331]: time="2024-02-08T23:16:20.669583707Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Feb 8 23:16:20.679425 env[1331]: time="2024-02-08T23:16:20.679389622Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 8 23:16:21.162679 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount323301202.mount: Deactivated successfully. Feb 8 23:16:21.902921 env[1331]: time="2024-02-08T23:16:21.902865666Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:21.912509 env[1331]: time="2024-02-08T23:16:21.912462677Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:21.917660 env[1331]: time="2024-02-08T23:16:21.917620536Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:21.923647 env[1331]: time="2024-02-08T23:16:21.923609205Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:21.924184 env[1331]: time="2024-02-08T23:16:21.924146611Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 8 23:16:22.592656 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 8 23:16:22.592980 systemd[1]: Stopped kubelet.service. Feb 8 23:16:22.595251 systemd[1]: Started kubelet.service. 
Feb 8 23:16:22.645007 kubelet[1933]: E0208 23:16:22.644934 1933 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 8 23:16:22.646707 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 8 23:16:22.646866 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 8 23:16:24.845704 systemd[1]: Stopped kubelet.service. Feb 8 23:16:24.860253 systemd[1]: Reloading. Feb 8 23:16:24.936447 /usr/lib/systemd/system-generators/torcx-generator[2015]: time="2024-02-08T23:16:24Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:16:24.936907 /usr/lib/systemd/system-generators/torcx-generator[2015]: time="2024-02-08T23:16:24Z" level=info msg="torcx already run" Feb 8 23:16:25.035832 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:16:25.035855 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:16:25.052019 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:16:25.146732 systemd[1]: Started kubelet.service. Feb 8 23:16:25.198591 kubelet[2077]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:16:25.198591 kubelet[2077]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 8 23:16:25.198591 kubelet[2077]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
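All three deprecation warnings below point at the kubelet config file. A short map of where each flag goes in the v1.27 KubeletConfiguration; treat the field names as a sketch of the v1beta1 config API:

    # KubeletConfiguration equivalents (v1beta1, kubelet v1.27) for the
    # deprecated flags warned about in this log.
    DEPRECATED = {
        "--container-runtime-endpoint": "containerRuntimeEndpoint",  # new in 1.27
        "--volume-plugin-dir":          "volumePluginDir",
        # --pod-infra-container-image has no config-file field: the sandbox
        # image now comes from the CRI (SandboxImage registry.k8s.io/pause:3.6
        # in the containerd config dumped earlier in this log).
    }
    for flag, field in DEPRECATED.items():
        print(flag, "->", field)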
Feb 8 23:16:25.199158 kubelet[2077]: I0208 23:16:25.198631 2077 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:16:25.424254 kubelet[2077]: I0208 23:16:25.423773 2077 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 8 23:16:25.424254 kubelet[2077]: I0208 23:16:25.423800 2077 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:16:25.424254 kubelet[2077]: I0208 23:16:25.424069 2077 server.go:837] "Client rotation is on, will bootstrap in background" Feb 8 23:16:25.428439 kubelet[2077]: E0208 23:16:25.428413 2077 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.40:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:25.428652 kubelet[2077]: I0208 23:16:25.428637 2077 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:16:25.430894 kubelet[2077]: I0208 23:16:25.430871 2077 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 8 23:16:25.431144 kubelet[2077]: I0208 23:16:25.431123 2077 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:16:25.431218 kubelet[2077]: I0208 23:16:25.431209 2077 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:16:25.431346 kubelet[2077]: I0208 23:16:25.431237 2077 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:16:25.431346 kubelet[2077]: I0208 23:16:25.431253 2077 container_manager_linux.go:302] "Creating device plugin manager" Feb 8 23:16:25.431429 kubelet[2077]: I0208 23:16:25.431369 2077 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:16:25.434194 kubelet[2077]: I0208 23:16:25.434174 2077 kubelet.go:405] "Attempting to sync node with API server" Feb 8 23:16:25.434194 kubelet[2077]: I0208 23:16:25.434196 2077 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 
23:16:25.434345 kubelet[2077]: I0208 23:16:25.434217 2077 kubelet.go:309] "Adding apiserver pod source" Feb 8 23:16:25.434345 kubelet[2077]: I0208 23:16:25.434230 2077 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:16:25.434977 kubelet[2077]: W0208 23:16:25.434920 2077 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:25.435062 kubelet[2077]: E0208 23:16:25.434990 2077 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:25.435113 kubelet[2077]: I0208 23:16:25.435080 2077 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:16:25.435364 kubelet[2077]: W0208 23:16:25.435344 2077 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 8 23:16:25.435841 kubelet[2077]: I0208 23:16:25.435819 2077 server.go:1168] "Started kubelet" Feb 8 23:16:25.442319 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 8 23:16:25.444077 kubelet[2077]: I0208 23:16:25.444054 2077 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:16:25.444646 kubelet[2077]: E0208 23:16:25.444538 2077 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-56a09d6613.17b2065864425330", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-56a09d6613", UID:"ci-3510.3.2-a-56a09d6613", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-56a09d6613"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 16, 25, 435796272, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 16, 25, 435796272, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.40:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.40:6443: connect: connection refused'(may retry after sleeping) Feb 8 23:16:25.445075 kubelet[2077]: W0208 23:16:25.445039 2077 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-56a09d6613&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:25.445219 kubelet[2077]: E0208 23:16:25.445207 2077 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list 
*v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-56a09d6613&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:25.446324 kubelet[2077]: E0208 23:16:25.446303 2077 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:16:25.446640 kubelet[2077]: E0208 23:16:25.446609 2077 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:16:25.446908 kubelet[2077]: I0208 23:16:25.446890 2077 server.go:461] "Adding debug handlers to kubelet server" Feb 8 23:16:25.447557 kubelet[2077]: I0208 23:16:25.446541 2077 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 8 23:16:25.447766 kubelet[2077]: I0208 23:16:25.446578 2077 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:16:25.450180 kubelet[2077]: I0208 23:16:25.450162 2077 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 8 23:16:25.450386 kubelet[2077]: I0208 23:16:25.450370 2077 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 8 23:16:25.450820 kubelet[2077]: W0208 23:16:25.450782 2077 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:25.450931 kubelet[2077]: E0208 23:16:25.450917 2077 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:25.451822 kubelet[2077]: E0208 23:16:25.451807 2077 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-56a09d6613?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="200ms" Feb 8 23:16:25.465605 kubelet[2077]: I0208 23:16:25.465576 2077 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:16:25.468457 kubelet[2077]: I0208 23:16:25.468436 2077 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:16:25.468562 kubelet[2077]: I0208 23:16:25.468464 2077 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 8 23:16:25.468562 kubelet[2077]: I0208 23:16:25.468483 2077 kubelet.go:2257] "Starting kubelet main sync loop" Feb 8 23:16:25.468562 kubelet[2077]: E0208 23:16:25.468532 2077 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:16:25.476376 kubelet[2077]: W0208 23:16:25.476333 2077 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:25.476480 kubelet[2077]: E0208 23:16:25.476381 2077 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:25.510331 kubelet[2077]: I0208 23:16:25.510299 2077 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:16:25.510331 kubelet[2077]: I0208 23:16:25.510322 2077 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:16:25.510550 kubelet[2077]: I0208 23:16:25.510346 2077 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:16:25.515249 kubelet[2077]: I0208 23:16:25.515222 2077 policy_none.go:49] "None policy: Start" Feb 8 23:16:25.515820 kubelet[2077]: I0208 23:16:25.515800 2077 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:16:25.515919 kubelet[2077]: I0208 23:16:25.515838 2077 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:16:25.523876 systemd[1]: Created slice kubepods.slice. Feb 8 23:16:25.527984 systemd[1]: Created slice kubepods-burstable.slice. Feb 8 23:16:25.530973 systemd[1]: Created slice kubepods-besteffort.slice. Feb 8 23:16:25.536585 kubelet[2077]: I0208 23:16:25.536557 2077 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:16:25.536931 kubelet[2077]: I0208 23:16:25.536780 2077 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:16:25.539170 kubelet[2077]: E0208 23:16:25.538998 2077 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-56a09d6613\" not found" Feb 8 23:16:25.550803 kubelet[2077]: I0208 23:16:25.550785 2077 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:25.551144 kubelet[2077]: E0208 23:16:25.551126 2077 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:25.569539 kubelet[2077]: I0208 23:16:25.569515 2077 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:16:25.570935 kubelet[2077]: I0208 23:16:25.570917 2077 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:16:25.572123 kubelet[2077]: I0208 23:16:25.572102 2077 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:16:25.577763 systemd[1]: Created slice kubepods-burstable-pod52537d8c331c63a6ce563971b9d76b11.slice. 
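The retry loop above is the classic control-plane bootstrap chicken-and-egg: client certificate rotation is on, so the kubelet keeps POSTing a CertificateSigningRequest to https://10.200.8.40:6443, but the API server it is addressing is one of the static pods it has not finished starting yet, hence every request dies with "connection refused". A minimal client-go sketch of that request, under assumptions: the bootstrap kubeconfig path is the usual kubeadm location, and the PEM body is a placeholder (a real kubelet generates a fresh key and a CSR for its system:node:<name> identity):

    package main

    import (
        "context"
        "fmt"

        certsv1 "k8s.io/api/certificates/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path: the bootstrap kubeconfig the kubelet was started with.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/bootstrap-kubelet.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        csr := &certsv1.CertificateSigningRequest{
            ObjectMeta: metav1.ObjectMeta{GenerateName: "node-csr-"},
            Spec: certsv1.CertificateSigningRequestSpec{
                // Placeholder PEM; a real request carries a CSR for
                // CN=system:node:ci-3510.3.2-a-56a09d6613, O=system:nodes.
                Request:    []byte("-----BEGIN CERTIFICATE REQUEST-----\n...\n-----END CERTIFICATE REQUEST-----\n"),
                SignerName: certsv1.KubeAPIServerClientKubeletSignerName,
                Usages:     []certsv1.KeyUsage{certsv1.UsageDigitalSignature, certsv1.UsageClientAuth},
            },
        }

        // This is the POST the log shows failing with "connection refused"
        // until the kube-apiserver static pod comes up.
        created, err := cs.CertificatesV1().CertificateSigningRequests().
            Create(context.TODO(), csr, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Println("created CSR:", created.Name)
    }

Once the kube-apiserver sandbox below is running, the same POST succeeds and the signed certificate is written under /var/lib/kubelet/pki/ — the restarted kubelet at 23:16:32 logs loading exactly that kubelet-client-current.pem pair.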
Feb 8 23:16:25.588093 systemd[1]: Created slice kubepods-burstable-pod6ea884b496b0fce0f9bc0e39fa218239.slice. Feb 8 23:16:25.592064 systemd[1]: Created slice kubepods-burstable-pod105d3a3a2802751d7a8fd1e7937f1062.slice. Feb 8 23:16:25.652963 kubelet[2077]: E0208 23:16:25.652912 2077 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-56a09d6613?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="400ms" Feb 8 23:16:25.754258 kubelet[2077]: I0208 23:16:25.752292 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ea884b496b0fce0f9bc0e39fa218239-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-56a09d6613\" (UID: \"6ea884b496b0fce0f9bc0e39fa218239\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:25.754258 kubelet[2077]: I0208 23:16:25.752350 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ea884b496b0fce0f9bc0e39fa218239-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-56a09d6613\" (UID: \"6ea884b496b0fce0f9bc0e39fa218239\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:25.754258 kubelet[2077]: I0208 23:16:25.752385 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52537d8c331c63a6ce563971b9d76b11-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-56a09d6613\" (UID: \"52537d8c331c63a6ce563971b9d76b11\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:25.754258 kubelet[2077]: I0208 23:16:25.752423 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/52537d8c331c63a6ce563971b9d76b11-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-56a09d6613\" (UID: \"52537d8c331c63a6ce563971b9d76b11\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:25.754258 kubelet[2077]: I0208 23:16:25.752460 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/52537d8c331c63a6ce563971b9d76b11-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-56a09d6613\" (UID: \"52537d8c331c63a6ce563971b9d76b11\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:25.754573 kubelet[2077]: I0208 23:16:25.752495 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/105d3a3a2802751d7a8fd1e7937f1062-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-56a09d6613\" (UID: \"105d3a3a2802751d7a8fd1e7937f1062\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:25.754573 kubelet[2077]: I0208 23:16:25.752534 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ea884b496b0fce0f9bc0e39fa218239-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-56a09d6613\" (UID: \"6ea884b496b0fce0f9bc0e39fa218239\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-56a09d6613" 
Feb 8 23:16:25.754573 kubelet[2077]: I0208 23:16:25.752571 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52537d8c331c63a6ce563971b9d76b11-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-56a09d6613\" (UID: \"52537d8c331c63a6ce563971b9d76b11\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:25.754573 kubelet[2077]: I0208 23:16:25.752608 2077 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52537d8c331c63a6ce563971b9d76b11-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-56a09d6613\" (UID: \"52537d8c331c63a6ce563971b9d76b11\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:25.755256 kubelet[2077]: I0208 23:16:25.755234 2077 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:25.755677 kubelet[2077]: E0208 23:16:25.755654 2077 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:25.887278 env[1331]: time="2024-02-08T23:16:25.887221539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-56a09d6613,Uid:52537d8c331c63a6ce563971b9d76b11,Namespace:kube-system,Attempt:0,}" Feb 8 23:16:25.895315 env[1331]: time="2024-02-08T23:16:25.895268822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-56a09d6613,Uid:105d3a3a2802751d7a8fd1e7937f1062,Namespace:kube-system,Attempt:0,}" Feb 8 23:16:25.895554 env[1331]: time="2024-02-08T23:16:25.895269022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-56a09d6613,Uid:6ea884b496b0fce0f9bc0e39fa218239,Namespace:kube-system,Attempt:0,}" Feb 8 23:16:26.054395 kubelet[2077]: E0208 23:16:26.054266 2077 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-56a09d6613?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="800ms" Feb 8 23:16:26.157543 kubelet[2077]: I0208 23:16:26.157500 2077 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:26.157952 kubelet[2077]: E0208 23:16:26.157909 2077 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:26.320477 kubelet[2077]: W0208 23:16:26.320108 2077 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:26.320477 kubelet[2077]: E0208 23:16:26.320178 2077 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.40:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:26.330591 kubelet[2077]: W0208 23:16:26.330544 2077 reflector.go:533] 
vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:26.330591 kubelet[2077]: E0208 23:16:26.330595 2077 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.40:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:26.394823 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1579976058.mount: Deactivated successfully. Feb 8 23:16:26.429082 env[1331]: time="2024-02-08T23:16:26.429026129Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:26.433274 env[1331]: time="2024-02-08T23:16:26.433227771Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:26.451695 env[1331]: time="2024-02-08T23:16:26.451654557Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:26.454938 env[1331]: time="2024-02-08T23:16:26.454900990Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:26.458039 env[1331]: time="2024-02-08T23:16:26.458004621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:26.460456 env[1331]: time="2024-02-08T23:16:26.460420645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:26.472634 env[1331]: time="2024-02-08T23:16:26.472594968Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:26.478348 env[1331]: time="2024-02-08T23:16:26.478311926Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:26.481843 env[1331]: time="2024-02-08T23:16:26.481809961Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:26.487548 env[1331]: time="2024-02-08T23:16:26.487515618Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:26.495279 env[1331]: time="2024-02-08T23:16:26.495248196Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:26.507832 env[1331]: time="2024-02-08T23:16:26.507795923Z" 
level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:26.554328 env[1331]: time="2024-02-08T23:16:26.550244851Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:16:26.554328 env[1331]: time="2024-02-08T23:16:26.550306551Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:16:26.554328 env[1331]: time="2024-02-08T23:16:26.550326751Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:16:26.554328 env[1331]: time="2024-02-08T23:16:26.550488953Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/276643fc95342ad8d73a2c1b42a564d5bcf6a0575af86ae0beba7481115685c8 pid=2116 runtime=io.containerd.runc.v2 Feb 8 23:16:26.584122 systemd[1]: Started cri-containerd-276643fc95342ad8d73a2c1b42a564d5bcf6a0575af86ae0beba7481115685c8.scope. Feb 8 23:16:26.591306 env[1331]: time="2024-02-08T23:16:26.589547547Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:16:26.591306 env[1331]: time="2024-02-08T23:16:26.589647048Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:16:26.591306 env[1331]: time="2024-02-08T23:16:26.589678748Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:16:26.591306 env[1331]: time="2024-02-08T23:16:26.589850750Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/84dc087f40384e5d9c8adddf09bd3a4411d46f56b532aab7e3e640b8958d18c6 pid=2143 runtime=io.containerd.runc.v2 Feb 8 23:16:26.598636 kubelet[2077]: W0208 23:16:26.598077 2077 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:26.598636 kubelet[2077]: E0208 23:16:26.598132 2077 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.40:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:26.618981 env[1331]: time="2024-02-08T23:16:26.618765041Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:16:26.618981 env[1331]: time="2024-02-08T23:16:26.618807941Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:16:26.618981 env[1331]: time="2024-02-08T23:16:26.618822542Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:16:26.619232 env[1331]: time="2024-02-08T23:16:26.619110245Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed1a45011cb63c02683bc8749f1843b12ec34c3463c45d0e4780a7b8d15bedfe pid=2173 runtime=io.containerd.runc.v2 Feb 8 23:16:26.623741 systemd[1]: Started cri-containerd-84dc087f40384e5d9c8adddf09bd3a4411d46f56b532aab7e3e640b8958d18c6.scope. Feb 8 23:16:26.645827 systemd[1]: Started cri-containerd-ed1a45011cb63c02683bc8749f1843b12ec34c3463c45d0e4780a7b8d15bedfe.scope. Feb 8 23:16:26.680174 env[1331]: time="2024-02-08T23:16:26.680121759Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-56a09d6613,Uid:52537d8c331c63a6ce563971b9d76b11,Namespace:kube-system,Attempt:0,} returns sandbox id \"276643fc95342ad8d73a2c1b42a564d5bcf6a0575af86ae0beba7481115685c8\"" Feb 8 23:16:26.686290 env[1331]: time="2024-02-08T23:16:26.686243921Z" level=info msg="CreateContainer within sandbox \"276643fc95342ad8d73a2c1b42a564d5bcf6a0575af86ae0beba7481115685c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 8 23:16:26.720103 env[1331]: time="2024-02-08T23:16:26.720038162Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-56a09d6613,Uid:105d3a3a2802751d7a8fd1e7937f1062,Namespace:kube-system,Attempt:0,} returns sandbox id \"84dc087f40384e5d9c8adddf09bd3a4411d46f56b532aab7e3e640b8958d18c6\"" Feb 8 23:16:26.723184 env[1331]: time="2024-02-08T23:16:26.723148093Z" level=info msg="CreateContainer within sandbox \"84dc087f40384e5d9c8adddf09bd3a4411d46f56b532aab7e3e640b8958d18c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 8 23:16:26.729602 env[1331]: time="2024-02-08T23:16:26.729553557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-56a09d6613,Uid:6ea884b496b0fce0f9bc0e39fa218239,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed1a45011cb63c02683bc8749f1843b12ec34c3463c45d0e4780a7b8d15bedfe\"" Feb 8 23:16:26.730432 env[1331]: time="2024-02-08T23:16:26.730408066Z" level=info msg="CreateContainer within sandbox \"276643fc95342ad8d73a2c1b42a564d5bcf6a0575af86ae0beba7481115685c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"df915fe4393f6d72c61ff0bb7f56b5982c64f2eb7ae7489da69f197bbf242315\"" Feb 8 23:16:26.731153 env[1331]: time="2024-02-08T23:16:26.731133873Z" level=info msg="StartContainer for \"df915fe4393f6d72c61ff0bb7f56b5982c64f2eb7ae7489da69f197bbf242315\"" Feb 8 23:16:26.734490 env[1331]: time="2024-02-08T23:16:26.734462107Z" level=info msg="CreateContainer within sandbox \"ed1a45011cb63c02683bc8749f1843b12ec34c3463c45d0e4780a7b8d15bedfe\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 8 23:16:26.749487 systemd[1]: Started cri-containerd-df915fe4393f6d72c61ff0bb7f56b5982c64f2eb7ae7489da69f197bbf242315.scope. 
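Each RunPodSandbox above produces a "pause" sandbox (registry.k8s.io/pause:3.6, per the image events) plus a runc.v2 shim whose pid and task path containerd logs before systemd starts the matching cri-containerd-*.scope. A sketch of reading those sandboxes back over the same CRI v1 endpoint the kubelet drives — roughly what crictl does; the socket path and the insecure local connection are assumptions for illustration:

    package main

    import (
        "context"
        "fmt"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Assumed socket: containerd's default CRI endpoint.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            panic(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // RunPodSandbox created the sandboxes above; ListPodSandbox reads
        // them back, ids matching the cri-containerd-*.scope unit names.
        resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            panic(err)
        }
        for _, sb := range resp.Items {
            fmt.Printf("%s  %s/%s  %v\n",
                sb.Id[:13], sb.Metadata.Namespace, sb.Metadata.Name, sb.State)
        }
    }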
Feb 8 23:16:26.792095 env[1331]: time="2024-02-08T23:16:26.792029187Z" level=info msg="CreateContainer within sandbox \"84dc087f40384e5d9c8adddf09bd3a4411d46f56b532aab7e3e640b8958d18c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"5de0e30fbef09a2d50d81750124373a34c11e7d2aaad605914896814ae0e6143\"" Feb 8 23:16:26.792664 env[1331]: time="2024-02-08T23:16:26.792627093Z" level=info msg="StartContainer for \"5de0e30fbef09a2d50d81750124373a34c11e7d2aaad605914896814ae0e6143\"" Feb 8 23:16:26.804916 env[1331]: time="2024-02-08T23:16:26.804876216Z" level=info msg="StartContainer for \"df915fe4393f6d72c61ff0bb7f56b5982c64f2eb7ae7489da69f197bbf242315\" returns successfully" Feb 8 23:16:26.809534 env[1331]: time="2024-02-08T23:16:26.809496663Z" level=info msg="CreateContainer within sandbox \"ed1a45011cb63c02683bc8749f1843b12ec34c3463c45d0e4780a7b8d15bedfe\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"c8648a321504273d183c273eaa7201d89aa547551494868898fc278ccc4c5bd8\"" Feb 8 23:16:26.810272 env[1331]: time="2024-02-08T23:16:26.810236570Z" level=info msg="StartContainer for \"c8648a321504273d183c273eaa7201d89aa547551494868898fc278ccc4c5bd8\"" Feb 8 23:16:26.823349 systemd[1]: Started cri-containerd-5de0e30fbef09a2d50d81750124373a34c11e7d2aaad605914896814ae0e6143.scope. Feb 8 23:16:26.841949 systemd[1]: Started cri-containerd-c8648a321504273d183c273eaa7201d89aa547551494868898fc278ccc4c5bd8.scope. Feb 8 23:16:26.855159 kubelet[2077]: E0208 23:16:26.854842 2077 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-56a09d6613?timeout=10s\": dial tcp 10.200.8.40:6443: connect: connection refused" interval="1.6s" Feb 8 23:16:26.877603 kubelet[2077]: W0208 23:16:26.877532 2077 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-56a09d6613&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:26.877739 kubelet[2077]: E0208 23:16:26.877621 2077 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.40:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-56a09d6613&limit=500&resourceVersion=0": dial tcp 10.200.8.40:6443: connect: connection refused Feb 8 23:16:26.959760 kubelet[2077]: I0208 23:16:26.959730 2077 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:26.960121 kubelet[2077]: E0208 23:16:26.960101 2077 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.40:6443/api/v1/nodes\": dial tcp 10.200.8.40:6443: connect: connection refused" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:27.397673 env[1331]: time="2024-02-08T23:16:27.397615388Z" level=info msg="StartContainer for \"5de0e30fbef09a2d50d81750124373a34c11e7d2aaad605914896814ae0e6143\" returns successfully" Feb 8 23:16:27.398815 env[1331]: time="2024-02-08T23:16:27.398767399Z" level=info msg="StartContainer for \"c8648a321504273d183c273eaa7201d89aa547551494868898fc278ccc4c5bd8\" returns successfully" Feb 8 23:16:28.562302 kubelet[2077]: I0208 23:16:28.562265 2077 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:28.945748 kubelet[2077]: I0208 23:16:28.945694 2077 
kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:28.973636 kubelet[2077]: E0208 23:16:28.973606 2077 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-56a09d6613\" not found" Feb 8 23:16:29.012819 kubelet[2077]: E0208 23:16:29.012783 2077 controller.go:146] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Feb 8 23:16:29.074298 kubelet[2077]: E0208 23:16:29.074259 2077 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-56a09d6613\" not found" Feb 8 23:16:29.174818 kubelet[2077]: E0208 23:16:29.174769 2077 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-56a09d6613\" not found" Feb 8 23:16:29.275401 kubelet[2077]: E0208 23:16:29.275266 2077 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-56a09d6613\" not found" Feb 8 23:16:29.447215 kubelet[2077]: I0208 23:16:29.447167 2077 apiserver.go:52] "Watching apiserver" Feb 8 23:16:29.451274 kubelet[2077]: I0208 23:16:29.451242 2077 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 8 23:16:29.477469 kubelet[2077]: I0208 23:16:29.477437 2077 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:16:30.603524 kubelet[2077]: W0208 23:16:30.603487 2077 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:16:30.615936 kubelet[2077]: W0208 23:16:30.615910 2077 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:16:31.739595 systemd[1]: Reloading. Feb 8 23:16:31.841809 /usr/lib/systemd/system-generators/torcx-generator[2370]: time="2024-02-08T23:16:31Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:16:31.841850 /usr/lib/systemd/system-generators/torcx-generator[2370]: time="2024-02-08T23:16:31Z" level=info msg="torcx already run" Feb 8 23:16:31.924163 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:16:31.924184 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:16:31.940963 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 8 23:16:32.060327 systemd[1]: Stopping kubelet.service... Feb 8 23:16:32.077722 systemd[1]: kubelet.service: Deactivated successfully. Feb 8 23:16:32.077979 systemd[1]: Stopped kubelet.service. Feb 8 23:16:32.080095 systemd[1]: Started kubelet.service. Feb 8 23:16:32.163100 kubelet[2432]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
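Registration succeeding at 23:16:28 while "Failed to ensure lease exists ... namespaces \"kube-node-lease\" not found" persists is expected ordering: node heartbeats live in Lease objects in the kube-node-lease namespace, which only exists once the control plane is functional enough to create it. A sketch for inspecting the heartbeat lease after things settle, assuming an admin kubeconfig at the usual kubeadm path (node name taken from the log):

    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed path: an admin kubeconfig on the control-plane host.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // The kubelet renews this lease roughly every 10s as its heartbeat.
        lease, err := cs.CoordinationV1().Leases("kube-node-lease").
            Get(context.TODO(), "ci-3510.3.2-a-56a09d6613", metav1.GetOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("holder=%s renewed=%v\n", *lease.Spec.HolderIdentity, lease.Spec.RenewTime)
    }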
Feb 8 23:16:32.163100 kubelet[2432]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 8 23:16:32.163100 kubelet[2432]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 8 23:16:32.163100 kubelet[2432]: I0208 23:16:32.160543 2432 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 8 23:16:32.166778 kubelet[2432]: I0208 23:16:32.166756 2432 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 8 23:16:32.167006 kubelet[2432]: I0208 23:16:32.166992 2432 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 8 23:16:32.167258 kubelet[2432]: I0208 23:16:32.167245 2432 server.go:837] "Client rotation is on, will bootstrap in background" Feb 8 23:16:32.168791 kubelet[2432]: I0208 23:16:32.168774 2432 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 8 23:16:32.170399 kubelet[2432]: I0208 23:16:32.170380 2432 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 8 23:16:32.175388 kubelet[2432]: I0208 23:16:32.175359 2432 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 8 23:16:32.175713 kubelet[2432]: I0208 23:16:32.175695 2432 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 8 23:16:32.175859 kubelet[2432]: I0208 23:16:32.175798 2432 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 8 23:16:32.175859 kubelet[2432]: I0208 23:16:32.175827 2432 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 8 23:16:32.175859 kubelet[2432]: I0208 23:16:32.175842 2432 container_manager_linux.go:302] "Creating device plugin manager" Feb 8 23:16:32.176109 kubelet[2432]: I0208 23:16:32.175885 2432 state_mem.go:36] "Initialized new in-memory state store" Feb 8 
23:16:32.180844 kubelet[2432]: I0208 23:16:32.180824 2432 kubelet.go:405] "Attempting to sync node with API server" Feb 8 23:16:32.180844 kubelet[2432]: I0208 23:16:32.180847 2432 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 8 23:16:32.183015 kubelet[2432]: I0208 23:16:32.183000 2432 kubelet.go:309] "Adding apiserver pod source" Feb 8 23:16:32.185020 kubelet[2432]: I0208 23:16:32.184996 2432 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 8 23:16:32.188393 kubelet[2432]: I0208 23:16:32.188377 2432 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 8 23:16:32.189027 kubelet[2432]: I0208 23:16:32.189009 2432 server.go:1168] "Started kubelet" Feb 8 23:16:32.202806 kubelet[2432]: I0208 23:16:32.202777 2432 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 8 23:16:32.209498 kubelet[2432]: E0208 23:16:32.209478 2432 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 8 23:16:32.209654 kubelet[2432]: E0208 23:16:32.209644 2432 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 8 23:16:32.212495 kubelet[2432]: I0208 23:16:32.212473 2432 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 8 23:16:32.213088 kubelet[2432]: I0208 23:16:32.213061 2432 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 8 23:16:32.221370 kubelet[2432]: I0208 23:16:32.221350 2432 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 8 23:16:32.222350 kubelet[2432]: I0208 23:16:32.222329 2432 server.go:461] "Adding debug handlers to kubelet server" Feb 8 23:16:32.223736 kubelet[2432]: I0208 23:16:32.223719 2432 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 8 23:16:32.247420 kubelet[2432]: I0208 23:16:32.247396 2432 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 8 23:16:32.255194 kubelet[2432]: I0208 23:16:32.255160 2432 kubelet_network_linux.go:63] "Initialized iptables rules." 
protocol=IPv6 Feb 8 23:16:32.255387 kubelet[2432]: I0208 23:16:32.255373 2432 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 8 23:16:32.255505 kubelet[2432]: I0208 23:16:32.255493 2432 kubelet.go:2257] "Starting kubelet main sync loop" Feb 8 23:16:32.255679 kubelet[2432]: E0208 23:16:32.255667 2432 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 8 23:16:32.285726 kubelet[2432]: I0208 23:16:32.285698 2432 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 8 23:16:32.285884 kubelet[2432]: I0208 23:16:32.285877 2432 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 8 23:16:32.285957 kubelet[2432]: I0208 23:16:32.285934 2432 state_mem.go:36] "Initialized new in-memory state store" Feb 8 23:16:32.286124 kubelet[2432]: I0208 23:16:32.286105 2432 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 8 23:16:32.286124 kubelet[2432]: I0208 23:16:32.286126 2432 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 8 23:16:32.286247 kubelet[2432]: I0208 23:16:32.286134 2432 policy_none.go:49] "None policy: Start" Feb 8 23:16:32.286728 kubelet[2432]: I0208 23:16:32.286704 2432 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:16:32.286728 kubelet[2432]: I0208 23:16:32.286730 2432 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:16:32.286896 kubelet[2432]: I0208 23:16:32.286879 2432 state_mem.go:75] "Updated machine memory state" Feb 8 23:16:32.290271 kubelet[2432]: I0208 23:16:32.290258 2432 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:16:32.290515 kubelet[2432]: I0208 23:16:32.290505 2432 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:16:32.316443 kubelet[2432]: I0208 23:16:32.315595 2432 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:32.327593 kubelet[2432]: I0208 23:16:32.327565 2432 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:32.327731 kubelet[2432]: I0208 23:16:32.327652 2432 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-56a09d6613" Feb 8 23:16:32.356150 kubelet[2432]: I0208 23:16:32.356116 2432 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:16:32.356300 kubelet[2432]: I0208 23:16:32.356241 2432 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:16:32.356300 kubelet[2432]: I0208 23:16:32.356288 2432 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:16:32.364779 kubelet[2432]: W0208 23:16:32.364755 2432 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:16:32.365043 kubelet[2432]: W0208 23:16:32.364906 2432 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:16:32.365132 kubelet[2432]: E0208 23:16:32.365088 2432 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-56a09d6613\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:32.367362 kubelet[2432]: W0208 23:16:32.367340 2432 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising 
behavior; a DNS label is recommended: [must not contain dots] Feb 8 23:16:32.367476 kubelet[2432]: E0208 23:16:32.367411 2432 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-56a09d6613\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:32.414200 kubelet[2432]: I0208 23:16:32.414164 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/52537d8c331c63a6ce563971b9d76b11-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-56a09d6613\" (UID: \"52537d8c331c63a6ce563971b9d76b11\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:32.414200 kubelet[2432]: I0208 23:16:32.414216 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/52537d8c331c63a6ce563971b9d76b11-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-56a09d6613\" (UID: \"52537d8c331c63a6ce563971b9d76b11\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:32.414200 kubelet[2432]: I0208 23:16:32.414252 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6ea884b496b0fce0f9bc0e39fa218239-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-56a09d6613\" (UID: \"6ea884b496b0fce0f9bc0e39fa218239\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:32.414776 kubelet[2432]: I0208 23:16:32.414286 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6ea884b496b0fce0f9bc0e39fa218239-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-56a09d6613\" (UID: \"6ea884b496b0fce0f9bc0e39fa218239\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:32.414776 kubelet[2432]: I0208 23:16:32.414318 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6ea884b496b0fce0f9bc0e39fa218239-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-56a09d6613\" (UID: \"6ea884b496b0fce0f9bc0e39fa218239\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:32.414776 kubelet[2432]: I0208 23:16:32.414350 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/52537d8c331c63a6ce563971b9d76b11-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-56a09d6613\" (UID: \"52537d8c331c63a6ce563971b9d76b11\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:32.414776 kubelet[2432]: I0208 23:16:32.414383 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/52537d8c331c63a6ce563971b9d76b11-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-56a09d6613\" (UID: \"52537d8c331c63a6ce563971b9d76b11\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:32.414776 kubelet[2432]: I0208 23:16:32.414418 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/52537d8c331c63a6ce563971b9d76b11-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-56a09d6613\" (UID: \"52537d8c331c63a6ce563971b9d76b11\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:33.458805 kubelet[2432]: I0208 23:16:32.414449 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/105d3a3a2802751d7a8fd1e7937f1062-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-56a09d6613\" (UID: \"105d3a3a2802751d7a8fd1e7937f1062\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-56a09d6613" Feb 8 23:16:33.458805 kubelet[2432]: I0208 23:16:33.186262 2432 apiserver.go:52] "Watching apiserver" Feb 8 23:16:33.458805 kubelet[2432]: I0208 23:16:33.214425 2432 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 8 23:16:33.458805 kubelet[2432]: I0208 23:16:33.218104 2432 reconciler.go:41] "Reconciler: start to sync state" Feb 8 23:16:33.458805 kubelet[2432]: I0208 23:16:33.242115 2432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-56a09d6613" podStartSLOduration=3.24205451 podCreationTimestamp="2024-02-08 23:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:16:33.241642606 +0000 UTC m=+1.156119665" watchObservedRunningTime="2024-02-08 23:16:33.24205451 +0000 UTC m=+1.156531669" Feb 8 23:16:33.458805 kubelet[2432]: I0208 23:16:33.249274 2432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-56a09d6613" podStartSLOduration=1.24923727 podCreationTimestamp="2024-02-08 23:16:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:16:33.24921367 +0000 UTC m=+1.163690729" watchObservedRunningTime="2024-02-08 23:16:33.24923727 +0000 UTC m=+1.163714329" Feb 8 23:16:33.459135 kubelet[2432]: I0208 23:16:33.267061 2432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-56a09d6613" podStartSLOduration=3.267021821 podCreationTimestamp="2024-02-08 23:16:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:16:33.256484132 +0000 UTC m=+1.170961191" watchObservedRunningTime="2024-02-08 23:16:33.267021821 +0000 UTC m=+1.181498980" Feb 8 23:16:33.466588 sudo[2462]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 8 23:16:33.467379 sudo[2462]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 8 23:16:33.984443 sudo[2462]: pam_unix(sudo:session): session closed for user root Feb 8 23:16:35.192486 sudo[1667]: pam_unix(sudo:session): session closed for user root Feb 8 23:16:35.292868 sshd[1664]: pam_unix(sshd:session): session closed for user core Feb 8 23:16:35.296499 systemd[1]: sshd@4-10.200.8.40:22-10.200.12.6:54894.service: Deactivated successfully. Feb 8 23:16:35.297626 systemd[1]: session-7.scope: Deactivated successfully. Feb 8 23:16:35.297871 systemd[1]: session-7.scope: Consumed 3.879s CPU time. Feb 8 23:16:35.298687 systemd-logind[1316]: Session 7 logged out. Waiting for processes to exit. 
Feb 8 23:16:35.299713 systemd-logind[1316]: Removed session 7. Feb 8 23:16:47.356822 kubelet[2432]: I0208 23:16:47.356789 2432 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 8 23:16:47.357410 env[1331]: time="2024-02-08T23:16:47.357234943Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 8 23:16:47.357726 kubelet[2432]: I0208 23:16:47.357470 2432 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 8 23:16:48.135665 kubelet[2432]: I0208 23:16:48.135602 2432 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:16:48.143532 systemd[1]: Created slice kubepods-besteffort-pod44d4f20f_f8e6_419a_974b_61dcb53e29f5.slice. Feb 8 23:16:48.148702 kubelet[2432]: W0208 23:16:48.148678 2432 reflector.go:533] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-56a09d6613" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-56a09d6613' and this object Feb 8 23:16:48.148870 kubelet[2432]: E0208 23:16:48.148856 2432 reflector.go:148] object-"kube-system"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3510.3.2-a-56a09d6613" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-56a09d6613' and this object Feb 8 23:16:48.150118 kubelet[2432]: W0208 23:16:48.150102 2432 reflector.go:533] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-a-56a09d6613" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-56a09d6613' and this object Feb 8 23:16:48.150236 kubelet[2432]: E0208 23:16:48.150225 2432 reflector.go:148] object-"kube-system"/"kube-proxy": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:ci-3510.3.2-a-56a09d6613" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-56a09d6613' and this object Feb 8 23:16:48.155858 kubelet[2432]: I0208 23:16:48.155835 2432 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:16:48.161597 systemd[1]: Created slice kubepods-burstable-podeabe1d2a_bed6_49db_bd12_ea72995180a2.slice. 
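The "forbidden ... no relationship found between node ... and this object" warnings above come from the Node authorizer: a kubelet identity may only read Secrets and ConfigMaps referenced by pods already bound to it, and kube-proxy-6gf7z and cilium-7q57n were admitted only moments earlier, so the first list races the authorizer's pod graph. A sketch probing that decision with a SubjectAccessReview, assuming an admin kubeconfig; user, namespace, and object names are taken from the log:

    package main

    import (
        "context"
        "fmt"

        authzv1 "k8s.io/api/authorization/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf") // assumed path
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }

        // Ask whether this node's identity may read the kube-proxy ConfigMap;
        // the reflector above was denied while the graph lacked the pod edge.
        sar := &authzv1.SubjectAccessReview{
            Spec: authzv1.SubjectAccessReviewSpec{
                User:   "system:node:ci-3510.3.2-a-56a09d6613",
                Groups: []string{"system:nodes"},
                ResourceAttributes: &authzv1.ResourceAttributes{
                    Namespace: "kube-system",
                    Verb:      "get",
                    Resource:  "configmaps",
                    Name:      "kube-proxy",
                },
            },
        }
        resp, err := cs.AuthorizationV1().SubjectAccessReviews().
            Create(context.TODO(), sar, metav1.CreateOptions{})
        if err != nil {
            panic(err)
        }
        fmt.Printf("allowed=%v reason=%q\n", resp.Status.Allowed, resp.Status.Reason)
    }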
Feb 8 23:16:48.209937 kubelet[2432]: I0208 23:16:48.209889 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eabe1d2a-bed6-49db-bd12-ea72995180a2-clustermesh-secrets\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.209937 kubelet[2432]: I0208 23:16:48.209953 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-host-proc-sys-net\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210190 kubelet[2432]: I0208 23:16:48.209987 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/44d4f20f-f8e6-419a-974b-61dcb53e29f5-lib-modules\") pod \"kube-proxy-6gf7z\" (UID: \"44d4f20f-f8e6-419a-974b-61dcb53e29f5\") " pod="kube-system/kube-proxy-6gf7z" Feb 8 23:16:48.210190 kubelet[2432]: I0208 23:16:48.210010 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-cilium-cgroup\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210190 kubelet[2432]: I0208 23:16:48.210034 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-xtables-lock\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210190 kubelet[2432]: I0208 23:16:48.210056 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-etc-cni-netd\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210190 kubelet[2432]: I0208 23:16:48.210083 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jgbfb\" (UniqueName: \"kubernetes.io/projected/eabe1d2a-bed6-49db-bd12-ea72995180a2-kube-api-access-jgbfb\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210190 kubelet[2432]: I0208 23:16:48.210108 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/44d4f20f-f8e6-419a-974b-61dcb53e29f5-xtables-lock\") pod \"kube-proxy-6gf7z\" (UID: \"44d4f20f-f8e6-419a-974b-61dcb53e29f5\") " pod="kube-system/kube-proxy-6gf7z" Feb 8 23:16:48.210486 kubelet[2432]: I0208 23:16:48.210133 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-lib-modules\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210486 kubelet[2432]: I0208 23:16:48.210158 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eabe1d2a-bed6-49db-bd12-ea72995180a2-hubble-tls\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210486 kubelet[2432]: I0208 23:16:48.210189 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-cilium-run\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210486 kubelet[2432]: I0208 23:16:48.210216 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-hostproc\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210486 kubelet[2432]: I0208 23:16:48.210245 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-cni-path\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210486 kubelet[2432]: I0208 23:16:48.210273 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-bpf-maps\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210717 kubelet[2432]: I0208 23:16:48.210305 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eabe1d2a-bed6-49db-bd12-ea72995180a2-cilium-config-path\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210717 kubelet[2432]: I0208 23:16:48.210335 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-host-proc-sys-kernel\") pod \"cilium-7q57n\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " pod="kube-system/cilium-7q57n" Feb 8 23:16:48.210717 kubelet[2432]: I0208 23:16:48.210364 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/44d4f20f-f8e6-419a-974b-61dcb53e29f5-kube-proxy\") pod \"kube-proxy-6gf7z\" (UID: \"44d4f20f-f8e6-419a-974b-61dcb53e29f5\") " pod="kube-system/kube-proxy-6gf7z" Feb 8 23:16:48.210717 kubelet[2432]: I0208 23:16:48.210396 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tmhlq\" (UniqueName: \"kubernetes.io/projected/44d4f20f-f8e6-419a-974b-61dcb53e29f5-kube-api-access-tmhlq\") pod \"kube-proxy-6gf7z\" (UID: \"44d4f20f-f8e6-419a-974b-61dcb53e29f5\") " pod="kube-system/kube-proxy-6gf7z" Feb 8 23:16:48.235179 kubelet[2432]: I0208 23:16:48.235140 2432 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:16:48.240752 systemd[1]: Created slice kubepods-besteffort-pod5b4d5374_490b_4a7f_a817_cec3120c47cf.slice. 
Feb 8 23:16:48.411599 kubelet[2432]: I0208 23:16:48.411463 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b4d5374-490b-4a7f-a817-cec3120c47cf-cilium-config-path\") pod \"cilium-operator-574c4bb98d-ww58j\" (UID: \"5b4d5374-490b-4a7f-a817-cec3120c47cf\") " pod="kube-system/cilium-operator-574c4bb98d-ww58j" Feb 8 23:16:48.411599 kubelet[2432]: I0208 23:16:48.411532 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvn64\" (UniqueName: \"kubernetes.io/projected/5b4d5374-490b-4a7f-a817-cec3120c47cf-kube-api-access-lvn64\") pod \"cilium-operator-574c4bb98d-ww58j\" (UID: \"5b4d5374-490b-4a7f-a817-cec3120c47cf\") " pod="kube-system/cilium-operator-574c4bb98d-ww58j" Feb 8 23:16:49.313499 kubelet[2432]: E0208 23:16:49.313470 2432 configmap.go:199] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:16:49.313688 kubelet[2432]: E0208 23:16:49.313562 2432 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/44d4f20f-f8e6-419a-974b-61dcb53e29f5-kube-proxy podName:44d4f20f-f8e6-419a-974b-61dcb53e29f5 nodeName:}" failed. No retries permitted until 2024-02-08 23:16:49.813539683 +0000 UTC m=+17.728016742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/44d4f20f-f8e6-419a-974b-61dcb53e29f5-kube-proxy") pod "kube-proxy-6gf7z" (UID: "44d4f20f-f8e6-419a-974b-61dcb53e29f5") : failed to sync configmap cache: timed out waiting for the condition Feb 8 23:16:49.334466 kubelet[2432]: E0208 23:16:49.334427 2432 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:16:49.334466 kubelet[2432]: E0208 23:16:49.334465 2432 projected.go:198] Error preparing data for projected volume kube-api-access-jgbfb for pod kube-system/cilium-7q57n: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:16:49.334679 kubelet[2432]: E0208 23:16:49.334546 2432 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/eabe1d2a-bed6-49db-bd12-ea72995180a2-kube-api-access-jgbfb podName:eabe1d2a-bed6-49db-bd12-ea72995180a2 nodeName:}" failed. No retries permitted until 2024-02-08 23:16:49.834526406 +0000 UTC m=+17.749003565 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-jgbfb" (UniqueName: "kubernetes.io/projected/eabe1d2a-bed6-49db-bd12-ea72995180a2-kube-api-access-jgbfb") pod "cilium-7q57n" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2") : failed to sync configmap cache: timed out waiting for the condition Feb 8 23:16:49.334801 kubelet[2432]: E0208 23:16:49.334426 2432 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:16:49.334896 kubelet[2432]: E0208 23:16:49.334883 2432 projected.go:198] Error preparing data for projected volume kube-api-access-tmhlq for pod kube-system/kube-proxy-6gf7z: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:16:49.335036 kubelet[2432]: E0208 23:16:49.335022 2432 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/44d4f20f-f8e6-419a-974b-61dcb53e29f5-kube-api-access-tmhlq podName:44d4f20f-f8e6-419a-974b-61dcb53e29f5 nodeName:}" failed. No retries permitted until 2024-02-08 23:16:49.835001409 +0000 UTC m=+17.749478468 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-tmhlq" (UniqueName: "kubernetes.io/projected/44d4f20f-f8e6-419a-974b-61dcb53e29f5-kube-api-access-tmhlq") pod "kube-proxy-6gf7z" (UID: "44d4f20f-f8e6-419a-974b-61dcb53e29f5") : failed to sync configmap cache: timed out waiting for the condition Feb 8 23:16:49.520085 kubelet[2432]: E0208 23:16:49.520053 2432 projected.go:292] Couldn't get configMap kube-system/kube-root-ca.crt: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:16:49.520085 kubelet[2432]: E0208 23:16:49.520083 2432 projected.go:198] Error preparing data for projected volume kube-api-access-lvn64 for pod kube-system/cilium-operator-574c4bb98d-ww58j: failed to sync configmap cache: timed out waiting for the condition Feb 8 23:16:49.520561 kubelet[2432]: E0208 23:16:49.520140 2432 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/5b4d5374-490b-4a7f-a817-cec3120c47cf-kube-api-access-lvn64 podName:5b4d5374-490b-4a7f-a817-cec3120c47cf nodeName:}" failed. No retries permitted until 2024-02-08 23:16:50.020123492 +0000 UTC m=+17.934600551 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lvn64" (UniqueName: "kubernetes.io/projected/5b4d5374-490b-4a7f-a817-cec3120c47cf-kube-api-access-lvn64") pod "cilium-operator-574c4bb98d-ww58j" (UID: "5b4d5374-490b-4a7f-a817-cec3120c47cf") : failed to sync configmap cache: timed out waiting for the condition Feb 8 23:16:49.950913 env[1331]: time="2024-02-08T23:16:49.950860212Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6gf7z,Uid:44d4f20f-f8e6-419a-974b-61dcb53e29f5,Namespace:kube-system,Attempt:0,}" Feb 8 23:16:49.966562 env[1331]: time="2024-02-08T23:16:49.966517104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7q57n,Uid:eabe1d2a-bed6-49db-bd12-ea72995180a2,Namespace:kube-system,Attempt:0,}" Feb 8 23:16:49.995723 env[1331]: time="2024-02-08T23:16:49.991724151Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:16:49.995723 env[1331]: time="2024-02-08T23:16:49.991768052Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:16:49.995723 env[1331]: time="2024-02-08T23:16:49.991781652Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:16:49.995723 env[1331]: time="2024-02-08T23:16:49.991928553Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/27e3c1e09a4ac13c72a7d3d9a3d67dda630f79802f76ac16e5ecfdba30999da8 pid=2513 runtime=io.containerd.runc.v2 Feb 8 23:16:50.020134 systemd[1]: Started cri-containerd-27e3c1e09a4ac13c72a7d3d9a3d67dda630f79802f76ac16e5ecfdba30999da8.scope. Feb 8 23:16:50.034385 env[1331]: time="2024-02-08T23:16:50.033873694Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:16:50.034385 env[1331]: time="2024-02-08T23:16:50.033950694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:16:50.034385 env[1331]: time="2024-02-08T23:16:50.034020295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:16:50.034385 env[1331]: time="2024-02-08T23:16:50.034287196Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881 pid=2547 runtime=io.containerd.runc.v2 Feb 8 23:16:50.054769 systemd[1]: Started cri-containerd-6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881.scope. Feb 8 23:16:50.058375 env[1331]: time="2024-02-08T23:16:50.058320534Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-6gf7z,Uid:44d4f20f-f8e6-419a-974b-61dcb53e29f5,Namespace:kube-system,Attempt:0,} returns sandbox id \"27e3c1e09a4ac13c72a7d3d9a3d67dda630f79802f76ac16e5ecfdba30999da8\"" Feb 8 23:16:50.064966 env[1331]: time="2024-02-08T23:16:50.064911772Z" level=info msg="CreateContainer within sandbox \"27e3c1e09a4ac13c72a7d3d9a3d67dda630f79802f76ac16e5ecfdba30999da8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 8 23:16:50.084544 env[1331]: time="2024-02-08T23:16:50.084507984Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-7q57n,Uid:eabe1d2a-bed6-49db-bd12-ea72995180a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\"" Feb 8 23:16:50.086384 env[1331]: time="2024-02-08T23:16:50.086328395Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 8 23:16:50.113581 env[1331]: time="2024-02-08T23:16:50.113532750Z" level=info msg="CreateContainer within sandbox \"27e3c1e09a4ac13c72a7d3d9a3d67dda630f79802f76ac16e5ecfdba30999da8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"350eb81065070bea7a1ac7303f9a1aac1b20f5e8ab7643aca0869a66496ccbf9\"" Feb 8 23:16:50.115981 env[1331]: time="2024-02-08T23:16:50.115920664Z" level=info msg="StartContainer for \"350eb81065070bea7a1ac7303f9a1aac1b20f5e8ab7643aca0869a66496ccbf9\"" Feb 8 23:16:50.137071 systemd[1]: Started cri-containerd-350eb81065070bea7a1ac7303f9a1aac1b20f5e8ab7643aca0869a66496ccbf9.scope. 
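Before the sandbox and container starts above could proceed, the MountVolume.SetUp failures at 23:16:49 had to clear; "failed to sync configmap cache: timed out waiting for the condition" is a transient race between the kubelet's informer cache and the freshly created pods, and each failed operation is rescheduled rather than failed permanently, as the durationBeforeRetry 500ms annotations show. A rough model of that pacing follows; only the 500ms initial delay is visible in this log, while the doubling factor and the roughly 2m2s cap are assumptions based on kubelet's usual exponential-backoff defaults:

```python
# Hypothetical model of kubelet's per-operation mount retry pacing.
INITIAL_DELAY_S = 0.5   # matches "durationBeforeRetry 500ms" above
FACTOR = 2.0            # assumed default, not visible in this log
MAX_DELAY_S = 122.0     # assumed cap (~2m2s), not visible in this log

def next_retry_delay(last_delay_s=None):
    """Delay to wait before the next attempt of a failed mount op."""
    if last_delay_s is None:
        return INITIAL_DELAY_S
    return min(last_delay_s * FACTOR, MAX_DELAY_S)
```

Here the first retry at roughly 23:16:49.8 already succeeded, so the backoff never had to grow.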
Feb 8 23:16:50.176403 env[1331]: time="2024-02-08T23:16:50.176355210Z" level=info msg="StartContainer for \"350eb81065070bea7a1ac7303f9a1aac1b20f5e8ab7643aca0869a66496ccbf9\" returns successfully" Feb 8 23:16:50.308073 kubelet[2432]: I0208 23:16:50.307935 2432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-6gf7z" podStartSLOduration=2.307897364 podCreationTimestamp="2024-02-08 23:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:16:50.307537862 +0000 UTC m=+18.222014921" watchObservedRunningTime="2024-02-08 23:16:50.307897364 +0000 UTC m=+18.222374423" Feb 8 23:16:50.347019 env[1331]: time="2024-02-08T23:16:50.346629286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-ww58j,Uid:5b4d5374-490b-4a7f-a817-cec3120c47cf,Namespace:kube-system,Attempt:0,}" Feb 8 23:16:50.388570 env[1331]: time="2024-02-08T23:16:50.387536320Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:16:50.388570 env[1331]: time="2024-02-08T23:16:50.387581020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:16:50.388570 env[1331]: time="2024-02-08T23:16:50.387595420Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:16:50.388570 env[1331]: time="2024-02-08T23:16:50.387842722Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19 pid=2700 runtime=io.containerd.runc.v2 Feb 8 23:16:50.403521 systemd[1]: Started cri-containerd-f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19.scope. Feb 8 23:16:50.448669 env[1331]: time="2024-02-08T23:16:50.448620370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-ww58j,Uid:5b4d5374-490b-4a7f-a817-cec3120c47cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19\"" Feb 8 23:16:55.593723 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1138821972.mount: Deactivated successfully. 
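Two details in the pod_startup_latency_tracker entry above are worth decoding: the zero-value firstStartedPulling/lastFinishedPulling timestamps (Go's zero time, 0001-01-01) mean no image pull was needed for kube-proxy, and podStartSLOduration=2.307897364 is exactly watchObservedRunningTime minus podCreationTimestamp. A quick check in Python (datetime carries microseconds, so the trailing nanoseconds are truncated):

```python
from datetime import datetime, timezone

created = datetime(2024, 2, 8, 23, 16, 48, tzinfo=timezone.utc)
observed = datetime(2024, 2, 8, 23, 16, 50, 307897, tzinfo=timezone.utc)

print((observed - created).total_seconds())  # 2.307897, matching the log
```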
Feb 8 23:16:58.316466 env[1331]: time="2024-02-08T23:16:58.316406644Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:58.323493 env[1331]: time="2024-02-08T23:16:58.323452078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:58.328100 env[1331]: time="2024-02-08T23:16:58.328063501Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:16:58.328720 env[1331]: time="2024-02-08T23:16:58.328688204Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 8 23:16:58.329789 env[1331]: time="2024-02-08T23:16:58.329756609Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 8 23:16:58.331870 env[1331]: time="2024-02-08T23:16:58.331838619Z" level=info msg="CreateContainer within sandbox \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:16:58.361142 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount245084995.mount: Deactivated successfully. Feb 8 23:16:58.369878 env[1331]: time="2024-02-08T23:16:58.369834604Z" level=info msg="CreateContainer within sandbox \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a\"" Feb 8 23:16:58.370402 env[1331]: time="2024-02-08T23:16:58.370308907Z" level=info msg="StartContainer for \"845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a\"" Feb 8 23:16:58.395958 systemd[1]: Started cri-containerd-845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a.scope. Feb 8 23:16:58.425368 env[1331]: time="2024-02-08T23:16:58.425316875Z" level=info msg="StartContainer for \"845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a\" returns successfully" Feb 8 23:16:58.431116 systemd[1]: cri-containerd-845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a.scope: Deactivated successfully. Feb 8 23:16:59.356444 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a-rootfs.mount: Deactivated successfully. 
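The pull above uses a combined tag-and-digest reference: the sha256:06ce… digest pins exactly which cilium image is fetched, the v1.12.5 tag is informational, and the reference containerd reports back (sha256:3e35…) identifies the image locally rather than repeating the manifest digest. A naive parser sketch for such references; it assumes a tag is present and that the registry host carries no port:

```python
def split_ref(ref):
    """Split name[:tag][@digest]; naive about ports and missing tags."""
    name_tag, _, digest = ref.partition("@")
    name, _, tag = name_tag.rpartition(":")
    return name, tag or None, digest or None

print(split_ref(
    "quay.io/cilium/cilium:v1.12.5"
    "@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5"
))
# ('quay.io/cilium/cilium', 'v1.12.5', 'sha256:06ce2b0a...')
```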
Feb 8 23:17:02.136720 env[1331]: time="2024-02-08T23:17:02.136628478Z" level=info msg="shim disconnected" id=845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a Feb 8 23:17:02.136720 env[1331]: time="2024-02-08T23:17:02.136719279Z" level=warning msg="cleaning up after shim disconnected" id=845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a namespace=k8s.io Feb 8 23:17:02.137342 env[1331]: time="2024-02-08T23:17:02.136733979Z" level=info msg="cleaning up dead shim" Feb 8 23:17:02.146564 env[1331]: time="2024-02-08T23:17:02.146517823Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:17:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2834 runtime=io.containerd.runc.v2\n" Feb 8 23:17:02.340987 env[1331]: time="2024-02-08T23:17:02.340926103Z" level=info msg="CreateContainer within sandbox \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:17:02.378441 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4107163667.mount: Deactivated successfully. Feb 8 23:17:02.392841 env[1331]: time="2024-02-08T23:17:02.392745337Z" level=info msg="CreateContainer within sandbox \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723\"" Feb 8 23:17:02.393575 env[1331]: time="2024-02-08T23:17:02.393545041Z" level=info msg="StartContainer for \"622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723\"" Feb 8 23:17:02.412969 systemd[1]: Started cri-containerd-622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723.scope. Feb 8 23:17:02.457329 env[1331]: time="2024-02-08T23:17:02.457283829Z" level=info msg="StartContainer for \"622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723\" returns successfully" Feb 8 23:17:02.462866 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 8 23:17:02.463170 systemd[1]: Stopped systemd-sysctl.service. Feb 8 23:17:02.464686 systemd[1]: Stopping systemd-sysctl.service... Feb 8 23:17:02.466824 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:17:02.470679 systemd[1]: cri-containerd-622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723.scope: Deactivated successfully. Feb 8 23:17:02.481541 systemd[1]: Finished systemd-sysctl.service. 
Feb 8 23:17:02.517377 env[1331]: time="2024-02-08T23:17:02.517331000Z" level=info msg="shim disconnected" id=622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723 Feb 8 23:17:02.517377 env[1331]: time="2024-02-08T23:17:02.517375401Z" level=warning msg="cleaning up after shim disconnected" id=622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723 namespace=k8s.io Feb 8 23:17:02.517622 env[1331]: time="2024-02-08T23:17:02.517387201Z" level=info msg="cleaning up dead shim" Feb 8 23:17:02.526902 env[1331]: time="2024-02-08T23:17:02.526861544Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:17:02Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2900 runtime=io.containerd.runc.v2\n" Feb 8 23:17:03.357207 env[1331]: time="2024-02-08T23:17:03.357159670Z" level=info msg="CreateContainer within sandbox \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:17:03.374798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723-rootfs.mount: Deactivated successfully. Feb 8 23:17:03.387606 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3977671123.mount: Deactivated successfully. Feb 8 23:17:03.395285 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount792268526.mount: Deactivated successfully. Feb 8 23:17:03.415290 env[1331]: time="2024-02-08T23:17:03.415239028Z" level=info msg="CreateContainer within sandbox \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8\"" Feb 8 23:17:03.416185 env[1331]: time="2024-02-08T23:17:03.416156732Z" level=info msg="StartContainer for \"28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8\"" Feb 8 23:17:03.454926 systemd[1]: Started cri-containerd-28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8.scope. Feb 8 23:17:03.501542 systemd[1]: cri-containerd-28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8.scope: Deactivated successfully. 
Feb 8 23:17:03.508303 env[1331]: time="2024-02-08T23:17:03.508259841Z" level=info msg="StartContainer for \"28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8\" returns successfully" Feb 8 23:17:03.511749 env[1331]: time="2024-02-08T23:17:03.505018927Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeabe1d2a_bed6_49db_bd12_ea72995180a2.slice/cri-containerd-28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8.scope/memory.events\": no such file or directory" Feb 8 23:17:03.886423 env[1331]: time="2024-02-08T23:17:03.886370121Z" level=info msg="shim disconnected" id=28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8 Feb 8 23:17:03.887060 env[1331]: time="2024-02-08T23:17:03.887034024Z" level=warning msg="cleaning up after shim disconnected" id=28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8 namespace=k8s.io Feb 8 23:17:03.887185 env[1331]: time="2024-02-08T23:17:03.887168724Z" level=info msg="cleaning up dead shim" Feb 8 23:17:03.907786 env[1331]: time="2024-02-08T23:17:03.907752516Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:17:03Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2961 runtime=io.containerd.runc.v2\n" Feb 8 23:17:04.101621 env[1331]: time="2024-02-08T23:17:04.101572069Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:17:04.107995 env[1331]: time="2024-02-08T23:17:04.107953497Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:17:04.110884 env[1331]: time="2024-02-08T23:17:04.110851509Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 8 23:17:04.111319 env[1331]: time="2024-02-08T23:17:04.111286211Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 8 23:17:04.114768 env[1331]: time="2024-02-08T23:17:04.114739126Z" level=info msg="CreateContainer within sandbox \"f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 8 23:17:04.152742 env[1331]: time="2024-02-08T23:17:04.152631092Z" level=info msg="CreateContainer within sandbox \"f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\"" Feb 8 23:17:04.153621 env[1331]: time="2024-02-08T23:17:04.153577796Z" level=info msg="StartContainer for \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\"" Feb 8 23:17:04.171288 systemd[1]: Started cri-containerd-16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd.scope. 
Feb 8 23:17:04.210565 env[1331]: time="2024-02-08T23:17:04.210518944Z" level=info msg="StartContainer for \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\" returns successfully" Feb 8 23:17:04.348851 env[1331]: time="2024-02-08T23:17:04.348795347Z" level=info msg="CreateContainer within sandbox \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:17:04.386238 env[1331]: time="2024-02-08T23:17:04.386190511Z" level=info msg="CreateContainer within sandbox \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887\"" Feb 8 23:17:04.386794 env[1331]: time="2024-02-08T23:17:04.386761213Z" level=info msg="StartContainer for \"43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887\"" Feb 8 23:17:04.421019 systemd[1]: Started cri-containerd-43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887.scope. Feb 8 23:17:04.501439 kubelet[2432]: I0208 23:17:04.501395 2432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-ww58j" podStartSLOduration=2.840215282 podCreationTimestamp="2024-02-08 23:16:48 +0000 UTC" firstStartedPulling="2024-02-08 23:16:50.450451481 +0000 UTC m=+18.364928640" lastFinishedPulling="2024-02-08 23:17:04.111584812 +0000 UTC m=+32.026061871" observedRunningTime="2024-02-08 23:17:04.499856507 +0000 UTC m=+32.414333566" watchObservedRunningTime="2024-02-08 23:17:04.501348513 +0000 UTC m=+32.415825672" Feb 8 23:17:04.503380 systemd[1]: cri-containerd-43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887.scope: Deactivated successfully. Feb 8 23:17:04.507547 env[1331]: time="2024-02-08T23:17:04.507501940Z" level=info msg="StartContainer for \"43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887\" returns successfully" Feb 8 23:17:04.698299 env[1331]: time="2024-02-08T23:17:04.698164772Z" level=info msg="shim disconnected" id=43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887 Feb 8 23:17:04.698593 env[1331]: time="2024-02-08T23:17:04.698565974Z" level=warning msg="cleaning up after shim disconnected" id=43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887 namespace=k8s.io Feb 8 23:17:04.698684 env[1331]: time="2024-02-08T23:17:04.698667274Z" level=info msg="cleaning up dead shim" Feb 8 23:17:04.714459 env[1331]: time="2024-02-08T23:17:04.714412443Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:17:04Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3051 runtime=io.containerd.runc.v2\n" Feb 8 23:17:05.355231 env[1331]: time="2024-02-08T23:17:05.355176611Z" level=info msg="CreateContainer within sandbox \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:17:05.375893 systemd[1]: run-containerd-runc-k8s.io-43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887-runc.4STNvy.mount: Deactivated successfully. Feb 8 23:17:05.376029 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887-rootfs.mount: Deactivated successfully. 
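Stepping back, the repeated create/start/scope-deactivated/shim-disconnected cycles between 23:16:58 and 23:17:04 are Cilium's init chain running to completion inside the one sandbox 6bf1c9d4…: mount-cgroup, then apply-sysctl-overwrites, then mount-bpf-fs, then clean-cilium-state, each exiting before the next is created, with the long-lived cilium-agent queued next. A sketch that recovers that order from entries shaped like the ones above:

```python
import re

# Pulls container names out of "CreateContainer ... for container
# &ContainerMetadata{Name:...,Attempt:0,}" entries, preserving
# first-seen order; assumes the message layout used in this journal.
NAME = re.compile(r"for container &ContainerMetadata\{Name:([^,]+),")

def creation_order(journal_text):
    order = []
    for name in NAME.findall(journal_text):
        if name not in order:
            order.append(name)
    return order
# For this log: ['kube-proxy', 'mount-cgroup', 'apply-sysctl-overwrites',
#                'mount-bpf-fs', 'cilium-operator', 'clean-cilium-state', ...]
```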
Feb 8 23:17:05.397331 env[1331]: time="2024-02-08T23:17:05.397275292Z" level=info msg="CreateContainer within sandbox \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\"" Feb 8 23:17:05.399604 env[1331]: time="2024-02-08T23:17:05.399566502Z" level=info msg="StartContainer for \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\"" Feb 8 23:17:05.424583 systemd[1]: Started cri-containerd-397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6.scope. Feb 8 23:17:05.465050 env[1331]: time="2024-02-08T23:17:05.464877182Z" level=info msg="StartContainer for \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\" returns successfully" Feb 8 23:17:05.574897 kubelet[2432]: I0208 23:17:05.573702 2432 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 8 23:17:05.608195 kubelet[2432]: I0208 23:17:05.608070 2432 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:17:05.612085 kubelet[2432]: I0208 23:17:05.612061 2432 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:17:05.615498 systemd[1]: Created slice kubepods-burstable-pod696de8c9_4b5f_401b_b050_ef9824a769b0.slice. Feb 8 23:17:05.622368 systemd[1]: Created slice kubepods-burstable-podf37c7a21_7bb5_4190_894b_ee540617e668.slice. Feb 8 23:17:05.732921 kubelet[2432]: I0208 23:17:05.732880 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/696de8c9-4b5f-401b-b050-ef9824a769b0-config-volume\") pod \"coredns-5d78c9869d-xql9c\" (UID: \"696de8c9-4b5f-401b-b050-ef9824a769b0\") " pod="kube-system/coredns-5d78c9869d-xql9c" Feb 8 23:17:05.733100 kubelet[2432]: I0208 23:17:05.732950 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f37c7a21-7bb5-4190-894b-ee540617e668-config-volume\") pod \"coredns-5d78c9869d-2dzqx\" (UID: \"f37c7a21-7bb5-4190-894b-ee540617e668\") " pod="kube-system/coredns-5d78c9869d-2dzqx" Feb 8 23:17:05.733100 kubelet[2432]: I0208 23:17:05.732994 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdvrs\" (UniqueName: \"kubernetes.io/projected/696de8c9-4b5f-401b-b050-ef9824a769b0-kube-api-access-sdvrs\") pod \"coredns-5d78c9869d-xql9c\" (UID: \"696de8c9-4b5f-401b-b050-ef9824a769b0\") " pod="kube-system/coredns-5d78c9869d-xql9c" Feb 8 23:17:05.733100 kubelet[2432]: I0208 23:17:05.733021 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwkj2\" (UniqueName: \"kubernetes.io/projected/f37c7a21-7bb5-4190-894b-ee540617e668-kube-api-access-zwkj2\") pod \"coredns-5d78c9869d-2dzqx\" (UID: \"f37c7a21-7bb5-4190-894b-ee540617e668\") " pod="kube-system/coredns-5d78c9869d-2dzqx" Feb 8 23:17:05.919317 env[1331]: time="2024-02-08T23:17:05.919197029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-xql9c,Uid:696de8c9-4b5f-401b-b050-ef9824a769b0,Namespace:kube-system,Attempt:0,}" Feb 8 23:17:05.929549 env[1331]: time="2024-02-08T23:17:05.929509174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-2dzqx,Uid:f37c7a21-7bb5-4190-894b-ee540617e668,Namespace:kube-system,Attempt:0,}" Feb 8 23:17:06.389527 kubelet[2432]: I0208 
23:17:06.389486 2432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-7q57n" podStartSLOduration=10.146189403 podCreationTimestamp="2024-02-08 23:16:48 +0000 UTC" firstStartedPulling="2024-02-08 23:16:50.085842192 +0000 UTC m=+18.000319251" lastFinishedPulling="2024-02-08 23:16:58.329088806 +0000 UTC m=+26.243565965" observedRunningTime="2024-02-08 23:17:06.388160811 +0000 UTC m=+34.302637970" watchObservedRunningTime="2024-02-08 23:17:06.389436117 +0000 UTC m=+34.303913176" Feb 8 23:17:07.961607 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 8 23:17:07.961713 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 8 23:17:07.962185 systemd-networkd[1484]: cilium_host: Link UP Feb 8 23:17:07.965437 systemd-networkd[1484]: cilium_net: Link UP Feb 8 23:17:07.966212 systemd-networkd[1484]: cilium_net: Gained carrier Feb 8 23:17:07.966407 systemd-networkd[1484]: cilium_host: Gained carrier Feb 8 23:17:08.168126 systemd-networkd[1484]: cilium_net: Gained IPv6LL Feb 8 23:17:08.245748 systemd-networkd[1484]: cilium_vxlan: Link UP Feb 8 23:17:08.245760 systemd-networkd[1484]: cilium_vxlan: Gained carrier Feb 8 23:17:08.400191 systemd-networkd[1484]: cilium_host: Gained IPv6LL Feb 8 23:17:08.536978 kernel: NET: Registered PF_ALG protocol family Feb 8 23:17:09.401120 systemd-networkd[1484]: lxc_health: Link UP Feb 8 23:17:09.420004 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:17:09.420407 systemd-networkd[1484]: lxc_health: Gained carrier Feb 8 23:17:09.896170 systemd-networkd[1484]: cilium_vxlan: Gained IPv6LL Feb 8 23:17:10.007493 systemd-networkd[1484]: lxc6c66103f3e16: Link UP Feb 8 23:17:10.014972 kernel: eth0: renamed from tmp577ab Feb 8 23:17:10.023498 systemd-networkd[1484]: lxc6c66103f3e16: Gained carrier Feb 8 23:17:10.023990 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc6c66103f3e16: link becomes ready Feb 8 23:17:10.030068 systemd-networkd[1484]: lxc76cd30cefde5: Link UP Feb 8 23:17:10.044986 kernel: eth0: renamed from tmpd4625 Feb 8 23:17:10.053009 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc76cd30cefde5: link becomes ready Feb 8 23:17:10.057431 systemd-networkd[1484]: lxc76cd30cefde5: Gained carrier Feb 8 23:17:10.600140 systemd-networkd[1484]: lxc_health: Gained IPv6LL Feb 8 23:17:11.560100 systemd-networkd[1484]: lxc76cd30cefde5: Gained IPv6LL Feb 8 23:17:11.752101 systemd-networkd[1484]: lxc6c66103f3e16: Gained IPv6LL Feb 8 23:17:13.680454 env[1331]: time="2024-02-08T23:17:13.680381297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:17:13.680902 env[1331]: time="2024-02-08T23:17:13.680468397Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:17:13.680902 env[1331]: time="2024-02-08T23:17:13.680506797Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:17:13.680902 env[1331]: time="2024-02-08T23:17:13.680806398Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:17:13.680902 env[1331]: time="2024-02-08T23:17:13.680868299Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:17:13.680902 env[1331]: time="2024-02-08T23:17:13.680892999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:17:13.681146 env[1331]: time="2024-02-08T23:17:13.680991999Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4625765e6b1d5f30458fba28c24c9bbb769b369204b15dc62567fcf277b9131 pid=3610 runtime=io.containerd.runc.v2 Feb 8 23:17:13.681440 env[1331]: time="2024-02-08T23:17:13.681391301Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/577ab181b5e0c0207a09d386c3c9958a597d8f7c9fbef32d357102160967c54f pid=3603 runtime=io.containerd.runc.v2 Feb 8 23:17:13.709984 systemd[1]: Started cri-containerd-577ab181b5e0c0207a09d386c3c9958a597d8f7c9fbef32d357102160967c54f.scope. Feb 8 23:17:13.726064 systemd[1]: Started cri-containerd-d4625765e6b1d5f30458fba28c24c9bbb769b369204b15dc62567fcf277b9131.scope. Feb 8 23:17:13.792263 env[1331]: time="2024-02-08T23:17:13.791675615Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-xql9c,Uid:696de8c9-4b5f-401b-b050-ef9824a769b0,Namespace:kube-system,Attempt:0,} returns sandbox id \"577ab181b5e0c0207a09d386c3c9958a597d8f7c9fbef32d357102160967c54f\"" Feb 8 23:17:13.804132 env[1331]: time="2024-02-08T23:17:13.804089161Z" level=info msg="CreateContainer within sandbox \"577ab181b5e0c0207a09d386c3c9958a597d8f7c9fbef32d357102160967c54f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:17:13.845966 env[1331]: time="2024-02-08T23:17:13.844552513Z" level=info msg="CreateContainer within sandbox \"577ab181b5e0c0207a09d386c3c9958a597d8f7c9fbef32d357102160967c54f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"161ffd4d9c64e08390a9f1a42122c00860c5a394c97ae486958f096858579c01\"" Feb 8 23:17:13.845966 env[1331]: time="2024-02-08T23:17:13.845370917Z" level=info msg="StartContainer for \"161ffd4d9c64e08390a9f1a42122c00860c5a394c97ae486958f096858579c01\"" Feb 8 23:17:13.847641 env[1331]: time="2024-02-08T23:17:13.847601125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-2dzqx,Uid:f37c7a21-7bb5-4190-894b-ee540617e668,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4625765e6b1d5f30458fba28c24c9bbb769b369204b15dc62567fcf277b9131\"" Feb 8 23:17:13.855288 env[1331]: time="2024-02-08T23:17:13.855258954Z" level=info msg="CreateContainer within sandbox \"d4625765e6b1d5f30458fba28c24c9bbb769b369204b15dc62567fcf277b9131\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 8 23:17:13.872223 systemd[1]: Started cri-containerd-161ffd4d9c64e08390a9f1a42122c00860c5a394c97ae486958f096858579c01.scope. Feb 8 23:17:13.895644 env[1331]: time="2024-02-08T23:17:13.895586905Z" level=info msg="CreateContainer within sandbox \"d4625765e6b1d5f30458fba28c24c9bbb769b369204b15dc62567fcf277b9131\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"22f5db02efabbeb34d0d758dac4fcb6bddc2ac34347f6f2fc0d18484ddb76cbc\"" Feb 8 23:17:13.896482 env[1331]: time="2024-02-08T23:17:13.896449108Z" level=info msg="StartContainer for \"22f5db02efabbeb34d0d758dac4fcb6bddc2ac34347f6f2fc0d18484ddb76cbc\"" Feb 8 23:17:13.932878 systemd[1]: Started cri-containerd-22f5db02efabbeb34d0d758dac4fcb6bddc2ac34347f6f2fc0d18484ddb76cbc.scope. 
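One cross-reference worth noticing in the interface and sandbox entries above: the kernel's "eth0: renamed from tmp577ab" and "renamed from tmpd4625" messages line up with the coredns sandbox IDs 577ab181… and d4625765…, so the short-lived tmp names appear to be built from a prefix of the sandbox ID. That makes it possible to tie host-side veth events back to pods; a small matching sketch (Python 3.9+ for removeprefix):

```python
sandboxes = [
    "577ab181b5e0c0207a09d386c3c9958a597d8f7c9fbef32d357102160967c54f",
    "d4625765e6b1d5f30458fba28c24c9bbb769b369204b15dc62567fcf277b9131",
]

def match_tmp(tmp_name, sandbox_ids):
    """Map a tmpXXXX interface name to sandbox IDs sharing its prefix."""
    prefix = tmp_name.removeprefix("tmp")
    return [s for s in sandbox_ids if s.startswith(prefix)]

print(match_tmp("tmp577ab", sandboxes))  # -> the coredns-...-xql9c sandbox
```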
Feb 8 23:17:13.940799 env[1331]: time="2024-02-08T23:17:13.940749275Z" level=info msg="StartContainer for \"161ffd4d9c64e08390a9f1a42122c00860c5a394c97ae486958f096858579c01\" returns successfully" Feb 8 23:17:13.975594 env[1331]: time="2024-02-08T23:17:13.975544606Z" level=info msg="StartContainer for \"22f5db02efabbeb34d0d758dac4fcb6bddc2ac34347f6f2fc0d18484ddb76cbc\" returns successfully" Feb 8 23:17:14.400667 kubelet[2432]: I0208 23:17:14.400627 2432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-xql9c" podStartSLOduration=26.400584579 podCreationTimestamp="2024-02-08 23:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:17:14.399371775 +0000 UTC m=+42.313848934" watchObservedRunningTime="2024-02-08 23:17:14.400584579 +0000 UTC m=+42.315061738" Feb 8 23:17:14.401265 kubelet[2432]: I0208 23:17:14.400725 2432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-2dzqx" podStartSLOduration=26.40070478 podCreationTimestamp="2024-02-08 23:16:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:17:14.386598628 +0000 UTC m=+42.301075687" watchObservedRunningTime="2024-02-08 23:17:14.40070478 +0000 UTC m=+42.315181839" Feb 8 23:17:14.688116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2503711690.mount: Deactivated successfully. Feb 8 23:19:20.718987 update_engine[1318]: I0208 23:19:20.718850 1318 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 8 23:19:20.718987 update_engine[1318]: I0208 23:19:20.718896 1318 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 8 23:19:20.719640 update_engine[1318]: I0208 23:19:20.719084 1318 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 8 23:19:20.719701 update_engine[1318]: I0208 23:19:20.719657 1318 omaha_request_params.cc:62] Current group set to lts Feb 8 23:19:20.720264 update_engine[1318]: I0208 23:19:20.719862 1318 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 8 23:19:20.720264 update_engine[1318]: I0208 23:19:20.719877 1318 update_attempter.cc:643] Scheduling an action processor start. 
Feb 8 23:19:20.720264 update_engine[1318]: I0208 23:19:20.719926 1318 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 8 23:19:20.720264 update_engine[1318]: I0208 23:19:20.719991 1318 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 8 23:19:20.720264 update_engine[1318]: I0208 23:19:20.720062 1318 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 8 23:19:20.720264 update_engine[1318]: I0208 23:19:20.720069 1318 omaha_request_action.cc:271] Request: Feb 8 23:19:20.720264 update_engine[1318]: Feb 8 23:19:20.720264 update_engine[1318]: Feb 8 23:19:20.720264 update_engine[1318]: Feb 8 23:19:20.720264 update_engine[1318]: Feb 8 23:19:20.720264 update_engine[1318]: Feb 8 23:19:20.720264 update_engine[1318]: Feb 8 23:19:20.720264 update_engine[1318]: Feb 8 23:19:20.720264 update_engine[1318]: Feb 8 23:19:20.720264 update_engine[1318]: I0208 23:19:20.720074 1318 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 8 23:19:20.720901 locksmithd[1409]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 8 23:19:20.721623 update_engine[1318]: I0208 23:19:20.721473 1318 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 8 23:19:20.721747 update_engine[1318]: I0208 23:19:20.721724 1318 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 8 23:19:20.792861 update_engine[1318]: E0208 23:19:20.792819 1318 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 8 23:19:20.793030 update_engine[1318]: I0208 23:19:20.792999 1318 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 8 23:19:30.648101 update_engine[1318]: I0208 23:19:30.648038 1318 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 8 23:19:30.648687 update_engine[1318]: I0208 23:19:30.648342 1318 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 8 23:19:30.648687 update_engine[1318]: I0208 23:19:30.648586 1318 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 8 23:19:30.669800 update_engine[1318]: E0208 23:19:30.669761 1318 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 8 23:19:30.669982 update_engine[1318]: I0208 23:19:30.669888 1318 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 8 23:19:40.652066 update_engine[1318]: I0208 23:19:40.652001 1318 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 8 23:19:40.652516 update_engine[1318]: I0208 23:19:40.652278 1318 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 8 23:19:40.652516 update_engine[1318]: I0208 23:19:40.652493 1318 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 8 23:19:40.677717 update_engine[1318]: E0208 23:19:40.677680 1318 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 8 23:19:40.677864 update_engine[1318]: I0208 23:19:40.677806 1318 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Feb 8 23:19:50.644186 update_engine[1318]: I0208 23:19:50.644124 1318 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 8 23:19:50.648559 update_engine[1318]: I0208 23:19:50.644420 1318 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 8 23:19:50.648559 update_engine[1318]: I0208 23:19:50.644683 1318 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Feb 8 23:19:50.652181 update_engine[1318]: E0208 23:19:50.652158 1318 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 8 23:19:50.652284 update_engine[1318]: I0208 23:19:50.652256 1318 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 8 23:19:50.652284 update_engine[1318]: I0208 23:19:50.652264 1318 omaha_request_action.cc:621] Omaha request response: Feb 8 23:19:50.652368 update_engine[1318]: E0208 23:19:50.652340 1318 omaha_request_action.cc:640] Omaha request network transfer failed. Feb 8 23:19:50.652368 update_engine[1318]: I0208 23:19:50.652354 1318 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Feb 8 23:19:50.652368 update_engine[1318]: I0208 23:19:50.652359 1318 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 8 23:19:50.652368 update_engine[1318]: I0208 23:19:50.652364 1318 update_attempter.cc:306] Processing Done. Feb 8 23:19:50.652516 update_engine[1318]: E0208 23:19:50.652379 1318 update_attempter.cc:619] Update failed. Feb 8 23:19:50.652516 update_engine[1318]: I0208 23:19:50.652383 1318 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Feb 8 23:19:50.652516 update_engine[1318]: I0208 23:19:50.652390 1318 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Feb 8 23:19:50.652516 update_engine[1318]: I0208 23:19:50.652395 1318 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Feb 8 23:19:50.652516 update_engine[1318]: I0208 23:19:50.652477 1318 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 8 23:19:50.652516 update_engine[1318]: I0208 23:19:50.652498 1318 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 8 23:19:50.652516 update_engine[1318]: I0208 23:19:50.652503 1318 omaha_request_action.cc:271] Request: Feb 8 23:19:50.652516 update_engine[1318]: Feb 8 23:19:50.652516 update_engine[1318]: Feb 8 23:19:50.652516 update_engine[1318]: Feb 8 23:19:50.652516 update_engine[1318]: Feb 8 23:19:50.652516 update_engine[1318]: Feb 8 23:19:50.652516 update_engine[1318]: Feb 8 23:19:50.652516 update_engine[1318]: I0208 23:19:50.652508 1318 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 8 23:19:50.653049 update_engine[1318]: I0208 23:19:50.652647 1318 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 8 23:19:50.653049 update_engine[1318]: I0208 23:19:50.652783 1318 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
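The update_engine block above is one complete, failed Omaha cycle: the request is posted to the literal host "disabled" (the update server configured on this image), DNS naturally cannot resolve it, the fetcher retries three times roughly ten seconds apart, the attempt is then converted from error 2000 to kActionCodeOmahaErrorInHTTPResponse (37) and reported, and the scheduler books the next check 41m16s out. A behavioral sketch; the retry count and spacing are read off the timestamps above, and fetch_omaha is a stand-in for the libcurl fetcher, not a real API:

```python
import time

MAX_RETRIES = 3      # retries 1..3 are visible in the log above
RETRY_DELAY_S = 10   # attempts land ~10s apart (23:19:20/30/40/50)

def fetch_omaha(url):
    # Stand-in for the HTTP fetcher; the host "disabled" never resolves.
    raise OSError("Could not resolve host: disabled")

def update_check(url="https://disabled/"):
    attempts = 1 + MAX_RETRIES
    for attempt in range(1, attempts + 1):
        try:
            return fetch_omaha(url)
        except OSError:
            if attempt < attempts:
                time.sleep(RETRY_DELAY_S)
    return "kActionCodeOmahaErrorInHTTPResponse"  # error code 37
```

The second cycle at 23:19:50 is the error-event upload itself, which fails to resolve the same host, after which the attempter goes idle until the next scheduled check.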
Feb 8 23:19:50.653155 locksmithd[1409]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Feb 8 23:19:50.667822 update_engine[1318]: E0208 23:19:50.667588 1318 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 8 23:19:50.667822 update_engine[1318]: I0208 23:19:50.667691 1318 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Feb 8 23:19:50.667822 update_engine[1318]: I0208 23:19:50.667702 1318 omaha_request_action.cc:621] Omaha request response: Feb 8 23:19:50.667822 update_engine[1318]: I0208 23:19:50.667708 1318 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 8 23:19:50.667822 update_engine[1318]: I0208 23:19:50.667713 1318 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Feb 8 23:19:50.667822 update_engine[1318]: I0208 23:19:50.667717 1318 update_attempter.cc:306] Processing Done. Feb 8 23:19:50.667822 update_engine[1318]: I0208 23:19:50.667723 1318 update_attempter.cc:310] Error event sent. Feb 8 23:19:50.667822 update_engine[1318]: I0208 23:19:50.667732 1318 update_check_scheduler.cc:74] Next update check in 41m16s Feb 8 23:19:50.668205 locksmithd[1409]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Feb 8 23:20:32.095876 systemd[1]: Started sshd@5-10.200.8.40:22-10.200.12.6:47516.service. Feb 8 23:20:32.720301 sshd[3789]: Accepted publickey for core from 10.200.12.6 port 47516 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:20:32.721899 sshd[3789]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:20:32.728003 systemd-logind[1316]: New session 8 of user core. Feb 8 23:20:32.728635 systemd[1]: Started session-8.scope. Feb 8 23:20:33.322230 sshd[3789]: pam_unix(sshd:session): session closed for user core Feb 8 23:20:33.325450 systemd[1]: sshd@5-10.200.8.40:22-10.200.12.6:47516.service: Deactivated successfully. Feb 8 23:20:33.326606 systemd[1]: session-8.scope: Deactivated successfully. Feb 8 23:20:33.327458 systemd-logind[1316]: Session 8 logged out. Waiting for processes to exit. Feb 8 23:20:33.328452 systemd-logind[1316]: Removed session 8. Feb 8 23:20:38.427176 systemd[1]: Started sshd@6-10.200.8.40:22-10.200.12.6:53360.service. Feb 8 23:20:39.048874 sshd[3816]: Accepted publickey for core from 10.200.12.6 port 53360 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:20:39.050551 sshd[3816]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:20:39.056446 systemd-logind[1316]: New session 9 of user core. Feb 8 23:20:39.057076 systemd[1]: Started session-9.scope. Feb 8 23:20:39.545671 sshd[3816]: pam_unix(sshd:session): session closed for user core Feb 8 23:20:39.548865 systemd-logind[1316]: Session 9 logged out. Waiting for processes to exit. Feb 8 23:20:39.549081 systemd[1]: sshd@6-10.200.8.40:22-10.200.12.6:53360.service: Deactivated successfully. Feb 8 23:20:39.550027 systemd[1]: session-9.scope: Deactivated successfully. Feb 8 23:20:39.550854 systemd-logind[1316]: Removed session 9. Feb 8 23:20:44.650419 systemd[1]: Started sshd@7-10.200.8.40:22-10.200.12.6:53370.service. 
Feb 8 23:20:45.273050 sshd[3828]: Accepted publickey for core from 10.200.12.6 port 53370 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:20:45.274520 sshd[3828]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:20:45.279448 systemd[1]: Started session-10.scope. Feb 8 23:20:45.280086 systemd-logind[1316]: New session 10 of user core. Feb 8 23:20:45.760071 sshd[3828]: pam_unix(sshd:session): session closed for user core Feb 8 23:20:45.763411 systemd[1]: sshd@7-10.200.8.40:22-10.200.12.6:53370.service: Deactivated successfully. Feb 8 23:20:45.764575 systemd[1]: session-10.scope: Deactivated successfully. Feb 8 23:20:45.765469 systemd-logind[1316]: Session 10 logged out. Waiting for processes to exit. Feb 8 23:20:45.766380 systemd-logind[1316]: Removed session 10. Feb 8 23:20:50.866093 systemd[1]: Started sshd@8-10.200.8.40:22-10.200.12.6:35054.service. Feb 8 23:20:51.507684 sshd[3843]: Accepted publickey for core from 10.200.12.6 port 35054 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:20:51.509156 sshd[3843]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:20:51.514129 systemd-logind[1316]: New session 11 of user core. Feb 8 23:20:51.514638 systemd[1]: Started session-11.scope. Feb 8 23:20:51.997333 sshd[3843]: pam_unix(sshd:session): session closed for user core Feb 8 23:20:52.001606 systemd[1]: sshd@8-10.200.8.40:22-10.200.12.6:35054.service: Deactivated successfully. Feb 8 23:20:52.002563 systemd[1]: session-11.scope: Deactivated successfully. Feb 8 23:20:52.003441 systemd-logind[1316]: Session 11 logged out. Waiting for processes to exit. Feb 8 23:20:52.004304 systemd-logind[1316]: Removed session 11. Feb 8 23:20:52.103364 systemd[1]: Started sshd@9-10.200.8.40:22-10.200.12.6:35060.service. Feb 8 23:20:52.723922 sshd[3856]: Accepted publickey for core from 10.200.12.6 port 35060 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:20:52.725559 sshd[3856]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:20:52.730012 systemd-logind[1316]: New session 12 of user core. Feb 8 23:20:52.730432 systemd[1]: Started session-12.scope. Feb 8 23:20:53.818716 sshd[3856]: pam_unix(sshd:session): session closed for user core Feb 8 23:20:53.822179 systemd-logind[1316]: Session 12 logged out. Waiting for processes to exit. Feb 8 23:20:53.822619 systemd[1]: sshd@9-10.200.8.40:22-10.200.12.6:35060.service: Deactivated successfully. Feb 8 23:20:53.823536 systemd[1]: session-12.scope: Deactivated successfully. Feb 8 23:20:53.824572 systemd-logind[1316]: Removed session 12. Feb 8 23:20:53.921924 systemd[1]: Started sshd@10-10.200.8.40:22-10.200.12.6:35064.service. Feb 8 23:20:54.536871 sshd[3866]: Accepted publickey for core from 10.200.12.6 port 35064 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:20:54.538561 sshd[3866]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:20:54.543016 systemd-logind[1316]: New session 13 of user core. Feb 8 23:20:54.545283 systemd[1]: Started session-13.scope. Feb 8 23:20:55.024210 sshd[3866]: pam_unix(sshd:session): session closed for user core Feb 8 23:20:55.027009 systemd[1]: sshd@10-10.200.8.40:22-10.200.12.6:35064.service: Deactivated successfully. Feb 8 23:20:55.027979 systemd[1]: session-13.scope: Deactivated successfully. Feb 8 23:20:55.028662 systemd-logind[1316]: Session 13 logged out. Waiting for processes to exit. 
Feb 8 23:20:55.029611 systemd-logind[1316]: Removed session 13. Feb 8 23:21:00.145496 systemd[1]: Started sshd@11-10.200.8.40:22-10.200.12.6:36534.service. Feb 8 23:21:00.770472 sshd[3879]: Accepted publickey for core from 10.200.12.6 port 36534 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:00.771844 sshd[3879]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:00.776685 systemd[1]: Started session-14.scope. Feb 8 23:21:00.777310 systemd-logind[1316]: New session 14 of user core. Feb 8 23:21:01.287547 sshd[3879]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:01.290960 systemd[1]: sshd@11-10.200.8.40:22-10.200.12.6:36534.service: Deactivated successfully. Feb 8 23:21:01.292102 systemd[1]: session-14.scope: Deactivated successfully. Feb 8 23:21:01.292994 systemd-logind[1316]: Session 14 logged out. Waiting for processes to exit. Feb 8 23:21:01.293787 systemd-logind[1316]: Removed session 14. Feb 8 23:21:06.394466 systemd[1]: Started sshd@12-10.200.8.40:22-10.200.12.6:36536.service. Feb 8 23:21:07.021589 sshd[3891]: Accepted publickey for core from 10.200.12.6 port 36536 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:07.022186 sshd[3891]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:07.027045 systemd-logind[1316]: New session 15 of user core. Feb 8 23:21:07.027189 systemd[1]: Started session-15.scope. Feb 8 23:21:07.516151 sshd[3891]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:07.518789 systemd[1]: sshd@12-10.200.8.40:22-10.200.12.6:36536.service: Deactivated successfully. Feb 8 23:21:07.520063 systemd[1]: session-15.scope: Deactivated successfully. Feb 8 23:21:07.520092 systemd-logind[1316]: Session 15 logged out. Waiting for processes to exit. Feb 8 23:21:07.521164 systemd-logind[1316]: Removed session 15. Feb 8 23:21:07.618258 systemd[1]: Started sshd@13-10.200.8.40:22-10.200.12.6:46806.service. Feb 8 23:21:08.234900 sshd[3903]: Accepted publickey for core from 10.200.12.6 port 46806 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:08.236267 sshd[3903]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:08.241421 systemd[1]: Started session-16.scope. Feb 8 23:21:08.241627 systemd-logind[1316]: New session 16 of user core. Feb 8 23:21:08.911921 sshd[3903]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:08.915438 systemd[1]: sshd@13-10.200.8.40:22-10.200.12.6:46806.service: Deactivated successfully. Feb 8 23:21:08.916580 systemd[1]: session-16.scope: Deactivated successfully. Feb 8 23:21:08.917438 systemd-logind[1316]: Session 16 logged out. Waiting for processes to exit. Feb 8 23:21:08.918460 systemd-logind[1316]: Removed session 16. Feb 8 23:21:09.020410 systemd[1]: Started sshd@14-10.200.8.40:22-10.200.12.6:46820.service. Feb 8 23:21:09.678823 sshd[3912]: Accepted publickey for core from 10.200.12.6 port 46820 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:09.680263 sshd[3912]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:09.685156 systemd[1]: Started session-17.scope. Feb 8 23:21:09.685754 systemd-logind[1316]: New session 17 of user core. Feb 8 23:21:11.309645 sshd[3912]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:11.312862 systemd[1]: sshd@14-10.200.8.40:22-10.200.12.6:46820.service: Deactivated successfully. 
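From 23:20:32 onward the journal settles into a steady SSH pattern: each connection from 10.200.12.6 spawns a per-connection sshd@N-<local>:22-<peer>:<port>.service unit, pam_unix opens a session for user core, logind creates session-N.scope, and teardown runs in reverse on disconnect. Pairing the opened/closed lines by sshd PID measures how long each session lasted; a sketch assuming one journal entry per line and the timestamp layout used here (the year is absent from the log, so it is passed in):

```python
import re
from datetime import datetime

LINE = re.compile(
    r"^(\w+ +\d+ \d+:\d+:\d+\.\d+) sshd\[(\d+)\]: "
    r"pam_unix\(sshd:session\): session (opened|closed)"
)

def session_durations(lines, year=2024):
    opened, durations = {}, []
    for line in lines:
        m = LINE.match(line)
        if not m:
            continue
        ts = datetime.strptime(f"{year} {m.group(1)}", "%Y %b %d %H:%M:%S.%f")
        if m.group(3) == "opened":
            opened[m.group(2)] = ts
        elif m.group(2) in opened:
            start = opened.pop(m.group(2))
            durations.append((m.group(2), (ts - start).total_seconds()))
    return durations
```

Session 8 above, for instance, ran from 23:20:32.72 to 23:20:33.32, about 0.6 seconds; these look like scripted logins rather than interactive shells.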
Feb 8 23:21:11.314385 systemd[1]: session-17.scope: Deactivated successfully. Feb 8 23:21:11.314452 systemd-logind[1316]: Session 17 logged out. Waiting for processes to exit. Feb 8 23:21:11.315629 systemd-logind[1316]: Removed session 17. Feb 8 23:21:11.412853 systemd[1]: Started sshd@15-10.200.8.40:22-10.200.12.6:46822.service. Feb 8 23:21:12.033608 sshd[3929]: Accepted publickey for core from 10.200.12.6 port 46822 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:12.035220 sshd[3929]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:12.040009 systemd-logind[1316]: New session 18 of user core. Feb 8 23:21:12.040492 systemd[1]: Started session-18.scope. Feb 8 23:21:12.732754 sshd[3929]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:12.735929 systemd[1]: sshd@15-10.200.8.40:22-10.200.12.6:46822.service: Deactivated successfully. Feb 8 23:21:12.737193 systemd[1]: session-18.scope: Deactivated successfully. Feb 8 23:21:12.737214 systemd-logind[1316]: Session 18 logged out. Waiting for processes to exit. Feb 8 23:21:12.738340 systemd-logind[1316]: Removed session 18. Feb 8 23:21:12.838435 systemd[1]: Started sshd@16-10.200.8.40:22-10.200.12.6:46828.service. Feb 8 23:21:13.461839 sshd[3940]: Accepted publickey for core from 10.200.12.6 port 46828 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:13.463364 sshd[3940]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:13.468175 systemd-logind[1316]: New session 19 of user core. Feb 8 23:21:13.468658 systemd[1]: Started session-19.scope. Feb 8 23:21:13.954628 sshd[3940]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:13.957954 systemd[1]: sshd@16-10.200.8.40:22-10.200.12.6:46828.service: Deactivated successfully. Feb 8 23:21:13.959100 systemd[1]: session-19.scope: Deactivated successfully. Feb 8 23:21:13.959928 systemd-logind[1316]: Session 19 logged out. Waiting for processes to exit. Feb 8 23:21:13.960981 systemd-logind[1316]: Removed session 19. Feb 8 23:21:19.067023 systemd[1]: Started sshd@17-10.200.8.40:22-10.200.12.6:40582.service. Feb 8 23:21:19.683176 sshd[3955]: Accepted publickey for core from 10.200.12.6 port 40582 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:19.684736 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:19.689421 systemd-logind[1316]: New session 20 of user core. Feb 8 23:21:19.690226 systemd[1]: Started session-20.scope. Feb 8 23:21:20.173989 sshd[3955]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:20.177319 systemd[1]: sshd@17-10.200.8.40:22-10.200.12.6:40582.service: Deactivated successfully. Feb 8 23:21:20.178447 systemd[1]: session-20.scope: Deactivated successfully. Feb 8 23:21:20.179375 systemd-logind[1316]: Session 20 logged out. Waiting for processes to exit. Feb 8 23:21:20.180438 systemd-logind[1316]: Removed session 20. Feb 8 23:21:25.280976 systemd[1]: Started sshd@18-10.200.8.40:22-10.200.12.6:40594.service. Feb 8 23:21:25.906638 sshd[3969]: Accepted publickey for core from 10.200.12.6 port 40594 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:25.908080 sshd[3969]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:25.913866 systemd-logind[1316]: New session 21 of user core. Feb 8 23:21:25.913901 systemd[1]: Started session-21.scope. 
Feb 8 23:21:26.398413 sshd[3969]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:26.401835 systemd[1]: sshd@18-10.200.8.40:22-10.200.12.6:40594.service: Deactivated successfully. Feb 8 23:21:26.403037 systemd[1]: session-21.scope: Deactivated successfully. Feb 8 23:21:26.403889 systemd-logind[1316]: Session 21 logged out. Waiting for processes to exit. Feb 8 23:21:26.404867 systemd-logind[1316]: Removed session 21. Feb 8 23:21:31.504449 systemd[1]: Started sshd@19-10.200.8.40:22-10.200.12.6:60776.service. Feb 8 23:21:32.134614 sshd[3981]: Accepted publickey for core from 10.200.12.6 port 60776 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:32.136322 sshd[3981]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:32.141803 systemd-logind[1316]: New session 22 of user core. Feb 8 23:21:32.142678 systemd[1]: Started session-22.scope. Feb 8 23:21:32.629587 sshd[3981]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:32.632931 systemd[1]: sshd@19-10.200.8.40:22-10.200.12.6:60776.service: Deactivated successfully. Feb 8 23:21:32.634057 systemd[1]: session-22.scope: Deactivated successfully. Feb 8 23:21:32.634718 systemd-logind[1316]: Session 22 logged out. Waiting for processes to exit. Feb 8 23:21:32.635532 systemd-logind[1316]: Removed session 22. Feb 8 23:21:32.734536 systemd[1]: Started sshd@20-10.200.8.40:22-10.200.12.6:60778.service. Feb 8 23:21:33.349929 sshd[3995]: Accepted publickey for core from 10.200.12.6 port 60778 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:33.351432 sshd[3995]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:33.356733 systemd[1]: Started session-23.scope. Feb 8 23:21:33.357372 systemd-logind[1316]: New session 23 of user core. Feb 8 23:21:35.052308 systemd[1]: run-containerd-runc-k8s.io-397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6-runc.50qtd0.mount: Deactivated successfully. Feb 8 23:21:35.055045 env[1331]: time="2024-02-08T23:21:35.054998790Z" level=info msg="StopContainer for \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\" with timeout 30 (s)" Feb 8 23:21:35.055530 env[1331]: time="2024-02-08T23:21:35.055497094Z" level=info msg="Stop container \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\" with signal terminated" Feb 8 23:21:35.075227 systemd[1]: cri-containerd-16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd.scope: Deactivated successfully. Feb 8 23:21:35.085762 env[1331]: time="2024-02-08T23:21:35.085677231Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:21:35.094331 env[1331]: time="2024-02-08T23:21:35.094288599Z" level=info msg="StopContainer for \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\" with timeout 1 (s)" Feb 8 23:21:35.094648 env[1331]: time="2024-02-08T23:21:35.094618802Z" level=info msg="Stop container \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\" with signal terminated" Feb 8 23:21:35.104009 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd-rootfs.mount: Deactivated successfully. 
Feb 8 23:21:35.105695 systemd-networkd[1484]: lxc_health: Link DOWN Feb 8 23:21:35.105700 systemd-networkd[1484]: lxc_health: Lost carrier Feb 8 23:21:35.128394 systemd[1]: cri-containerd-397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6.scope: Deactivated successfully. Feb 8 23:21:35.128669 systemd[1]: cri-containerd-397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6.scope: Consumed 7.307s CPU time. Feb 8 23:21:35.148155 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6-rootfs.mount: Deactivated successfully. Feb 8 23:21:35.169676 env[1331]: time="2024-02-08T23:21:35.169624292Z" level=info msg="shim disconnected" id=16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd Feb 8 23:21:35.169865 env[1331]: time="2024-02-08T23:21:35.169677392Z" level=warning msg="cleaning up after shim disconnected" id=16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd namespace=k8s.io Feb 8 23:21:35.169865 env[1331]: time="2024-02-08T23:21:35.169689292Z" level=info msg="cleaning up dead shim" Feb 8 23:21:35.170078 env[1331]: time="2024-02-08T23:21:35.170043695Z" level=info msg="shim disconnected" id=397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6 Feb 8 23:21:35.170157 env[1331]: time="2024-02-08T23:21:35.170079095Z" level=warning msg="cleaning up after shim disconnected" id=397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6 namespace=k8s.io Feb 8 23:21:35.170157 env[1331]: time="2024-02-08T23:21:35.170090596Z" level=info msg="cleaning up dead shim" Feb 8 23:21:35.180135 env[1331]: time="2024-02-08T23:21:35.180106174Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4065 runtime=io.containerd.runc.v2\n" Feb 8 23:21:35.182801 env[1331]: time="2024-02-08T23:21:35.182763495Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4066 runtime=io.containerd.runc.v2\n" Feb 8 23:21:35.184511 env[1331]: time="2024-02-08T23:21:35.184480109Z" level=info msg="StopContainer for \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\" returns successfully" Feb 8 23:21:35.185222 env[1331]: time="2024-02-08T23:21:35.185195614Z" level=info msg="StopPodSandbox for \"f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19\"" Feb 8 23:21:35.188068 env[1331]: time="2024-02-08T23:21:35.185261215Z" level=info msg="Container to stop \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:21:35.189872 env[1331]: time="2024-02-08T23:21:35.189781751Z" level=info msg="StopContainer for \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\" returns successfully" Feb 8 23:21:35.192198 env[1331]: time="2024-02-08T23:21:35.192170769Z" level=info msg="StopPodSandbox for \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\"" Feb 8 23:21:35.193451 env[1331]: time="2024-02-08T23:21:35.193343079Z" level=info msg="Container to stop \"845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:21:35.193677 env[1331]: time="2024-02-08T23:21:35.193637881Z" level=info msg="Container to stop \"622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723\" must be in running or unknown state, current state 
\"CONTAINER_EXITED\"" Feb 8 23:21:35.193838 env[1331]: time="2024-02-08T23:21:35.193796682Z" level=info msg="Container to stop \"28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:21:35.193971 env[1331]: time="2024-02-08T23:21:35.193928183Z" level=info msg="Container to stop \"43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:21:35.194791 env[1331]: time="2024-02-08T23:21:35.194765590Z" level=info msg="Container to stop \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:21:35.198296 systemd[1]: cri-containerd-f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19.scope: Deactivated successfully. Feb 8 23:21:35.209453 systemd[1]: cri-containerd-6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881.scope: Deactivated successfully. Feb 8 23:21:35.243155 env[1331]: time="2024-02-08T23:21:35.243106170Z" level=info msg="shim disconnected" id=f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19 Feb 8 23:21:35.243364 env[1331]: time="2024-02-08T23:21:35.243157471Z" level=warning msg="cleaning up after shim disconnected" id=f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19 namespace=k8s.io Feb 8 23:21:35.243364 env[1331]: time="2024-02-08T23:21:35.243169571Z" level=info msg="cleaning up dead shim" Feb 8 23:21:35.243364 env[1331]: time="2024-02-08T23:21:35.243335872Z" level=info msg="shim disconnected" id=6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881 Feb 8 23:21:35.243508 env[1331]: time="2024-02-08T23:21:35.243370172Z" level=warning msg="cleaning up after shim disconnected" id=6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881 namespace=k8s.io Feb 8 23:21:35.243508 env[1331]: time="2024-02-08T23:21:35.243380772Z" level=info msg="cleaning up dead shim" Feb 8 23:21:35.256317 env[1331]: time="2024-02-08T23:21:35.256273674Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4130 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:21:35Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" Feb 8 23:21:35.256815 env[1331]: time="2024-02-08T23:21:35.256781178Z" level=info msg="TearDown network for sandbox \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\" successfully" Feb 8 23:21:35.256977 env[1331]: time="2024-02-08T23:21:35.256926979Z" level=info msg="StopPodSandbox for \"6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881\" returns successfully" Feb 8 23:21:35.257100 env[1331]: time="2024-02-08T23:21:35.257075780Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4129 runtime=io.containerd.runc.v2\n" Feb 8 23:21:35.257599 env[1331]: time="2024-02-08T23:21:35.257338582Z" level=info msg="TearDown network for sandbox \"f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19\" successfully" Feb 8 23:21:35.257599 env[1331]: time="2024-02-08T23:21:35.257363182Z" level=info msg="StopPodSandbox for \"f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19\" returns successfully" Feb 8 23:21:35.391320 kubelet[2432]: I0208 23:21:35.391270 
2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-cilium-cgroup\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.391872 kubelet[2432]: I0208 23:21:35.391448 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eabe1d2a-bed6-49db-bd12-ea72995180a2-hubble-tls\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.391872 kubelet[2432]: I0208 23:21:35.391369 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:35.392455 kubelet[2432]: I0208 23:21:35.391539 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-bpf-maps\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.392455 kubelet[2432]: I0208 23:21:35.392157 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-etc-cni-netd\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.392455 kubelet[2432]: I0208 23:21:35.392214 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lvn64\" (UniqueName: \"kubernetes.io/projected/5b4d5374-490b-4a7f-a817-cec3120c47cf-kube-api-access-lvn64\") pod \"5b4d5374-490b-4a7f-a817-cec3120c47cf\" (UID: \"5b4d5374-490b-4a7f-a817-cec3120c47cf\") " Feb 8 23:21:35.392455 kubelet[2432]: I0208 23:21:35.392250 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b4d5374-490b-4a7f-a817-cec3120c47cf-cilium-config-path\") pod \"5b4d5374-490b-4a7f-a817-cec3120c47cf\" (UID: \"5b4d5374-490b-4a7f-a817-cec3120c47cf\") " Feb 8 23:21:35.392455 kubelet[2432]: I0208 23:21:35.392296 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-host-proc-sys-net\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.392455 kubelet[2432]: I0208 23:21:35.392328 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-hostproc\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.392828 kubelet[2432]: I0208 23:21:35.392374 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-xtables-lock\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.392828 kubelet[2432]: I0208 
23:21:35.392413 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eabe1d2a-bed6-49db-bd12-ea72995180a2-clustermesh-secrets\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.392828 kubelet[2432]: I0208 23:21:35.392456 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-host-proc-sys-kernel\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.392828 kubelet[2432]: I0208 23:21:35.392489 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-cilium-run\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.392828 kubelet[2432]: I0208 23:21:35.392516 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-cni-path\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.392828 kubelet[2432]: I0208 23:21:35.392568 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jgbfb\" (UniqueName: \"kubernetes.io/projected/eabe1d2a-bed6-49db-bd12-ea72995180a2-kube-api-access-jgbfb\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.393184 kubelet[2432]: I0208 23:21:35.392600 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-lib-modules\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.393184 kubelet[2432]: I0208 23:21:35.392654 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eabe1d2a-bed6-49db-bd12-ea72995180a2-cilium-config-path\") pod \"eabe1d2a-bed6-49db-bd12-ea72995180a2\" (UID: \"eabe1d2a-bed6-49db-bd12-ea72995180a2\") " Feb 8 23:21:35.393184 kubelet[2432]: I0208 23:21:35.392724 2432 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-cilium-cgroup\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.393184 kubelet[2432]: W0208 23:21:35.393042 2432 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/eabe1d2a-bed6-49db-bd12-ea72995180a2/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:21:35.395966 kubelet[2432]: I0208 23:21:35.393988 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "xtables-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:35.395966 kubelet[2432]: I0208 23:21:35.394062 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:35.395966 kubelet[2432]: I0208 23:21:35.394091 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:35.395966 kubelet[2432]: W0208 23:21:35.394586 2432 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/5b4d5374-490b-4a7f-a817-cec3120c47cf/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:21:35.396665 kubelet[2432]: I0208 23:21:35.396637 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:35.397208 kubelet[2432]: I0208 23:21:35.396804 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:35.397333 kubelet[2432]: I0208 23:21:35.396826 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-cni-path" (OuterVolumeSpecName: "cni-path") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:35.397511 kubelet[2432]: I0208 23:21:35.397486 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:35.397680 kubelet[2432]: I0208 23:21:35.397659 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-hostproc" (OuterVolumeSpecName: "hostproc") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "hostproc". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:35.397864 kubelet[2432]: I0208 23:21:35.397842 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:35.399693 kubelet[2432]: I0208 23:21:35.399665 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b4d5374-490b-4a7f-a817-cec3120c47cf-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "5b4d5374-490b-4a7f-a817-cec3120c47cf" (UID: "5b4d5374-490b-4a7f-a817-cec3120c47cf"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:21:35.401716 kubelet[2432]: I0208 23:21:35.401691 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eabe1d2a-bed6-49db-bd12-ea72995180a2-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:21:35.402110 kubelet[2432]: I0208 23:21:35.402083 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eabe1d2a-bed6-49db-bd12-ea72995180a2-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:21:35.402629 kubelet[2432]: I0208 23:21:35.402605 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eabe1d2a-bed6-49db-bd12-ea72995180a2-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:21:35.404906 kubelet[2432]: I0208 23:21:35.404857 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eabe1d2a-bed6-49db-bd12-ea72995180a2-kube-api-access-jgbfb" (OuterVolumeSpecName: "kube-api-access-jgbfb") pod "eabe1d2a-bed6-49db-bd12-ea72995180a2" (UID: "eabe1d2a-bed6-49db-bd12-ea72995180a2"). InnerVolumeSpecName "kube-api-access-jgbfb". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:21:35.405383 kubelet[2432]: I0208 23:21:35.405358 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b4d5374-490b-4a7f-a817-cec3120c47cf-kube-api-access-lvn64" (OuterVolumeSpecName: "kube-api-access-lvn64") pod "5b4d5374-490b-4a7f-a817-cec3120c47cf" (UID: "5b4d5374-490b-4a7f-a817-cec3120c47cf"). InnerVolumeSpecName "kube-api-access-lvn64". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:21:35.493035 kubelet[2432]: I0208 23:21:35.492989 2432 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/eabe1d2a-bed6-49db-bd12-ea72995180a2-hubble-tls\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493035 kubelet[2432]: I0208 23:21:35.493035 2432 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-bpf-maps\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493035 kubelet[2432]: I0208 23:21:35.493057 2432 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-etc-cni-netd\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493347 kubelet[2432]: I0208 23:21:35.493076 2432 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lvn64\" (UniqueName: \"kubernetes.io/projected/5b4d5374-490b-4a7f-a817-cec3120c47cf-kube-api-access-lvn64\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493347 kubelet[2432]: I0208 23:21:35.493094 2432 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/5b4d5374-490b-4a7f-a817-cec3120c47cf-cilium-config-path\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493347 kubelet[2432]: I0208 23:21:35.493110 2432 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-host-proc-sys-net\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493347 kubelet[2432]: I0208 23:21:35.493125 2432 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-hostproc\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493347 kubelet[2432]: I0208 23:21:35.493142 2432 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-xtables-lock\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493347 kubelet[2432]: I0208 23:21:35.493157 2432 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493347 kubelet[2432]: I0208 23:21:35.493172 2432 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/eabe1d2a-bed6-49db-bd12-ea72995180a2-clustermesh-secrets\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493347 kubelet[2432]: I0208 23:21:35.493189 2432 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-cilium-run\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493627 kubelet[2432]: I0208 23:21:35.493205 2432 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-cni-path\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493627 kubelet[2432]: I0208 23:21:35.493221 2432 reconciler_common.go:300] 
"Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/eabe1d2a-bed6-49db-bd12-ea72995180a2-cilium-config-path\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493627 kubelet[2432]: I0208 23:21:35.493237 2432 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jgbfb\" (UniqueName: \"kubernetes.io/projected/eabe1d2a-bed6-49db-bd12-ea72995180a2-kube-api-access-jgbfb\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.493627 kubelet[2432]: I0208 23:21:35.493257 2432 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/eabe1d2a-bed6-49db-bd12-ea72995180a2-lib-modules\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:35.914034 kubelet[2432]: I0208 23:21:35.914003 2432 scope.go:115] "RemoveContainer" containerID="397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6" Feb 8 23:21:35.919669 systemd[1]: Removed slice kubepods-burstable-podeabe1d2a_bed6_49db_bd12_ea72995180a2.slice. Feb 8 23:21:35.919835 systemd[1]: kubepods-burstable-podeabe1d2a_bed6_49db_bd12_ea72995180a2.slice: Consumed 7.413s CPU time. Feb 8 23:21:35.923171 env[1331]: time="2024-02-08T23:21:35.923135321Z" level=info msg="RemoveContainer for \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\"" Feb 8 23:21:35.927245 systemd[1]: Removed slice kubepods-besteffort-pod5b4d5374_490b_4a7f_a817_cec3120c47cf.slice. Feb 8 23:21:35.940143 env[1331]: time="2024-02-08T23:21:35.940107355Z" level=info msg="RemoveContainer for \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\" returns successfully" Feb 8 23:21:35.940920 kubelet[2432]: I0208 23:21:35.940899 2432 scope.go:115] "RemoveContainer" containerID="43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887" Feb 8 23:21:35.942805 env[1331]: time="2024-02-08T23:21:35.942772676Z" level=info msg="RemoveContainer for \"43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887\"" Feb 8 23:21:35.955574 env[1331]: time="2024-02-08T23:21:35.955541676Z" level=info msg="RemoveContainer for \"43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887\" returns successfully" Feb 8 23:21:35.955791 kubelet[2432]: I0208 23:21:35.955762 2432 scope.go:115] "RemoveContainer" containerID="28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8" Feb 8 23:21:35.957130 env[1331]: time="2024-02-08T23:21:35.956872287Z" level=info msg="RemoveContainer for \"28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8\"" Feb 8 23:21:35.965098 env[1331]: time="2024-02-08T23:21:35.965064351Z" level=info msg="RemoveContainer for \"28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8\" returns successfully" Feb 8 23:21:35.965285 kubelet[2432]: I0208 23:21:35.965265 2432 scope.go:115] "RemoveContainer" containerID="622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723" Feb 8 23:21:35.966335 env[1331]: time="2024-02-08T23:21:35.966309461Z" level=info msg="RemoveContainer for \"622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723\"" Feb 8 23:21:35.972572 env[1331]: time="2024-02-08T23:21:35.972538710Z" level=info msg="RemoveContainer for \"622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723\" returns successfully" Feb 8 23:21:35.972719 kubelet[2432]: I0208 23:21:35.972698 2432 scope.go:115] "RemoveContainer" containerID="845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a" Feb 8 23:21:35.973605 env[1331]: 
time="2024-02-08T23:21:35.973581018Z" level=info msg="RemoveContainer for \"845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a\"" Feb 8 23:21:35.979498 env[1331]: time="2024-02-08T23:21:35.979464664Z" level=info msg="RemoveContainer for \"845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a\" returns successfully" Feb 8 23:21:35.979622 kubelet[2432]: I0208 23:21:35.979601 2432 scope.go:115] "RemoveContainer" containerID="397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6" Feb 8 23:21:35.979853 env[1331]: time="2024-02-08T23:21:35.979787967Z" level=error msg="ContainerStatus for \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\": not found" Feb 8 23:21:35.980040 kubelet[2432]: E0208 23:21:35.980013 2432 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\": not found" containerID="397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6" Feb 8 23:21:35.980120 kubelet[2432]: I0208 23:21:35.980053 2432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6} err="failed to get container status \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\": rpc error: code = NotFound desc = an error occurred when try to find container \"397e91256983e317e664b59ecdffa16736b33ac86aae726b0f51d96f3585b1c6\": not found" Feb 8 23:21:35.980120 kubelet[2432]: I0208 23:21:35.980068 2432 scope.go:115] "RemoveContainer" containerID="43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887" Feb 8 23:21:35.980351 env[1331]: time="2024-02-08T23:21:35.980302571Z" level=error msg="ContainerStatus for \"43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887\": not found" Feb 8 23:21:35.980520 kubelet[2432]: E0208 23:21:35.980490 2432 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887\": not found" containerID="43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887" Feb 8 23:21:35.980593 kubelet[2432]: I0208 23:21:35.980526 2432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887} err="failed to get container status \"43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887\": rpc error: code = NotFound desc = an error occurred when try to find container \"43a4e249561187280f6bdd8ba8534b0e27b4aa7121e9d5792cedd22c785a1887\": not found" Feb 8 23:21:35.980593 kubelet[2432]: I0208 23:21:35.980539 2432 scope.go:115] "RemoveContainer" containerID="28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8" Feb 8 23:21:35.980751 env[1331]: time="2024-02-08T23:21:35.980705774Z" level=error msg="ContainerStatus for \"28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8\" failed" error="rpc error: code = NotFound desc = an error occurred when 
try to find container \"28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8\": not found" Feb 8 23:21:35.980863 kubelet[2432]: E0208 23:21:35.980844 2432 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8\": not found" containerID="28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8" Feb 8 23:21:35.980953 kubelet[2432]: I0208 23:21:35.980877 2432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8} err="failed to get container status \"28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8\": rpc error: code = NotFound desc = an error occurred when try to find container \"28fc01c17f8a011c5e5ef0c68a2067576a03dcc8a9d2bc145dc85b7ecce04bf8\": not found" Feb 8 23:21:35.980953 kubelet[2432]: I0208 23:21:35.980890 2432 scope.go:115] "RemoveContainer" containerID="622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723" Feb 8 23:21:35.981126 env[1331]: time="2024-02-08T23:21:35.981080477Z" level=error msg="ContainerStatus for \"622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723\": not found" Feb 8 23:21:35.981251 kubelet[2432]: E0208 23:21:35.981233 2432 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723\": not found" containerID="622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723" Feb 8 23:21:35.981322 kubelet[2432]: I0208 23:21:35.981263 2432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723} err="failed to get container status \"622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723\": rpc error: code = NotFound desc = an error occurred when try to find container \"622d3ee88597da15bedd957985326e4b691616e52d13d687d60e648e478e0723\": not found" Feb 8 23:21:35.981322 kubelet[2432]: I0208 23:21:35.981289 2432 scope.go:115] "RemoveContainer" containerID="845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a" Feb 8 23:21:35.981523 env[1331]: time="2024-02-08T23:21:35.981481780Z" level=error msg="ContainerStatus for \"845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a\": not found" Feb 8 23:21:35.981648 kubelet[2432]: E0208 23:21:35.981630 2432 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a\": not found" containerID="845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a" Feb 8 23:21:35.981720 kubelet[2432]: I0208 23:21:35.981660 2432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a} err="failed to get container status 
\"845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a\": rpc error: code = NotFound desc = an error occurred when try to find container \"845cde3ead84b5c86b12a46035bec02b1a8ea54812cafdfbd03f68059e2d5b7a\": not found" Feb 8 23:21:35.981720 kubelet[2432]: I0208 23:21:35.981672 2432 scope.go:115] "RemoveContainer" containerID="16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd" Feb 8 23:21:35.982696 env[1331]: time="2024-02-08T23:21:35.982671090Z" level=info msg="RemoveContainer for \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\"" Feb 8 23:21:35.995160 env[1331]: time="2024-02-08T23:21:35.995122388Z" level=info msg="RemoveContainer for \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\" returns successfully" Feb 8 23:21:35.995367 kubelet[2432]: I0208 23:21:35.995346 2432 scope.go:115] "RemoveContainer" containerID="16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd" Feb 8 23:21:35.995585 env[1331]: time="2024-02-08T23:21:35.995534591Z" level=error msg="ContainerStatus for \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\": not found" Feb 8 23:21:35.995710 kubelet[2432]: E0208 23:21:35.995692 2432 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\": not found" containerID="16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd" Feb 8 23:21:35.995828 kubelet[2432]: I0208 23:21:35.995725 2432 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd} err="failed to get container status \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\": rpc error: code = NotFound desc = an error occurred when try to find container \"16db91f27e5d2b3af1f27f664b1e60e256ad1dab754b21546fab5f6f2d096dcd\": not found" Feb 8 23:21:36.044344 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19-rootfs.mount: Deactivated successfully. Feb 8 23:21:36.044497 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-f1bc65a9257b59e4396721a48c0935ed5267f744d246f3a219991efa1ef60a19-shm.mount: Deactivated successfully. Feb 8 23:21:36.044602 systemd[1]: var-lib-kubelet-pods-5b4d5374\x2d490b\x2d4a7f\x2da817\x2dcec3120c47cf-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlvn64.mount: Deactivated successfully. Feb 8 23:21:36.044705 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881-rootfs.mount: Deactivated successfully. Feb 8 23:21:36.044791 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-6bf1c9d40adc10bae413926d0919d3622fc9203f239123aea763d4512227a881-shm.mount: Deactivated successfully. Feb 8 23:21:36.044890 systemd[1]: var-lib-kubelet-pods-eabe1d2a\x2dbed6\x2d49db\x2dbd12\x2dea72995180a2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djgbfb.mount: Deactivated successfully. Feb 8 23:21:36.045022 systemd[1]: var-lib-kubelet-pods-eabe1d2a\x2dbed6\x2d49db\x2dbd12\x2dea72995180a2-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 8 23:21:36.045121 systemd[1]: var-lib-kubelet-pods-eabe1d2a\x2dbed6\x2d49db\x2dbd12\x2dea72995180a2-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:21:36.259445 kubelet[2432]: I0208 23:21:36.259317 2432 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=5b4d5374-490b-4a7f-a817-cec3120c47cf path="/var/lib/kubelet/pods/5b4d5374-490b-4a7f-a817-cec3120c47cf/volumes" Feb 8 23:21:36.259933 kubelet[2432]: I0208 23:21:36.259907 2432 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=eabe1d2a-bed6-49db-bd12-ea72995180a2 path="/var/lib/kubelet/pods/eabe1d2a-bed6-49db-bd12-ea72995180a2/volumes" Feb 8 23:21:37.093208 sshd[3995]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:37.096923 systemd[1]: sshd@20-10.200.8.40:22-10.200.12.6:60778.service: Deactivated successfully. Feb 8 23:21:37.098075 systemd[1]: session-23.scope: Deactivated successfully. Feb 8 23:21:37.098982 systemd-logind[1316]: Session 23 logged out. Waiting for processes to exit. Feb 8 23:21:37.100006 systemd-logind[1316]: Removed session 23. Feb 8 23:21:37.199339 systemd[1]: Started sshd@21-10.200.8.40:22-10.200.12.6:47000.service. Feb 8 23:21:37.368478 kubelet[2432]: E0208 23:21:37.368352 2432 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:21:37.709811 kubelet[2432]: I0208 23:21:37.708533 2432 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-56a09d6613" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-08 23:21:37.708454213 +0000 UTC m=+305.622931272 LastTransitionTime:2024-02-08 23:21:37.708454213 +0000 UTC m=+305.622931272 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 8 23:21:37.824854 sshd[4160]: Accepted publickey for core from 10.200.12.6 port 47000 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:37.826256 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:37.831046 systemd-logind[1316]: New session 24 of user core. Feb 8 23:21:37.831536 systemd[1]: Started session-24.scope. 
Feb 8 23:21:38.681312 kubelet[2432]: I0208 23:21:38.681272 2432 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:21:38.681869 kubelet[2432]: E0208 23:21:38.681848 2432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eabe1d2a-bed6-49db-bd12-ea72995180a2" containerName="clean-cilium-state" Feb 8 23:21:38.682000 kubelet[2432]: E0208 23:21:38.681988 2432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eabe1d2a-bed6-49db-bd12-ea72995180a2" containerName="apply-sysctl-overwrites" Feb 8 23:21:38.682098 kubelet[2432]: E0208 23:21:38.682087 2432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eabe1d2a-bed6-49db-bd12-ea72995180a2" containerName="mount-bpf-fs" Feb 8 23:21:38.682179 kubelet[2432]: E0208 23:21:38.682168 2432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="5b4d5374-490b-4a7f-a817-cec3120c47cf" containerName="cilium-operator" Feb 8 23:21:38.682260 kubelet[2432]: E0208 23:21:38.682250 2432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eabe1d2a-bed6-49db-bd12-ea72995180a2" containerName="mount-cgroup" Feb 8 23:21:38.682335 kubelet[2432]: E0208 23:21:38.682325 2432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="eabe1d2a-bed6-49db-bd12-ea72995180a2" containerName="cilium-agent" Feb 8 23:21:38.682448 kubelet[2432]: I0208 23:21:38.682434 2432 memory_manager.go:346] "RemoveStaleState removing state" podUID="eabe1d2a-bed6-49db-bd12-ea72995180a2" containerName="cilium-agent" Feb 8 23:21:38.682542 kubelet[2432]: I0208 23:21:38.682532 2432 memory_manager.go:346] "RemoveStaleState removing state" podUID="5b4d5374-490b-4a7f-a817-cec3120c47cf" containerName="cilium-operator" Feb 8 23:21:38.689325 systemd[1]: Created slice kubepods-burstable-pod80b1b5ee_eea8_4c2c_a38b_8b42cd8a9408.slice. 
Feb 8 23:21:38.711001 kubelet[2432]: I0208 23:21:38.710973 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-clustermesh-secrets\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.711289 kubelet[2432]: I0208 23:21:38.711273 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-host-proc-sys-kernel\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.711421 kubelet[2432]: I0208 23:21:38.711409 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-bpf-maps\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.711565 kubelet[2432]: I0208 23:21:38.711551 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-cgroup\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.711694 kubelet[2432]: I0208 23:21:38.711682 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-ipsec-secrets\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.711798 kubelet[2432]: I0208 23:21:38.711787 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cni-path\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.711902 kubelet[2432]: I0208 23:21:38.711890 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-hubble-tls\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.712032 kubelet[2432]: I0208 23:21:38.712019 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-config-path\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.712163 kubelet[2432]: I0208 23:21:38.712150 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-lib-modules\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.712278 kubelet[2432]: I0208 23:21:38.712264 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-xtables-lock\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.712386 kubelet[2432]: I0208 23:21:38.712375 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-etc-cni-netd\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.712494 kubelet[2432]: I0208 23:21:38.712483 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-host-proc-sys-net\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.712610 kubelet[2432]: I0208 23:21:38.712600 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-run\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.712726 kubelet[2432]: I0208 23:21:38.712715 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8xbj\" (UniqueName: \"kubernetes.io/projected/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-kube-api-access-h8xbj\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.712843 kubelet[2432]: I0208 23:21:38.712831 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-hostproc\") pod \"cilium-jf7mc\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " pod="kube-system/cilium-jf7mc" Feb 8 23:21:38.800873 sshd[4160]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:38.805794 systemd[1]: sshd@21-10.200.8.40:22-10.200.12.6:47000.service: Deactivated successfully. Feb 8 23:21:38.806789 systemd[1]: session-24.scope: Deactivated successfully. Feb 8 23:21:38.808190 systemd-logind[1316]: Session 24 logged out. Waiting for processes to exit. Feb 8 23:21:38.809242 systemd-logind[1316]: Removed session 24. Feb 8 23:21:38.907735 systemd[1]: Started sshd@22-10.200.8.40:22-10.200.12.6:47006.service. Feb 8 23:21:38.994199 env[1331]: time="2024-02-08T23:21:38.992786636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jf7mc,Uid:80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408,Namespace:kube-system,Attempt:0,}" Feb 8 23:21:39.033058 env[1331]: time="2024-02-08T23:21:39.032989349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:21:39.033058 env[1331]: time="2024-02-08T23:21:39.033028849Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:21:39.033298 env[1331]: time="2024-02-08T23:21:39.033049949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:21:39.033298 env[1331]: time="2024-02-08T23:21:39.033179250Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8 pid=4185 runtime=io.containerd.runc.v2 Feb 8 23:21:39.046503 systemd[1]: Started cri-containerd-209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8.scope. Feb 8 23:21:39.072064 env[1331]: time="2024-02-08T23:21:39.072024552Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-jf7mc,Uid:80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408,Namespace:kube-system,Attempt:0,} returns sandbox id \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\"" Feb 8 23:21:39.076282 env[1331]: time="2024-02-08T23:21:39.076244785Z" level=info msg="CreateContainer within sandbox \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:21:39.104325 env[1331]: time="2024-02-08T23:21:39.104291103Z" level=info msg="CreateContainer within sandbox \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e\"" Feb 8 23:21:39.107570 env[1331]: time="2024-02-08T23:21:39.107535428Z" level=info msg="StartContainer for \"c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e\"" Feb 8 23:21:39.129490 systemd[1]: Started cri-containerd-c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e.scope. Feb 8 23:21:39.138162 systemd[1]: cri-containerd-c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e.scope: Deactivated successfully. Feb 8 23:21:39.138487 systemd[1]: Stopped cri-containerd-c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e.scope. 
Feb 8 23:21:39.213642 env[1331]: time="2024-02-08T23:21:39.213559753Z" level=info msg="shim disconnected" id=c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e Feb 8 23:21:39.214077 env[1331]: time="2024-02-08T23:21:39.214045957Z" level=warning msg="cleaning up after shim disconnected" id=c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e namespace=k8s.io Feb 8 23:21:39.214222 env[1331]: time="2024-02-08T23:21:39.214202358Z" level=info msg="cleaning up dead shim" Feb 8 23:21:39.223923 env[1331]: time="2024-02-08T23:21:39.223867533Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:39Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4246 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:21:39Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:21:39.224311 env[1331]: time="2024-02-08T23:21:39.224161935Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Feb 8 23:21:39.224529 env[1331]: time="2024-02-08T23:21:39.224487438Z" level=error msg="Failed to pipe stdout of container \"c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e\"" error="reading from a closed fifo" Feb 8 23:21:39.224891 env[1331]: time="2024-02-08T23:21:39.224845141Z" level=error msg="Failed to pipe stderr of container \"c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e\"" error="reading from a closed fifo" Feb 8 23:21:39.237056 env[1331]: time="2024-02-08T23:21:39.236999635Z" level=error msg="StartContainer for \"c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:21:39.237263 kubelet[2432]: E0208 23:21:39.237238 2432 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e" Feb 8 23:21:39.237394 kubelet[2432]: E0208 23:21:39.237375 2432 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:21:39.237394 kubelet[2432]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:21:39.237394 kubelet[2432]: rm /hostbin/cilium-mount Feb 8 23:21:39.237540 kubelet[2432]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h8xbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-jf7mc_kube-system(80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:21:39.237540 kubelet[2432]: E0208 23:21:39.237431 2432 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jf7mc" podUID=80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408 Feb 8 23:21:39.542280 sshd[4175]: Accepted publickey for core from 10.200.12.6 port 47006 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:39.543049 sshd[4175]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:39.547897 systemd-logind[1316]: New session 25 of user core. Feb 8 23:21:39.548519 systemd[1]: Started session-25.scope. Feb 8 23:21:39.947320 env[1331]: time="2024-02-08T23:21:39.947270858Z" level=info msg="CreateContainer within sandbox \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 8 23:21:39.993882 env[1331]: time="2024-02-08T23:21:39.993834920Z" level=info msg="CreateContainer within sandbox \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998\"" Feb 8 23:21:39.994601 env[1331]: time="2024-02-08T23:21:39.994563026Z" level=info msg="StartContainer for \"0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998\"" Feb 8 23:21:40.020853 systemd[1]: Started cri-containerd-0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998.scope. 
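Note: the operative error in the RunContainerError above is "write /proc/self/attr/keycreate: invalid argument". That write is runc applying the SELinux keyring label requested by the pod spec (SELinuxOptions Type:spc_t, Level:s0, visible in the logged init-container spec), and EINVAL from the kernel most often means the host's SELinux state cannot accept that label, so the diagnosis belongs on the node rather than in the pod. A minimal check sketch using standard tools (pod and namespace names taken from the log; kubectl access and the SELinux userland utilities are assumed to be available):

  # Is SELinux usable on this node at all?
  $ getenforce                      # Enforcing / Permissive / Disabled (if the SELinux tools are installed)
  $ cat /sys/fs/selinux/enforce     # fails if selinuxfs is not mounted

  # The exact attribute runc tried to write (reads back empty when no keyring label is set)
  $ cat /proc/self/attr/keycreate

  # Confirm the label the kubelet asked for, from the API side
  $ kubectl -n kube-system get pod cilium-jf7mc \
      -o jsonpath='{.spec.initContainers[0].securityContext.seLinuxOptions}'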
Feb 8 23:21:40.031965 systemd[1]: cri-containerd-0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998.scope: Deactivated successfully. Feb 8 23:21:40.050075 env[1331]: time="2024-02-08T23:21:40.050016156Z" level=info msg="shim disconnected" id=0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998 Feb 8 23:21:40.050075 env[1331]: time="2024-02-08T23:21:40.050075257Z" level=warning msg="cleaning up after shim disconnected" id=0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998 namespace=k8s.io Feb 8 23:21:40.050349 env[1331]: time="2024-02-08T23:21:40.050087757Z" level=info msg="cleaning up dead shim" Feb 8 23:21:40.058227 env[1331]: time="2024-02-08T23:21:40.058145319Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:40Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4289 runtime=io.containerd.runc.v2\ntime=\"2024-02-08T23:21:40Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 8 23:21:40.058491 env[1331]: time="2024-02-08T23:21:40.058429422Z" level=error msg="copy shim log" error="read /proc/self/fd/40: file already closed" Feb 8 23:21:40.058752 env[1331]: time="2024-02-08T23:21:40.058693824Z" level=error msg="Failed to pipe stderr of container \"0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998\"" error="reading from a closed fifo" Feb 8 23:21:40.058902 env[1331]: time="2024-02-08T23:21:40.058723824Z" level=error msg="Failed to pipe stdout of container \"0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998\"" error="reading from a closed fifo" Feb 8 23:21:40.062548 env[1331]: time="2024-02-08T23:21:40.062506553Z" level=error msg="StartContainer for \"0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 8 23:21:40.062762 kubelet[2432]: E0208 23:21:40.062739 2432 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998" Feb 8 23:21:40.063125 kubelet[2432]: E0208 23:21:40.062867 2432 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 8 23:21:40.063125 kubelet[2432]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 8 23:21:40.063125 kubelet[2432]: rm /hostbin/cilium-mount Feb 8 23:21:40.063125 kubelet[2432]: 
],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-h8xbj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-jf7mc_kube-system(80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 8 23:21:40.063125 kubelet[2432]: E0208 23:21:40.062915 2432 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-jf7mc" podUID=80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408 Feb 8 23:21:40.064870 sshd[4175]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:40.068984 systemd[1]: sshd@22-10.200.8.40:22-10.200.12.6:47006.service: Deactivated successfully. Feb 8 23:21:40.070102 systemd[1]: session-25.scope: Deactivated successfully. Feb 8 23:21:40.070146 systemd-logind[1316]: Session 25 logged out. Waiting for processes to exit. Feb 8 23:21:40.071228 systemd-logind[1316]: Removed session 25. Feb 8 23:21:40.169051 systemd[1]: Started sshd@23-10.200.8.40:22-10.200.12.6:47010.service. Feb 8 23:21:40.790635 sshd[4303]: Accepted publickey for core from 10.200.12.6 port 47010 ssh2: RSA SHA256:psGCIvVnZRuLQEqgvEvjwWELTdsMBZYKF2FBCpe1wIc Feb 8 23:21:40.792294 sshd[4303]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:21:40.798302 systemd-logind[1316]: New session 26 of user core. Feb 8 23:21:40.798794 systemd[1]: Started session-26.scope. Feb 8 23:21:40.819734 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998-rootfs.mount: Deactivated successfully. 
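Note: the retried container (Attempt:1) fails with byte-for-byte the same error, which rules out a transient race and points at persistent node-level configuration; consistent with that, the sandbox is torn down below rather than retried again. The retry history is also visible from the API side, sketched here with standard kubectl flags (names from the log):

  $ kubectl -n kube-system describe pod cilium-jf7mc      # Events section lists each RunContainerError
  $ kubectl -n kube-system get events \
      --field-selector involvedObject.name=cilium-jf7mc --sort-by=.lastTimestamp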
Feb 8 23:21:40.940169 kubelet[2432]: I0208 23:21:40.940132 2432 scope.go:115] "RemoveContainer" containerID="c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e" Feb 8 23:21:40.940798 env[1331]: time="2024-02-08T23:21:40.940748063Z" level=info msg="StopPodSandbox for \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\"" Feb 8 23:21:40.940985 env[1331]: time="2024-02-08T23:21:40.940831564Z" level=info msg="Container to stop \"0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:21:40.940985 env[1331]: time="2024-02-08T23:21:40.940855764Z" level=info msg="Container to stop \"c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 8 23:21:40.943205 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8-shm.mount: Deactivated successfully. Feb 8 23:21:40.949198 env[1331]: time="2024-02-08T23:21:40.949159528Z" level=info msg="RemoveContainer for \"c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e\"" Feb 8 23:21:40.958894 systemd[1]: cri-containerd-209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8.scope: Deactivated successfully. Feb 8 23:21:40.966507 env[1331]: time="2024-02-08T23:21:40.966467062Z" level=info msg="RemoveContainer for \"c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e\" returns successfully" Feb 8 23:21:40.995644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8-rootfs.mount: Deactivated successfully. Feb 8 23:21:41.010346 env[1331]: time="2024-02-08T23:21:41.010297402Z" level=info msg="shim disconnected" id=209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8 Feb 8 23:21:41.010821 env[1331]: time="2024-02-08T23:21:41.010794406Z" level=warning msg="cleaning up after shim disconnected" id=209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8 namespace=k8s.io Feb 8 23:21:41.010962 env[1331]: time="2024-02-08T23:21:41.010931807Z" level=info msg="cleaning up dead shim" Feb 8 23:21:41.019115 env[1331]: time="2024-02-08T23:21:41.019082770Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:41Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4325 runtime=io.containerd.runc.v2\n" Feb 8 23:21:41.019389 env[1331]: time="2024-02-08T23:21:41.019360272Z" level=info msg="TearDown network for sandbox \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\" successfully" Feb 8 23:21:41.019466 env[1331]: time="2024-02-08T23:21:41.019391172Z" level=info msg="StopPodSandbox for \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\" returns successfully" Feb 8 23:21:41.127764 kubelet[2432]: I0208 23:21:41.127281 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-bpf-maps\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.127764 kubelet[2432]: I0208 23:21:41.127353 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-xtables-lock\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") 
" Feb 8 23:21:41.127764 kubelet[2432]: I0208 23:21:41.127402 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-clustermesh-secrets\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.127764 kubelet[2432]: I0208 23:21:41.127455 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-host-proc-sys-kernel\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.127764 kubelet[2432]: I0208 23:21:41.127494 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-config-path\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.127764 kubelet[2432]: I0208 23:21:41.127544 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-lib-modules\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.127764 kubelet[2432]: I0208 23:21:41.127576 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-etc-cni-netd\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.127764 kubelet[2432]: I0208 23:21:41.127628 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-hostproc\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.127764 kubelet[2432]: I0208 23:21:41.127715 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h8xbj\" (UniqueName: \"kubernetes.io/projected/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-kube-api-access-h8xbj\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.129800 kubelet[2432]: I0208 23:21:41.128800 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-cgroup\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.129800 kubelet[2432]: I0208 23:21:41.129003 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-host-proc-sys-net\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.129800 kubelet[2432]: I0208 23:21:41.129179 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-hubble-tls\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.129800 kubelet[2432]: 
I0208 23:21:41.129325 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:41.129800 kubelet[2432]: I0208 23:21:41.129366 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:41.130627 kubelet[2432]: I0208 23:21:41.130264 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-run\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.130627 kubelet[2432]: I0208 23:21:41.130346 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cni-path\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.130627 kubelet[2432]: I0208 23:21:41.130402 2432 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-ipsec-secrets\") pod \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\" (UID: \"80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408\") " Feb 8 23:21:41.130627 kubelet[2432]: I0208 23:21:41.130465 2432 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-bpf-maps\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.130627 kubelet[2432]: I0208 23:21:41.130511 2432 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-xtables-lock\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.131433 kubelet[2432]: I0208 23:21:41.131298 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:41.131674 kubelet[2432]: I0208 23:21:41.131536 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:41.131674 kubelet[2432]: I0208 23:21:41.131596 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-hostproc" (OuterVolumeSpecName: "hostproc") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:41.132238 kubelet[2432]: I0208 23:21:41.132195 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:41.133070 kubelet[2432]: I0208 23:21:41.132384 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:41.133246 kubelet[2432]: I0208 23:21:41.132710 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:41.133345 kubelet[2432]: I0208 23:21:41.132733 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cni-path" (OuterVolumeSpecName: "cni-path") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:41.133454 kubelet[2432]: W0208 23:21:41.132907 2432 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 8 23:21:41.135711 kubelet[2432]: I0208 23:21:41.132991 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:21:41.139036 kubelet[2432]: I0208 23:21:41.138843 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "cilium-config-path". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:21:41.143152 systemd[1]: var-lib-kubelet-pods-80b1b5ee\x2deea8\x2d4c2c\x2da38b\x2d8b42cd8a9408-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:21:41.144587 kubelet[2432]: I0208 23:21:41.144563 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:21:41.148958 systemd[1]: var-lib-kubelet-pods-80b1b5ee\x2deea8\x2d4c2c\x2da38b\x2d8b42cd8a9408-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 8 23:21:41.150262 kubelet[2432]: I0208 23:21:41.150237 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:21:41.154507 kubelet[2432]: I0208 23:21:41.154473 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-kube-api-access-h8xbj" (OuterVolumeSpecName: "kube-api-access-h8xbj") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "kube-api-access-h8xbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:21:41.154880 kubelet[2432]: I0208 23:21:41.154857 2432 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" (UID: "80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:21:41.231378 kubelet[2432]: I0208 23:21:41.231343 2432 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-cgroup\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.231378 kubelet[2432]: I0208 23:21:41.231377 2432 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-host-proc-sys-net\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.231378 kubelet[2432]: I0208 23:21:41.231391 2432 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-hubble-tls\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.231625 kubelet[2432]: I0208 23:21:41.231408 2432 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-run\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.231625 kubelet[2432]: I0208 23:21:41.231421 2432 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.231625 kubelet[2432]: I0208 23:21:41.231435 2432 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cni-path\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.231625 kubelet[2432]: I0208 23:21:41.231448 2432 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-clustermesh-secrets\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.231625 kubelet[2432]: I0208 23:21:41.231460 2432 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-etc-cni-netd\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.231625 kubelet[2432]: I0208 23:21:41.231473 2432 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.231625 kubelet[2432]: I0208 23:21:41.231486 2432 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-cilium-config-path\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.231625 kubelet[2432]: I0208 23:21:41.231499 2432 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-lib-modules\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.231625 kubelet[2432]: I0208 23:21:41.231511 2432 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-hostproc\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.231625 kubelet[2432]: I0208 23:21:41.231524 2432 
reconciler_common.go:300] "Volume detached for volume \"kube-api-access-h8xbj\" (UniqueName: \"kubernetes.io/projected/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408-kube-api-access-h8xbj\") on node \"ci-3510.3.2-a-56a09d6613\" DevicePath \"\"" Feb 8 23:21:41.820403 systemd[1]: var-lib-kubelet-pods-80b1b5ee\x2deea8\x2d4c2c\x2da38b\x2d8b42cd8a9408-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh8xbj.mount: Deactivated successfully. Feb 8 23:21:41.820553 systemd[1]: var-lib-kubelet-pods-80b1b5ee\x2deea8\x2d4c2c\x2da38b\x2d8b42cd8a9408-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 8 23:21:41.943796 kubelet[2432]: I0208 23:21:41.943761 2432 scope.go:115] "RemoveContainer" containerID="0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998" Feb 8 23:21:41.945639 env[1331]: time="2024-02-08T23:21:41.945215231Z" level=info msg="RemoveContainer for \"0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998\"" Feb 8 23:21:41.949716 systemd[1]: Removed slice kubepods-burstable-pod80b1b5ee_eea8_4c2c_a38b_8b42cd8a9408.slice. Feb 8 23:21:41.956110 env[1331]: time="2024-02-08T23:21:41.956024714Z" level=info msg="RemoveContainer for \"0c608d5801101c2decf35822de51bc2afbdf069a02e2478a11a760041dfcf998\" returns successfully" Feb 8 23:21:41.989972 kubelet[2432]: I0208 23:21:41.989926 2432 topology_manager.go:212] "Topology Admit Handler" Feb 8 23:21:41.990150 kubelet[2432]: E0208 23:21:41.990028 2432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" containerName="mount-cgroup" Feb 8 23:21:41.990150 kubelet[2432]: E0208 23:21:41.990044 2432 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" containerName="mount-cgroup" Feb 8 23:21:41.990150 kubelet[2432]: I0208 23:21:41.990070 2432 memory_manager.go:346] "RemoveStaleState removing state" podUID="80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" containerName="mount-cgroup" Feb 8 23:21:41.990150 kubelet[2432]: I0208 23:21:41.990078 2432 memory_manager.go:346] "RemoveStaleState removing state" podUID="80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408" containerName="mount-cgroup" Feb 8 23:21:41.996275 systemd[1]: Created slice kubepods-burstable-podd111cdcb_1d43_4d57_898a_db05d511b0b9.slice. 
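Note: the \x2d and \x7e sequences in the .mount unit names above are systemd's path escaping of the kubelet's per-pod volume directories ("-" and "~" are not legal characters in unit names). The mapping can be reproduced in either direction with systemd-escape; a sketch using the removed pod's UID from the log:

  # Path -> unit name, as systemd would log it
  $ systemd-escape --path --suffix=mount \
      /var/lib/kubelet/pods/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408/volumes/kubernetes.io~projected/hubble-tls

  # Unit name -> path, for reading cleanup logs like the ones above
  $ systemd-escape --unescape --path \
      'var-lib-kubelet-pods-80b1b5ee\x2deea8\x2d4c2c\x2da38b\x2d8b42cd8a9408-volumes-kubernetes.io\x7eprojected-hubble\x2dtls'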
Feb 8 23:21:42.036073 kubelet[2432]: I0208 23:21:42.036040 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d111cdcb-1d43-4d57-898a-db05d511b0b9-host-proc-sys-kernel\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036246 kubelet[2432]: I0208 23:21:42.036081 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d111cdcb-1d43-4d57-898a-db05d511b0b9-etc-cni-netd\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036246 kubelet[2432]: I0208 23:21:42.036108 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d111cdcb-1d43-4d57-898a-db05d511b0b9-clustermesh-secrets\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036246 kubelet[2432]: I0208 23:21:42.036131 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d111cdcb-1d43-4d57-898a-db05d511b0b9-cilium-cgroup\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036246 kubelet[2432]: I0208 23:21:42.036155 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d111cdcb-1d43-4d57-898a-db05d511b0b9-xtables-lock\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036246 kubelet[2432]: I0208 23:21:42.036178 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d111cdcb-1d43-4d57-898a-db05d511b0b9-cni-path\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036246 kubelet[2432]: I0208 23:21:42.036203 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/d111cdcb-1d43-4d57-898a-db05d511b0b9-cilium-ipsec-secrets\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036246 kubelet[2432]: I0208 23:21:42.036228 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d111cdcb-1d43-4d57-898a-db05d511b0b9-hubble-tls\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036541 kubelet[2432]: I0208 23:21:42.036258 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p25sn\" (UniqueName: \"kubernetes.io/projected/d111cdcb-1d43-4d57-898a-db05d511b0b9-kube-api-access-p25sn\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036541 kubelet[2432]: I0208 23:21:42.036285 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d111cdcb-1d43-4d57-898a-db05d511b0b9-cilium-config-path\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036541 kubelet[2432]: I0208 23:21:42.036313 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d111cdcb-1d43-4d57-898a-db05d511b0b9-host-proc-sys-net\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036541 kubelet[2432]: I0208 23:21:42.036342 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d111cdcb-1d43-4d57-898a-db05d511b0b9-lib-modules\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036541 kubelet[2432]: I0208 23:21:42.036371 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d111cdcb-1d43-4d57-898a-db05d511b0b9-bpf-maps\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036541 kubelet[2432]: I0208 23:21:42.036398 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d111cdcb-1d43-4d57-898a-db05d511b0b9-hostproc\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.036541 kubelet[2432]: I0208 23:21:42.036426 2432 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d111cdcb-1d43-4d57-898a-db05d511b0b9-cilium-run\") pod \"cilium-bq92n\" (UID: \"d111cdcb-1d43-4d57-898a-db05d511b0b9\") " pod="kube-system/cilium-bq92n" Feb 8 23:21:42.259131 kubelet[2432]: I0208 23:21:42.259092 2432 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408 path="/var/lib/kubelet/pods/80b1b5ee-eea8-4c2c-a38b-8b42cd8a9408/volumes" Feb 8 23:21:42.300776 env[1331]: time="2024-02-08T23:21:42.300723073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bq92n,Uid:d111cdcb-1d43-4d57-898a-db05d511b0b9,Namespace:kube-system,Attempt:0,}" Feb 8 23:21:42.318918 kubelet[2432]: W0208 23:21:42.318871 2432 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod80b1b5ee_eea8_4c2c_a38b_8b42cd8a9408.slice/cri-containerd-c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e.scope WatchSource:0}: container "c08369e9eaaa8e3a9e2c3dc5bd87edb0b9a1af65e13969e2fa6a41f864ae065e" in namespace "k8s.io": not found Feb 8 23:21:42.336843 env[1331]: time="2024-02-08T23:21:42.336764251Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:21:42.336843 env[1331]: time="2024-02-08T23:21:42.336813852Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:21:42.337108 env[1331]: time="2024-02-08T23:21:42.336828752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:21:42.337338 env[1331]: time="2024-02-08T23:21:42.337279455Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9 pid=4358 runtime=io.containerd.runc.v2 Feb 8 23:21:42.350591 systemd[1]: Started cri-containerd-aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9.scope. Feb 8 23:21:42.369979 kubelet[2432]: E0208 23:21:42.369934 2432 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:21:42.375436 env[1331]: time="2024-02-08T23:21:42.375394049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bq92n,Uid:d111cdcb-1d43-4d57-898a-db05d511b0b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9\"" Feb 8 23:21:42.378723 env[1331]: time="2024-02-08T23:21:42.378342372Z" level=info msg="CreateContainer within sandbox \"aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:21:42.419391 env[1331]: time="2024-02-08T23:21:42.419343788Z" level=info msg="CreateContainer within sandbox \"aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"11e868989640d3429cc814e0d9c13b01a83b104b894cf94f6107852fb37cb4de\"" Feb 8 23:21:42.422131 env[1331]: time="2024-02-08T23:21:42.420096894Z" level=info msg="StartContainer for \"11e868989640d3429cc814e0d9c13b01a83b104b894cf94f6107852fb37cb4de\"" Feb 8 23:21:42.439750 systemd[1]: Started cri-containerd-11e868989640d3429cc814e0d9c13b01a83b104b894cf94f6107852fb37cb4de.scope. Feb 8 23:21:42.480893 systemd[1]: cri-containerd-11e868989640d3429cc814e0d9c13b01a83b104b894cf94f6107852fb37cb4de.scope: Deactivated successfully. 
Feb 8 23:21:42.481714 env[1331]: time="2024-02-08T23:21:42.481680869Z" level=info msg="StartContainer for \"11e868989640d3429cc814e0d9c13b01a83b104b894cf94f6107852fb37cb4de\" returns successfully" Feb 8 23:21:42.540903 env[1331]: time="2024-02-08T23:21:42.540760024Z" level=info msg="shim disconnected" id=11e868989640d3429cc814e0d9c13b01a83b104b894cf94f6107852fb37cb4de Feb 8 23:21:42.540903 env[1331]: time="2024-02-08T23:21:42.540819024Z" level=warning msg="cleaning up after shim disconnected" id=11e868989640d3429cc814e0d9c13b01a83b104b894cf94f6107852fb37cb4de namespace=k8s.io Feb 8 23:21:42.540903 env[1331]: time="2024-02-08T23:21:42.540831525Z" level=info msg="cleaning up dead shim" Feb 8 23:21:42.549172 env[1331]: time="2024-02-08T23:21:42.549130489Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:42Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4441 runtime=io.containerd.runc.v2\n" Feb 8 23:21:42.950657 env[1331]: time="2024-02-08T23:21:42.950609484Z" level=info msg="CreateContainer within sandbox \"aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:21:42.983765 env[1331]: time="2024-02-08T23:21:42.983718739Z" level=info msg="CreateContainer within sandbox \"aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"650a58a17968b4a2f3d3a2b6f7eca0495e2de652329dd1d8c9822fdc96aa8263\"" Feb 8 23:21:42.984498 env[1331]: time="2024-02-08T23:21:42.984462845Z" level=info msg="StartContainer for \"650a58a17968b4a2f3d3a2b6f7eca0495e2de652329dd1d8c9822fdc96aa8263\"" Feb 8 23:21:43.008689 systemd[1]: Started cri-containerd-650a58a17968b4a2f3d3a2b6f7eca0495e2de652329dd1d8c9822fdc96aa8263.scope. Feb 8 23:21:43.052015 systemd[1]: cri-containerd-650a58a17968b4a2f3d3a2b6f7eca0495e2de652329dd1d8c9822fdc96aa8263.scope: Deactivated successfully. Feb 8 23:21:43.063518 env[1331]: time="2024-02-08T23:21:43.063477053Z" level=info msg="StartContainer for \"650a58a17968b4a2f3d3a2b6f7eca0495e2de652329dd1d8c9822fdc96aa8263\" returns successfully" Feb 8 23:21:43.093138 env[1331]: time="2024-02-08T23:21:43.093084381Z" level=info msg="shim disconnected" id=650a58a17968b4a2f3d3a2b6f7eca0495e2de652329dd1d8c9822fdc96aa8263 Feb 8 23:21:43.093138 env[1331]: time="2024-02-08T23:21:43.093136581Z" level=warning msg="cleaning up after shim disconnected" id=650a58a17968b4a2f3d3a2b6f7eca0495e2de652329dd1d8c9822fdc96aa8263 namespace=k8s.io Feb 8 23:21:43.093434 env[1331]: time="2024-02-08T23:21:43.093148881Z" level=info msg="cleaning up dead shim" Feb 8 23:21:43.101240 env[1331]: time="2024-02-08T23:21:43.101203743Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:43Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4506 runtime=io.containerd.runc.v2\n" Feb 8 23:21:43.820832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-650a58a17968b4a2f3d3a2b6f7eca0495e2de652329dd1d8c9822fdc96aa8263-rootfs.mount: Deactivated successfully. 
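Note: the replacement pod cilium-bq92n is now walking Cilium's init-container chain; mount-cgroup and apply-sysctl-overwrites have run above, and mount-bpf-fs and clean-cilium-state follow below. Each scope being "Deactivated successfully" moments after StartContainer returns is these short-lived init containers exiting normally, unlike the earlier failures. The chain and its progress can be read from the API; a sketch (pod name from the log, kubectl access assumed):

  $ kubectl -n kube-system get pod cilium-bq92n \
      -o jsonpath='{range .spec.initContainers[*]}{.name}{"\n"}{end}'
  $ kubectl -n kube-system get pod cilium-bq92n \
      -o jsonpath='{range .status.initContainerStatuses[*]}{.name}{"\t"}{.state}{"\n"}{end}'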
Feb 8 23:21:43.955017 env[1331]: time="2024-02-08T23:21:43.954969308Z" level=info msg="CreateContainer within sandbox \"aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:21:43.989742 env[1331]: time="2024-02-08T23:21:43.989692375Z" level=info msg="CreateContainer within sandbox \"aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cad385948078b4b49cc91a324814abfb8cadfda20fd73bcb314b267fe336e2de\"" Feb 8 23:21:43.990366 env[1331]: time="2024-02-08T23:21:43.990323980Z" level=info msg="StartContainer for \"cad385948078b4b49cc91a324814abfb8cadfda20fd73bcb314b267fe336e2de\"" Feb 8 23:21:44.015535 systemd[1]: Started cri-containerd-cad385948078b4b49cc91a324814abfb8cadfda20fd73bcb314b267fe336e2de.scope. Feb 8 23:21:44.050799 systemd[1]: cri-containerd-cad385948078b4b49cc91a324814abfb8cadfda20fd73bcb314b267fe336e2de.scope: Deactivated successfully. Feb 8 23:21:44.052490 env[1331]: time="2024-02-08T23:21:44.052452156Z" level=info msg="StartContainer for \"cad385948078b4b49cc91a324814abfb8cadfda20fd73bcb314b267fe336e2de\" returns successfully" Feb 8 23:21:44.081208 env[1331]: time="2024-02-08T23:21:44.081085176Z" level=info msg="shim disconnected" id=cad385948078b4b49cc91a324814abfb8cadfda20fd73bcb314b267fe336e2de Feb 8 23:21:44.081208 env[1331]: time="2024-02-08T23:21:44.081136176Z" level=warning msg="cleaning up after shim disconnected" id=cad385948078b4b49cc91a324814abfb8cadfda20fd73bcb314b267fe336e2de namespace=k8s.io Feb 8 23:21:44.081208 env[1331]: time="2024-02-08T23:21:44.081147977Z" level=info msg="cleaning up dead shim" Feb 8 23:21:44.089217 env[1331]: time="2024-02-08T23:21:44.089181438Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:44Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4564 runtime=io.containerd.runc.v2\n" Feb 8 23:21:44.821022 systemd[1]: run-containerd-runc-k8s.io-cad385948078b4b49cc91a324814abfb8cadfda20fd73bcb314b267fe336e2de-runc.7mKDgy.mount: Deactivated successfully. Feb 8 23:21:44.821537 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cad385948078b4b49cc91a324814abfb8cadfda20fd73bcb314b267fe336e2de-rootfs.mount: Deactivated successfully. Feb 8 23:21:44.959774 env[1331]: time="2024-02-08T23:21:44.959721614Z" level=info msg="CreateContainer within sandbox \"aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:21:44.992695 env[1331]: time="2024-02-08T23:21:44.992650066Z" level=info msg="CreateContainer within sandbox \"aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"744af6817c0e0a03e9c0733bd14ff1557e5cb433049bacf82a07b843754a15c1\"" Feb 8 23:21:44.993170 env[1331]: time="2024-02-08T23:21:44.993133570Z" level=info msg="StartContainer for \"744af6817c0e0a03e9c0733bd14ff1557e5cb433049bacf82a07b843754a15c1\"" Feb 8 23:21:45.017585 systemd[1]: Started cri-containerd-744af6817c0e0a03e9c0733bd14ff1557e5cb433049bacf82a07b843754a15c1.scope. Feb 8 23:21:45.042640 systemd[1]: cri-containerd-744af6817c0e0a03e9c0733bd14ff1557e5cb433049bacf82a07b843754a15c1.scope: Deactivated successfully. 
Feb 8 23:21:45.044420 env[1331]: time="2024-02-08T23:21:45.044294761Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd111cdcb_1d43_4d57_898a_db05d511b0b9.slice/cri-containerd-744af6817c0e0a03e9c0733bd14ff1557e5cb433049bacf82a07b843754a15c1.scope/memory.events\": no such file or directory" Feb 8 23:21:45.048960 env[1331]: time="2024-02-08T23:21:45.048903797Z" level=info msg="StartContainer for \"744af6817c0e0a03e9c0733bd14ff1557e5cb433049bacf82a07b843754a15c1\" returns successfully" Feb 8 23:21:45.086353 env[1331]: time="2024-02-08T23:21:45.086226682Z" level=info msg="shim disconnected" id=744af6817c0e0a03e9c0733bd14ff1557e5cb433049bacf82a07b843754a15c1 Feb 8 23:21:45.086353 env[1331]: time="2024-02-08T23:21:45.086275582Z" level=warning msg="cleaning up after shim disconnected" id=744af6817c0e0a03e9c0733bd14ff1557e5cb433049bacf82a07b843754a15c1 namespace=k8s.io Feb 8 23:21:45.086353 env[1331]: time="2024-02-08T23:21:45.086288182Z" level=info msg="cleaning up dead shim" Feb 8 23:21:45.095795 env[1331]: time="2024-02-08T23:21:45.095750955Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:21:45Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4621 runtime=io.containerd.runc.v2\n" Feb 8 23:21:45.821178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-744af6817c0e0a03e9c0733bd14ff1557e5cb433049bacf82a07b843754a15c1-rootfs.mount: Deactivated successfully. Feb 8 23:21:45.964981 env[1331]: time="2024-02-08T23:21:45.964394698Z" level=info msg="CreateContainer within sandbox \"aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:21:46.010460 env[1331]: time="2024-02-08T23:21:46.010407050Z" level=info msg="CreateContainer within sandbox \"aa2663fedd6cab9321c6908b84883a98074e8cc6a63e0ab82e5b34d7e2fd5ba9\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"b3de765875146cad21c49aec5b125a1ab3b412cdd0f7dc3ae441ff1587db5ccc\"" Feb 8 23:21:46.011174 env[1331]: time="2024-02-08T23:21:46.011138955Z" level=info msg="StartContainer for \"b3de765875146cad21c49aec5b125a1ab3b412cdd0f7dc3ae441ff1587db5ccc\"" Feb 8 23:21:46.034419 systemd[1]: Started cri-containerd-b3de765875146cad21c49aec5b125a1ab3b412cdd0f7dc3ae441ff1587db5ccc.scope. Feb 8 23:21:46.072917 env[1331]: time="2024-02-08T23:21:46.072815926Z" level=info msg="StartContainer for \"b3de765875146cad21c49aec5b125a1ab3b412cdd0f7dc3ae441ff1587db5ccc\" returns successfully" Feb 8 23:21:46.434984 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 8 23:21:46.821347 systemd[1]: run-containerd-runc-k8s.io-b3de765875146cad21c49aec5b125a1ab3b412cdd0f7dc3ae441ff1587db5ccc-runc.FcvlUM.mount: Deactivated successfully. 
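Note: the kernel line "alg: No test for seqiv(rfc4106(gcm(aes)))" appears as the agent comes up and is informational: the crypto self-test framework simply has no test vector for that composite AEAD, which is plausibly being registered for IPsec given the cilium-ipsec-secrets volume mounted earlier. Whether the algorithm registered can be checked from /proc; a sketch:

  # Lists the rfc4106 GCM implementations the kernel registered (typically aesni-backed on x86)
  $ grep -B2 -A8 'rfc4106' /proc/crypto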
Feb 8 23:21:46.986787 kubelet[2432]: I0208 23:21:46.986738 2432 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bq92n" podStartSLOduration=5.986695796 podCreationTimestamp="2024-02-08 23:21:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:21:46.986322793 +0000 UTC m=+314.900799952" watchObservedRunningTime="2024-02-08 23:21:46.986695796 +0000 UTC m=+314.901172955" Feb 8 23:21:47.327203 systemd[1]: run-containerd-runc-k8s.io-b3de765875146cad21c49aec5b125a1ab3b412cdd0f7dc3ae441ff1587db5ccc-runc.Anw9gu.mount: Deactivated successfully. Feb 8 23:21:49.078692 systemd-networkd[1484]: lxc_health: Link UP Feb 8 23:21:49.088561 systemd-networkd[1484]: lxc_health: Gained carrier Feb 8 23:21:49.089313 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:21:49.513071 systemd[1]: run-containerd-runc-k8s.io-b3de765875146cad21c49aec5b125a1ab3b412cdd0f7dc3ae441ff1587db5ccc-runc.jIGSqh.mount: Deactivated successfully. Feb 8 23:21:50.984085 systemd-networkd[1484]: lxc_health: Gained IPv6LL Feb 8 23:21:51.785862 systemd[1]: run-containerd-runc-k8s.io-b3de765875146cad21c49aec5b125a1ab3b412cdd0f7dc3ae441ff1587db5ccc-runc.SNUIeg.mount: Deactivated successfully. Feb 8 23:21:53.920375 systemd[1]: run-containerd-runc-k8s.io-b3de765875146cad21c49aec5b125a1ab3b412cdd0f7dc3ae441ff1587db5ccc-runc.GvduEF.mount: Deactivated successfully. Feb 8 23:21:56.058578 systemd[1]: run-containerd-runc-k8s.io-b3de765875146cad21c49aec5b125a1ab3b412cdd0f7dc3ae441ff1587db5ccc-runc.kO5M0t.mount: Deactivated successfully. Feb 8 23:21:56.364309 sshd[4303]: pam_unix(sshd:session): session closed for user core Feb 8 23:21:56.367460 systemd[1]: sshd@23-10.200.8.40:22-10.200.12.6:47010.service: Deactivated successfully. Feb 8 23:21:56.368650 systemd[1]: session-26.scope: Deactivated successfully. Feb 8 23:21:56.368669 systemd-logind[1316]: Session 26 logged out. Waiting for processes to exit. Feb 8 23:21:56.370029 systemd-logind[1316]: Removed session 26. Feb 8 23:22:12.065709 systemd[1]: cri-containerd-df915fe4393f6d72c61ff0bb7f56b5982c64f2eb7ae7489da69f197bbf242315.scope: Deactivated successfully. Feb 8 23:22:12.066068 systemd[1]: cri-containerd-df915fe4393f6d72c61ff0bb7f56b5982c64f2eb7ae7489da69f197bbf242315.scope: Consumed 3.704s CPU time. Feb 8 23:22:12.088932 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df915fe4393f6d72c61ff0bb7f56b5982c64f2eb7ae7489da69f197bbf242315-rootfs.mount: Deactivated successfully. 
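Note: the failure mode changes here: shortly after the SSH session closes, the kube-controller-manager container (df915fe4...) exits after consuming 3.704s of CPU and the kubelet restarts it in its existing sandbox as Attempt:1; kube-scheduler follows the same pattern just below. On the node this history is visible through the CRI; a sketch with crictl (container name from the log; the ID placeholder is whatever the listing prints):

  $ crictl ps -a --name kube-controller-manager    # shows the exited attempt next to the restarted one
  $ crictl logs --tail=50 <container-id>           # <container-id> taken from the listing above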
Feb 8 23:22:12.120198 env[1331]: time="2024-02-08T23:22:12.120145708Z" level=info msg="shim disconnected" id=df915fe4393f6d72c61ff0bb7f56b5982c64f2eb7ae7489da69f197bbf242315 Feb 8 23:22:12.120198 env[1331]: time="2024-02-08T23:22:12.120198208Z" level=warning msg="cleaning up after shim disconnected" id=df915fe4393f6d72c61ff0bb7f56b5982c64f2eb7ae7489da69f197bbf242315 namespace=k8s.io Feb 8 23:22:12.120198 env[1331]: time="2024-02-08T23:22:12.120211808Z" level=info msg="cleaning up dead shim" Feb 8 23:22:12.128084 env[1331]: time="2024-02-08T23:22:12.128048465Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:22:12Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5330 runtime=io.containerd.runc.v2\n" Feb 8 23:22:13.021724 kubelet[2432]: I0208 23:22:13.021693 2432 scope.go:115] "RemoveContainer" containerID="df915fe4393f6d72c61ff0bb7f56b5982c64f2eb7ae7489da69f197bbf242315" Feb 8 23:22:13.024514 env[1331]: time="2024-02-08T23:22:13.024358917Z" level=info msg="CreateContainer within sandbox \"276643fc95342ad8d73a2c1b42a564d5bcf6a0575af86ae0beba7481115685c8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Feb 8 23:22:13.050094 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642531246.mount: Deactivated successfully. Feb 8 23:22:13.068557 env[1331]: time="2024-02-08T23:22:13.068506134Z" level=info msg="CreateContainer within sandbox \"276643fc95342ad8d73a2c1b42a564d5bcf6a0575af86ae0beba7481115685c8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"57cbcd9d5257d0bd084838797e41ab834ae3ca0d01bec441f5cbe5f26059e9e9\"" Feb 8 23:22:13.069120 env[1331]: time="2024-02-08T23:22:13.069085138Z" level=info msg="StartContainer for \"57cbcd9d5257d0bd084838797e41ab834ae3ca0d01bec441f5cbe5f26059e9e9\"" Feb 8 23:22:13.090280 systemd[1]: Started cri-containerd-57cbcd9d5257d0bd084838797e41ab834ae3ca0d01bec441f5cbe5f26059e9e9.scope. Feb 8 23:22:13.153115 env[1331]: time="2024-02-08T23:22:13.153066341Z" level=info msg="StartContainer for \"57cbcd9d5257d0bd084838797e41ab834ae3ca0d01bec441f5cbe5f26059e9e9\" returns successfully" Feb 8 23:22:15.154658 systemd[1]: cri-containerd-5de0e30fbef09a2d50d81750124373a34c11e7d2aaad605914896814ae0e6143.scope: Deactivated successfully. Feb 8 23:22:15.155004 systemd[1]: cri-containerd-5de0e30fbef09a2d50d81750124373a34c11e7d2aaad605914896814ae0e6143.scope: Consumed 1.720s CPU time. Feb 8 23:22:15.175372 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5de0e30fbef09a2d50d81750124373a34c11e7d2aaad605914896814ae0e6143-rootfs.mount: Deactivated successfully. 
Feb 8 23:22:15.193260 env[1331]: time="2024-02-08T23:22:15.193212082Z" level=info msg="shim disconnected" id=5de0e30fbef09a2d50d81750124373a34c11e7d2aaad605914896814ae0e6143 Feb 8 23:22:15.193718 env[1331]: time="2024-02-08T23:22:15.193265083Z" level=warning msg="cleaning up after shim disconnected" id=5de0e30fbef09a2d50d81750124373a34c11e7d2aaad605914896814ae0e6143 namespace=k8s.io Feb 8 23:22:15.193718 env[1331]: time="2024-02-08T23:22:15.193277483Z" level=info msg="cleaning up dead shim" Feb 8 23:22:15.201305 env[1331]: time="2024-02-08T23:22:15.201252740Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:22:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=5393 runtime=io.containerd.runc.v2\n" Feb 8 23:22:15.627577 kubelet[2432]: E0208 23:22:15.627277 2432 controller.go:193] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.40:37884->10.200.8.21:2379: read: connection timed out" Feb 8 23:22:16.030308 kubelet[2432]: I0208 23:22:16.030055 2432 scope.go:115] "RemoveContainer" containerID="5de0e30fbef09a2d50d81750124373a34c11e7d2aaad605914896814ae0e6143" Feb 8 23:22:16.032104 env[1331]: time="2024-02-08T23:22:16.032061788Z" level=info msg="CreateContainer within sandbox \"84dc087f40384e5d9c8adddf09bd3a4411d46f56b532aab7e3e640b8958d18c6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Feb 8 23:22:16.056315 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2863575400.mount: Deactivated successfully. Feb 8 23:22:16.073786 env[1331]: time="2024-02-08T23:22:16.073738885Z" level=info msg="CreateContainer within sandbox \"84dc087f40384e5d9c8adddf09bd3a4411d46f56b532aab7e3e640b8958d18c6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9b1ac008f6b75fb9539348e021658940467d7a791be67a028996bd680f083c6c\"" Feb 8 23:22:16.074439 env[1331]: time="2024-02-08T23:22:16.074406090Z" level=info msg="StartContainer for \"9b1ac008f6b75fb9539348e021658940467d7a791be67a028996bd680f083c6c\"" Feb 8 23:22:16.093038 systemd[1]: Started cri-containerd-9b1ac008f6b75fb9539348e021658940467d7a791be67a028996bd680f083c6c.scope. 
Feb 8 23:22:16.139463 env[1331]: time="2024-02-08T23:22:16.139418655Z" level=info msg="StartContainer for \"9b1ac008f6b75fb9539348e021658940467d7a791be67a028996bd680f083c6c\" returns successfully"
Feb 8 23:22:20.683018 kubelet[2432]: I0208 23:22:20.682980 2432 status_manager.go:809] "Failed to get status for pod" podUID=6ea884b496b0fce0f9bc0e39fa218239 pod="kube-system/kube-apiserver-ci-3510.3.2-a-56a09d6613" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.40:37798->10.200.8.21:2379: read: connection timed out"
Feb 8 23:22:22.090663 kubelet[2432]: E0208 23:22:22.090537 2432 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"kube-apiserver-ci-3510.3.2-a-56a09d6613.17b206a7462eb02d", GenerateName:"", Namespace:"kube-system", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-apiserver-ci-3510.3.2-a-56a09d6613", UID:"6ea884b496b0fce0f9bc0e39fa218239", APIVersion:"v1", ResourceVersion:"", FieldPath:"spec.containers{kube-apiserver}"}, Reason:"Unhealthy", Message:"Readiness probe failed: HTTP probe failed with statuscode: 500", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-56a09d6613"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 22, 4, 233609261, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 22, 4, 233609261, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.40:37698->10.200.8.21:2379: read: connection timed out' (will not retry!)
Feb 8 23:22:25.627835 kubelet[2432]: E0208 23:22:25.627774 2432 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.40:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-56a09d6613?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 8 23:22:32.229489 env[1331]: time="2024-02-08T23:22:32.229431510Z" level=info msg="StopPodSandbox for \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\""
Feb 8 23:22:32.229870 env[1331]: time="2024-02-08T23:22:32.229543311Z" level=info msg="TearDown network for sandbox \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\" successfully"
Feb 8 23:22:32.229870 env[1331]: time="2024-02-08T23:22:32.229586411Z" level=info msg="StopPodSandbox for \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\" returns successfully"
Feb 8 23:22:32.230407 env[1331]: time="2024-02-08T23:22:32.230373415Z" level=info msg="RemovePodSandbox for \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\""
Feb 8 23:22:32.230552 env[1331]: time="2024-02-08T23:22:32.230406215Z" level=info msg="Forcibly stopping sandbox \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\""
Feb 8 23:22:32.230552 env[1331]: time="2024-02-08T23:22:32.230493716Z" level=info msg="TearDown network for sandbox \"209506a2d430ad41a2ae7f7ae514230cf4510109427f914cbee5756267405cc8\" successfully"
Feb 8 23:22:32.242270 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
[This hv_storvsc write error (cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001) repeats continuously, cycling through tags #153-#157, from Feb 8 23:22:32.242 through Feb 8 23:22:34.570; the several hundred identical entries are omitted here.]
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.333210 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.341981 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.342179 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.351164 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.363874 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.364081 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.373423 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.373616 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.383350 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.401041 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.401278 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.401405 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.411246 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.411439 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.422235 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.422436 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.427048 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.436994 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.437190 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.451296 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.451524 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.456454 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.465863 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.466079 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.475906 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.476117 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.485402 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.485603 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.494964 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.500734 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.500936 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.510604 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.510800 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.520026 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.520228 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.530087 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.530309 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.539837 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.540041 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.549501 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.549697 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.560440 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.560639 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.570350 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.576274 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.576468 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.586064 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.586262 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.595480 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.601663 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.601845 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.609931 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 8 23:22:32.610131 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.614635 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.624671 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.624873 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.629303 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.634169 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.638777 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.649125 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.649309 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.658838 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.659045 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.669792 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.693425 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.693579 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.693713 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.693864 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.694025 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.694158 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.698305 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.702916 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.707724 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.716996 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.727028 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.727231 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.727378 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.736502 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.736699 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.746284 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.746502 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.755462 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.755690 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.764828 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.788750 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.788891 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.789055 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.789193 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.789328 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.789468 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.794895 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.799448 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.804380 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.813921 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.823899 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.824121 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.824275 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.833083 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.833290 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.847439 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.847650 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.847791 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.856877 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.857104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.866491 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.866703 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.876052 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.876266 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.885342 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.886048 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.895904 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.896118 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.905261 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.905470 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.914837 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.915054 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.920814 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.925495 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.934840 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.935136 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.944360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.944544 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.954157 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.954368 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.963929 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.964140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.968651 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.973451 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:32.983061 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.006863 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.007029 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.007181 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.007315 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.007431 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.007562 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.016169 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.016399 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.021308 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.031612 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.037086 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.037305 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.041995 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.046881 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.051522 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.061347 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.061571 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.066078 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.072597 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.080494 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.080740 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.089986 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.090195 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.099625 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.099837 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.109863 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.110101 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.119165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 8 23:22:33.119382 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.128671 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.157143 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.157295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.157429 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.157563 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.157695 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.157828 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.166281 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.166536 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.175958 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.176196 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.191295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.191559 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.191702 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.200777 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.201085 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.210682 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.210957 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.219938 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.220164 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.229297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.229532 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.239002 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.239240 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.248555 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.248776 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.259359 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.259597 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.270336 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.270585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.280018 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.280281 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.291935 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.292188 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.303380 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.303658 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.320317 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.320610 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.320760 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.330235 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.330492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.340269 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.340511 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.349713 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.349927 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.358893 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.359129 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.368449 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.368664 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.378049 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.378254 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.392165 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.392393 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.392534 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.402999 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.403226 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.413466 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.413685 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.422760 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.422969 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.432195 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.457146 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.457318 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.457455 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.457599 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.457735 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.457864 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.465522 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.465727 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.475009 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.475220 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.490113 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.490332 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.490477 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.499482 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.499696 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.509092 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.509301 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.518475 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.518677 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.529292 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.529741 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.539200 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.539414 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.543720 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.548365 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.558422 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.558614 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.567920 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.568132 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.577407 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.601322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.601483 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.601618 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.601825 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.601977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.602119 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.606085 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.615819 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.616043 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.625008 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.635060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.635272 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.635411 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 8 23:22:33.639759 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.644373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.662828 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.663074 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.663216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.672325 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.672538 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.686732 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.686978 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.687126 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.696091 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.696315 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.705989 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.706202 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.715535 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.715764 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.725235 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.749902 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.750079 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.750218 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.750375 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.750526 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.750697 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.754133 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.763690 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.763912 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.774506 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.775060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.784857 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.785071 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.794933 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.795172 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.804905 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.805134 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.814520 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.814732 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.824346 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.824589 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.829209 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.838648 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.838850 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.848115 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.853443 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.853691 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.862849 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.863067 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.867472 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.877050 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.877254 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.886719 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.886925 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.896933 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.897193 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.912583 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.912802 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.922092 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.922291 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.937409 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.937613 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.937760 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.946815 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.947033 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.956614 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.956828 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.966067 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.966282 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.975544 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.975818 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.980442 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.990185 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.990396 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:33.999734 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.005266 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.005462 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.014764 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.015002 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.025950 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.050191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.050343 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.050481 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.050615 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.050741 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.050872 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.059999 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.060245 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.070113 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.070361 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.080879 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.081136 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.090794 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.091036 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.100755 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.125013 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.125158 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.125295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.125421 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.125555 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.125750 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.129572 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.140814 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.141057 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.150509 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.155900 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.156122 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:22:34.160687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
Feb 8 23:22:34.858401 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#153 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 8 23:22:34.870065 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#156 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 8 23:22:34.870284 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#155 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 8 23:22:34.879710 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#154 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 8 23:22:34.879952 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
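The burst above runs uninterrupted from Feb 8 23:22:34.050481 through Feb 8 23:22:34.879952, cycling queue tags #153 through #157 with an identical status on every entry. Decoding the fields: cmd 0x2a is the SCSI WRITE(10) opcode, scsi 0x2 is SCSI status CHECK CONDITION, srb 0x4 is likely SRB_STATUS_ERROR, and hv 0xc0000001 is the NT status STATUS_UNSUCCESSFUL; in other words, the Hyper-V storage backend is failing the guest's writes to this virtual disk. Below is a minimal, hypothetical Python sketch for condensing such a burst when triaging a saved copy of this console log; the boot.log path, the summarize helper, and the regex field names are illustrative assumptions, not part of the log or the storvsc driver.

# Hypothetical triage helper, not part of the boot log above: summarizes
# repeated hv_storvsc error entries by status signature and queue tag.
import re
from collections import Counter

# Matches entries such as:
# Feb 8 23:22:34.879952 kernel: hv_storvsc f8b3781a-...: tag#157 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
PATTERN = re.compile(
    r"(?P<ts>\w+ +\d+ [\d:.]+) kernel: hv_storvsc (?P<dev>[0-9a-f-]+): "
    r"tag#(?P<tag>\d+) cmd (?P<cmd>0x[0-9a-f]+) status: "
    r"scsi (?P<scsi>0x[0-9a-f]+) srb (?P<srb>0x[0-9a-f]+) hv (?P<hv>0x[0-9a-f]+)"
)

def summarize(log_text: str) -> None:
    """Count error entries per (cmd, scsi, srb, hv) signature and per tag."""
    by_status = Counter()   # how many entries share each status signature
    by_tag = Counter()      # how many entries each queue tag produced
    first_ts = last_ts = None
    for m in PATTERN.finditer(log_text):
        by_status[(m["cmd"], m["scsi"], m["srb"], m["hv"])] += 1
        by_tag[m["tag"]] += 1
        first_ts = first_ts or m["ts"]  # remember the first and last hit
        last_ts = m["ts"]
    for (cmd, scsi, srb, hv), n in by_status.most_common():
        print(f"cmd={cmd} scsi={scsi} srb={srb} hv={hv}: {n} entries")
    print(f"entries per tag: {dict(by_tag)}")
    print(f"first: {first_ts}  last: {last_ts}")

if __name__ == "__main__":
    # boot.log is an assumed path to a saved copy of this serial console log.
    with open("boot.log") as f:
        summarize(f.read())

On an excerpt like the one above this would report a single status signature spread roughly evenly across tags #153 through #157, which points at the backing storage or its connection rather than at any one outstanding request.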