Feb 9 19:00:20.040583 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 19:00:20.040606 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:00:20.040617 kernel: BIOS-provided physical RAM map: Feb 9 19:00:20.040622 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 9 19:00:20.040628 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 9 19:00:20.040636 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 9 19:00:20.040646 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Feb 9 19:00:20.040654 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 9 19:00:20.040660 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 9 19:00:20.040667 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 9 19:00:20.040674 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 9 19:00:20.040681 kernel: printk: bootconsole [earlyser0] enabled Feb 9 19:00:20.040689 kernel: NX (Execute Disable) protection: active Feb 9 19:00:20.040694 kernel: efi: EFI v2.70 by Microsoft Feb 9 19:00:20.040706 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018 Feb 9 19:00:20.040713 kernel: random: crng init done Feb 9 19:00:20.040722 kernel: SMBIOS 3.1.0 present. 
Feb 9 19:00:20.040728 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023 Feb 9 19:00:20.040734 kernel: Hypervisor detected: Microsoft Hyper-V Feb 9 19:00:20.040744 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 9 19:00:20.040750 kernel: Hyper-V Host Build:20348-10.0-1-0.1544 Feb 9 19:00:20.040759 kernel: Hyper-V: Nested features: 0x1e0101 Feb 9 19:00:20.040767 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 9 19:00:20.040773 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 9 19:00:20.040779 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 9 19:00:20.040788 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 9 19:00:20.040795 kernel: tsc: Detected 2593.905 MHz processor Feb 9 19:00:20.040804 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 19:00:20.040812 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 19:00:20.040818 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 9 19:00:20.040827 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 19:00:20.040835 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 9 19:00:20.040845 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 9 19:00:20.040851 kernel: Using GB pages for direct mapping Feb 9 19:00:20.040860 kernel: Secure boot disabled Feb 9 19:00:20.040867 kernel: ACPI: Early table checksum verification disabled Feb 9 19:00:20.040875 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 9 19:00:20.040882 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:20.040889 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:20.040897 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Feb 9 19:00:20.040910 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 9 19:00:20.040919 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:20.040926 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:20.040933 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:20.040943 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:20.040952 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:20.040961 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:20.040968 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:00:20.040978 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 9 19:00:20.040986 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 9 19:00:20.040995 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 9 19:00:20.041001 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 9 19:00:20.041011 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Feb 9 19:00:20.041018 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 9 19:00:20.041029 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 9 19:00:20.041036 kernel: ACPI: 
Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 9 19:00:20.041046 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 9 19:00:20.041053 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 9 19:00:20.041063 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 9 19:00:20.041069 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 9 19:00:20.041078 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Feb 9 19:00:20.041086 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 9 19:00:20.041096 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 9 19:00:20.041104 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 9 19:00:20.041113 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 9 19:00:20.041121 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 9 19:00:20.041131 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 9 19:00:20.041138 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 9 19:00:20.041146 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 9 19:00:20.041155 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 9 19:00:20.041164 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 9 19:00:20.041171 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 9 19:00:20.041181 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 9 19:00:20.041190 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 9 19:00:20.041198 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 9 19:00:20.041206 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 9 19:00:20.041213 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 9 19:00:20.041223 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 9 19:00:20.041230 kernel: Zone ranges: Feb 9 19:00:20.041239 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 19:00:20.041246 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 9 19:00:20.041256 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Feb 9 19:00:20.041265 kernel: Movable zone start for each node Feb 9 19:00:20.041275 kernel: Early memory node ranges Feb 9 19:00:20.041282 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 9 19:00:20.041290 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 9 19:00:20.041298 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 9 19:00:20.041308 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 9 19:00:20.041315 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 9 19:00:20.041323 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:00:20.041334 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 9 19:00:20.041343 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 9 19:00:20.041351 kernel: ACPI: PM-Timer IO Port: 0x408 Feb 9 19:00:20.041358 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 9 19:00:20.041367 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 9 19:00:20.041376 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 
9 19:00:20.041384 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 19:00:20.041391 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 9 19:00:20.041401 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 9 19:00:20.041414 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 9 19:00:20.041421 kernel: Booting paravirtualized kernel on Hyper-V Feb 9 19:00:20.041428 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 19:00:20.041437 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 9 19:00:20.041445 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 9 19:00:20.041467 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 9 19:00:20.041475 kernel: pcpu-alloc: [0] 0 1 Feb 9 19:00:20.041484 kernel: Hyper-V: PV spinlocks enabled Feb 9 19:00:20.041490 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 9 19:00:20.041500 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 9 19:00:20.041507 kernel: Policy zone: Normal Feb 9 19:00:20.041514 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:00:20.041521 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 19:00:20.041528 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 9 19:00:20.041536 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 19:00:20.041543 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:00:20.041550 kernel: Memory: 8081144K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306056K reserved, 0K cma-reserved) Feb 9 19:00:20.041561 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 19:00:20.041569 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 19:00:20.041584 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 19:00:20.041595 kernel: rcu: Hierarchical RCU implementation. Feb 9 19:00:20.041605 kernel: rcu: RCU event tracing is enabled. Feb 9 19:00:20.041612 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 19:00:20.041622 kernel: Rude variant of Tasks RCU enabled. Feb 9 19:00:20.041631 kernel: Tracing variant of Tasks RCU enabled. Feb 9 19:00:20.041640 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
Feb 9 19:00:20.041647 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 19:00:20.041657 kernel: Using NULL legacy PIC Feb 9 19:00:20.041671 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 9 19:00:20.041678 kernel: Console: colour dummy device 80x25 Feb 9 19:00:20.041687 kernel: printk: console [tty1] enabled Feb 9 19:00:20.041696 kernel: printk: console [ttyS0] enabled Feb 9 19:00:20.041706 kernel: printk: bootconsole [earlyser0] disabled Feb 9 19:00:20.041715 kernel: ACPI: Core revision 20210730 Feb 9 19:00:20.041725 kernel: Failed to register legacy timer interrupt Feb 9 19:00:20.041733 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 19:00:20.041743 kernel: Hyper-V: Using IPI hypercalls Feb 9 19:00:20.041750 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Feb 9 19:00:20.041759 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 9 19:00:20.041768 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 9 19:00:20.041778 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 19:00:20.041787 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 19:00:20.041794 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 19:00:20.041805 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 19:00:20.041815 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Feb 9 19:00:20.041823 kernel: RETBleed: Vulnerable Feb 9 19:00:20.041830 kernel: Speculative Store Bypass: Vulnerable Feb 9 19:00:20.041840 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 19:00:20.041849 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 19:00:20.041858 kernel: GDS: Unknown: Dependent on hypervisor status Feb 9 19:00:20.041865 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 19:00:20.041875 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 19:00:20.041884 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 19:00:20.041894 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 9 19:00:20.041902 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 9 19:00:20.041912 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 9 19:00:20.041921 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 19:00:20.041929 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 9 19:00:20.041936 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 9 19:00:20.041946 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 9 19:00:20.041955 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 9 19:00:20.041964 kernel: Freeing SMP alternatives memory: 32K Feb 9 19:00:20.041971 kernel: pid_max: default: 32768 minimum: 301 Feb 9 19:00:20.041981 kernel: LSM: Security Framework initializing Feb 9 19:00:20.041988 kernel: SELinux: Initializing. 
Feb 9 19:00:20.042000 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:00:20.042007 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:00:20.042014 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 9 19:00:20.042023 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 9 19:00:20.042031 kernel: signal: max sigframe size: 3632 Feb 9 19:00:20.042041 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:00:20.042050 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 9 19:00:20.042058 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:00:20.042068 kernel: x86: Booting SMP configuration: Feb 9 19:00:20.042075 kernel: .... node #0, CPUs: #1 Feb 9 19:00:20.042087 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 9 19:00:20.042095 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Feb 9 19:00:20.042105 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 19:00:20.042114 kernel: smpboot: Max logical packages: 1 Feb 9 19:00:20.042123 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 9 19:00:20.042130 kernel: devtmpfs: initialized Feb 9 19:00:20.042140 kernel: x86/mm: Memory block size: 128MB Feb 9 19:00:20.042149 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 9 19:00:20.042160 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:00:20.042168 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 19:00:20.042178 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:00:20.042188 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:00:20.042195 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:00:20.042204 kernel: audit: type=2000 audit(1707505219.023:1): state=initialized audit_enabled=0 res=1 Feb 9 19:00:20.042213 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:00:20.042223 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:00:20.042230 kernel: cpuidle: using governor menu Feb 9 19:00:20.042242 kernel: ACPI: bus type PCI registered Feb 9 19:00:20.042250 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:00:20.042260 kernel: dca service started, version 1.12.1 Feb 9 19:00:20.042267 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 19:00:20.042278 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 19:00:20.042285 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:00:20.042295 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:00:20.042304 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:00:20.042313 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:00:20.042323 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:00:20.042333 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:00:20.042340 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:00:20.042350 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:00:20.042358 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 19:00:20.042368 kernel: ACPI: Interpreter enabled Feb 9 19:00:20.042375 kernel: ACPI: PM: (supports S0 S5) Feb 9 19:00:20.042386 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:00:20.042394 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:00:20.042405 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 9 19:00:20.042412 kernel: iommu: Default domain type: Translated Feb 9 19:00:20.042422 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 19:00:20.042430 kernel: vgaarb: loaded Feb 9 19:00:20.042440 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:00:20.042448 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:00:20.049188 kernel: PTP clock support registered Feb 9 19:00:20.049201 kernel: Registered efivars operations Feb 9 19:00:20.049212 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:00:20.049224 kernel: PCI: System does not support PCI Feb 9 19:00:20.049239 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 9 19:00:20.049250 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:00:20.049261 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:00:20.049273 kernel: pnp: PnP ACPI init Feb 9 19:00:20.049285 kernel: pnp: PnP ACPI: found 3 devices Feb 9 19:00:20.049298 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:00:20.049311 kernel: NET: Registered PF_INET protocol family Feb 9 19:00:20.049323 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 9 19:00:20.049337 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 9 19:00:20.049349 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:00:20.049362 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 19:00:20.049374 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 19:00:20.049386 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 9 19:00:20.049398 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 9 19:00:20.049410 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 9 19:00:20.049421 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:00:20.049433 kernel: NET: Registered PF_XDP protocol family Feb 9 19:00:20.049447 kernel: PCI: CLS 0 bytes, default 64 Feb 9 19:00:20.049485 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 19:00:20.049497 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB) Feb 9 19:00:20.049509 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed 
counters, 10737418240 ms ovfl timer Feb 9 19:00:20.049521 kernel: Initialise system trusted keyrings Feb 9 19:00:20.049534 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 9 19:00:20.049548 kernel: Key type asymmetric registered Feb 9 19:00:20.049560 kernel: Asymmetric key parser 'x509' registered Feb 9 19:00:20.049573 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:00:20.049590 kernel: io scheduler mq-deadline registered Feb 9 19:00:20.049603 kernel: io scheduler kyber registered Feb 9 19:00:20.049617 kernel: io scheduler bfq registered Feb 9 19:00:20.049630 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:00:20.049644 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:00:20.049658 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:00:20.049671 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 19:00:20.049684 kernel: i8042: PNP: No PS/2 controller found. Feb 9 19:00:20.049847 kernel: rtc_cmos 00:02: registered as rtc0 Feb 9 19:00:20.049964 kernel: rtc_cmos 00:02: setting system clock to 2024-02-09T19:00:19 UTC (1707505219) Feb 9 19:00:20.050072 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 9 19:00:20.050089 kernel: fail to initialize ptp_kvm Feb 9 19:00:20.050104 kernel: intel_pstate: CPU model not supported Feb 9 19:00:20.050117 kernel: efifb: probing for efifb Feb 9 19:00:20.050131 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 9 19:00:20.050144 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 9 19:00:20.050157 kernel: efifb: scrolling: redraw Feb 9 19:00:20.050173 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 9 19:00:20.050187 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 19:00:20.050200 kernel: fb0: EFI VGA frame buffer device Feb 9 19:00:20.050214 kernel: pstore: Registered efi as persistent store backend Feb 9 19:00:20.050227 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:00:20.050241 kernel: Segment Routing with IPv6 Feb 9 19:00:20.050255 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:00:20.050268 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:00:20.050282 kernel: Key type dns_resolver registered Feb 9 19:00:20.050297 kernel: IPI shorthand broadcast: enabled Feb 9 19:00:20.050311 kernel: sched_clock: Marking stable (718372700, 19523300)->(912801800, -174905800) Feb 9 19:00:20.050324 kernel: registered taskstats version 1 Feb 9 19:00:20.050338 kernel: Loading compiled-in X.509 certificates Feb 9 19:00:20.050351 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:00:20.050365 kernel: Key type .fscrypt registered Feb 9 19:00:20.050377 kernel: Key type fscrypt-provisioning registered Feb 9 19:00:20.050390 kernel: pstore: Using crash dump compression: deflate Feb 9 19:00:20.050407 kernel: ima: No TPM chip found, activating TPM-bypass! 
Feb 9 19:00:20.050420 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:00:20.050434 kernel: ima: No architecture policies found Feb 9 19:00:20.050447 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:00:20.050488 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:00:20.050499 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:00:20.050509 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:00:20.050519 kernel: Run /init as init process Feb 9 19:00:20.050527 kernel: with arguments: Feb 9 19:00:20.050537 kernel: /init Feb 9 19:00:20.050551 kernel: with environment: Feb 9 19:00:20.050558 kernel: HOME=/ Feb 9 19:00:20.050568 kernel: TERM=linux Feb 9 19:00:20.050576 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:00:20.050589 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:00:20.050602 systemd[1]: Detected virtualization microsoft. Feb 9 19:00:20.050611 systemd[1]: Detected architecture x86-64. Feb 9 19:00:20.050622 systemd[1]: Running in initrd. Feb 9 19:00:20.050633 systemd[1]: No hostname configured, using default hostname. Feb 9 19:00:20.050641 systemd[1]: Hostname set to . Feb 9 19:00:20.050649 systemd[1]: Initializing machine ID from random generator. Feb 9 19:00:20.050660 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:00:20.050669 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:00:20.050678 systemd[1]: Reached target cryptsetup.target. Feb 9 19:00:20.050686 systemd[1]: Reached target paths.target. Feb 9 19:00:20.050696 systemd[1]: Reached target slices.target. Feb 9 19:00:20.050708 systemd[1]: Reached target swap.target. Feb 9 19:00:20.050716 systemd[1]: Reached target timers.target. Feb 9 19:00:20.050726 systemd[1]: Listening on iscsid.socket. Feb 9 19:00:20.050735 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:00:20.050745 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:00:20.050753 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:00:20.050763 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:00:20.050775 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:00:20.050785 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:00:20.050796 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:00:20.050805 systemd[1]: Reached target sockets.target. Feb 9 19:00:20.050814 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:00:20.050822 systemd[1]: Finished network-cleanup.service. Feb 9 19:00:20.050832 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:00:20.050843 systemd[1]: Starting systemd-journald.service... Feb 9 19:00:20.050850 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:00:20.050863 systemd[1]: Starting systemd-resolved.service... Feb 9 19:00:20.050872 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:00:20.050882 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:00:20.050889 kernel: audit: type=1130 audit(1707505220.038:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:20.050900 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:00:20.050913 systemd-journald[183]: Journal started Feb 9 19:00:20.050965 systemd-journald[183]: Runtime Journal (/run/log/journal/90b8b68b13b54241ba5a48a48bd02ebf) is 8.0M, max 159.0M, 151.0M free. Feb 9 19:00:20.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.019824 systemd-modules-load[184]: Inserted module 'overlay' Feb 9 19:00:20.064000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.078431 systemd[1]: Started systemd-journald.service. Feb 9 19:00:20.078471 kernel: audit: type=1130 audit(1707505220.064:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.095780 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:00:20.090796 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:00:20.094178 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:00:20.138859 kernel: Bridge firewalling registered Feb 9 19:00:20.138889 kernel: audit: type=1130 audit(1707505220.090:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.138906 kernel: audit: type=1130 audit(1707505220.092:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.100541 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:00:20.113370 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 9 19:00:20.146155 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:00:20.150775 systemd-resolved[185]: Positive Trust Anchors: Feb 9 19:00:20.150787 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:00:20.150828 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:00:20.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Feb 9 19:00:20.170420 systemd-resolved[185]: Defaulting to hostname 'linux'. Feb 9 19:00:20.185340 kernel: SCSI subsystem initialized Feb 9 19:00:20.185363 kernel: audit: type=1130 audit(1707505220.145:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.172096 systemd[1]: Started systemd-resolved.service. Feb 9 19:00:20.184671 systemd[1]: Reached target nss-lookup.target. Feb 9 19:00:20.183000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.204306 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:00:20.212043 kernel: audit: type=1130 audit(1707505220.183:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.208510 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:00:20.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.222204 dracut-cmdline[201]: dracut-dracut-053 Feb 9 19:00:20.232368 kernel: audit: type=1130 audit(1707505220.206:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.232395 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:00:20.263368 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:00:20.263422 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:00:20.269480 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:00:20.272602 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 9 19:00:20.274367 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:00:20.292765 kernel: audit: type=1130 audit(1707505220.279:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.291262 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:00:20.303000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.301612 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:00:20.316223 kernel: audit: type=1130 audit(1707505220.303:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.316258 kernel: Loading iSCSI transport class v2.0-870. Feb 9 19:00:20.332474 kernel: iscsi: registered transport (tcp) Feb 9 19:00:20.358909 kernel: iscsi: registered transport (qla4xxx) Feb 9 19:00:20.358979 kernel: QLogic iSCSI HBA Driver Feb 9 19:00:20.387906 systemd[1]: Finished dracut-cmdline.service. Feb 9 19:00:20.390000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.391230 systemd[1]: Starting dracut-pre-udev.service... Feb 9 19:00:20.443481 kernel: raid6: avx512x4 gen() 18462 MB/s Feb 9 19:00:20.463471 kernel: raid6: avx512x4 xor() 8748 MB/s Feb 9 19:00:20.483467 kernel: raid6: avx512x2 gen() 18518 MB/s Feb 9 19:00:20.503472 kernel: raid6: avx512x2 xor() 29668 MB/s Feb 9 19:00:20.522481 kernel: raid6: avx512x1 gen() 18607 MB/s Feb 9 19:00:20.541486 kernel: raid6: avx512x1 xor() 26692 MB/s Feb 9 19:00:20.561466 kernel: raid6: avx2x4 gen() 18522 MB/s Feb 9 19:00:20.581467 kernel: raid6: avx2x4 xor() 7959 MB/s Feb 9 19:00:20.601464 kernel: raid6: avx2x2 gen() 18619 MB/s Feb 9 19:00:20.621466 kernel: raid6: avx2x2 xor() 22162 MB/s Feb 9 19:00:20.641477 kernel: raid6: avx2x1 gen() 13993 MB/s Feb 9 19:00:20.661468 kernel: raid6: avx2x1 xor() 19328 MB/s Feb 9 19:00:20.683476 kernel: raid6: sse2x4 gen() 11613 MB/s Feb 9 19:00:20.703466 kernel: raid6: sse2x4 xor() 7105 MB/s Feb 9 19:00:20.722464 kernel: raid6: sse2x2 gen() 12912 MB/s Feb 9 19:00:20.743465 kernel: raid6: sse2x2 xor() 7463 MB/s Feb 9 19:00:20.763463 kernel: raid6: sse2x1 gen() 11535 MB/s Feb 9 19:00:20.786295 kernel: raid6: sse2x1 xor() 5903 MB/s Feb 9 19:00:20.786323 kernel: raid6: using algorithm avx2x2 gen() 18619 MB/s Feb 9 19:00:20.786335 kernel: raid6: .... xor() 22162 MB/s, rmw enabled Feb 9 19:00:20.789241 kernel: raid6: using avx512x2 recovery algorithm Feb 9 19:00:20.808479 kernel: xor: automatically using best checksumming function avx Feb 9 19:00:20.904479 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 9 19:00:20.912483 systemd[1]: Finished dracut-pre-udev.service. Feb 9 19:00:20.915000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.916000 audit: BPF prog-id=7 op=LOAD Feb 9 19:00:20.916000 audit: BPF prog-id=8 op=LOAD Feb 9 19:00:20.917081 systemd[1]: Starting systemd-udevd.service... Feb 9 19:00:20.931089 systemd-udevd[383]: Using default interface naming scheme 'v252'. Feb 9 19:00:20.935719 systemd[1]: Started systemd-udevd.service. Feb 9 19:00:20.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.943619 systemd[1]: Starting dracut-pre-trigger.service... Feb 9 19:00:20.959443 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Feb 9 19:00:20.989720 systemd[1]: Finished dracut-pre-trigger.service. 
Feb 9 19:00:20.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:20.993076 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:00:21.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:21.029847 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:00:21.078477 kernel: cryptd: max_cpu_qlen set to 1000 Feb 9 19:00:21.102017 kernel: AVX2 version of gcm_enc/dec engaged. Feb 9 19:00:21.102069 kernel: AES CTR mode by8 optimization enabled Feb 9 19:00:21.105434 kernel: hv_vmbus: Vmbus version:5.2 Feb 9 19:00:21.113473 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 9 19:00:21.126716 kernel: hv_vmbus: registering driver hv_netvsc Feb 9 19:00:21.141477 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 9 19:00:21.153475 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 9 19:00:21.161470 kernel: hv_vmbus: registering driver hv_storvsc Feb 9 19:00:21.169472 kernel: hv_vmbus: registering driver hid_hyperv Feb 9 19:00:21.175480 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 9 19:00:21.181482 kernel: scsi host1: storvsc_host_t Feb 9 19:00:21.181675 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 9 19:00:21.188916 kernel: scsi host0: storvsc_host_t Feb 9 19:00:21.193829 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 9 19:00:21.199309 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 9 19:00:21.225661 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 9 19:00:21.225881 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 9 19:00:21.226482 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 9 19:00:21.226687 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 9 19:00:21.230684 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 9 19:00:21.244134 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 9 19:00:21.244328 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 9 19:00:21.244472 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 9 19:00:21.253470 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:21.259257 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 9 19:00:21.325400 kernel: hv_netvsc 002248a0-c225-0022-48a0-c225002248a0 eth0: VF slot 1 added Feb 9 19:00:21.334827 kernel: hv_vmbus: registering driver hv_pci Feb 9 19:00:21.342350 kernel: hv_pci 95a1f193-01cf-4bc7-982b-17b0e3acb3f8: PCI VMBus probing: Using version 0x10004 Feb 9 19:00:21.342562 kernel: hv_pci 95a1f193-01cf-4bc7-982b-17b0e3acb3f8: PCI host bridge to bus 01cf:00 Feb 9 19:00:21.350375 kernel: pci_bus 01cf:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 9 19:00:21.350576 kernel: pci_bus 01cf:00: No busn resource found for root bus, will use [bus 00-ff] Feb 9 19:00:21.363706 kernel: pci 01cf:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 9 19:00:21.373751 kernel: pci 01cf:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 9 19:00:21.389578 kernel: pci 01cf:00:02.0: enabling Extended Tags Feb 9 
19:00:21.407371 kernel: pci 01cf:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 01cf:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 9 19:00:21.407642 kernel: pci_bus 01cf:00: busn_res: [bus 00-ff] end is updated to 00 Feb 9 19:00:21.407770 kernel: pci 01cf:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 9 19:00:21.503482 kernel: mlx5_core 01cf:00:02.0: firmware version: 14.30.1350 Feb 9 19:00:21.660702 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 9 19:00:21.680518 kernel: mlx5_core 01cf:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 9 19:00:21.701515 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (441) Feb 9 19:00:21.716482 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:00:21.853496 kernel: mlx5_core 01cf:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 9 19:00:21.853738 kernel: mlx5_core 01cf:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing Feb 9 19:00:21.865703 kernel: hv_netvsc 002248a0-c225-0022-48a0-c225002248a0 eth0: VF registering: eth1 Feb 9 19:00:21.865901 kernel: mlx5_core 01cf:00:02.0 eth1: joined to eth0 Feb 9 19:00:21.878478 kernel: mlx5_core 01cf:00:02.0 enP463s1: renamed from eth1 Feb 9 19:00:21.903696 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 9 19:00:21.910171 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 9 19:00:21.915724 systemd[1]: Starting disk-uuid.service... Feb 9 19:00:21.959588 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 9 19:00:22.934481 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 9 19:00:22.935096 disk-uuid[560]: The operation has completed successfully. Feb 9 19:00:23.008482 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 9 19:00:23.008582 systemd[1]: Finished disk-uuid.service. Feb 9 19:00:23.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.018557 systemd[1]: Starting verity-setup.service... Feb 9 19:00:23.068478 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 9 19:00:23.263821 systemd[1]: Found device dev-mapper-usr.device. Feb 9 19:00:23.269840 systemd[1]: Mounting sysusr-usr.mount... Feb 9 19:00:23.273822 systemd[1]: Finished verity-setup.service. Feb 9 19:00:23.278000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.348490 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 9 19:00:23.348486 systemd[1]: Mounted sysusr-usr.mount. Feb 9 19:00:23.350482 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 9 19:00:23.351271 systemd[1]: Starting ignition-setup.service... Feb 9 19:00:23.359634 systemd[1]: Starting parse-ip-for-networkd.service... 
Feb 9 19:00:23.372505 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:23.372531 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:23.372544 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:23.432429 systemd[1]: Finished parse-ip-for-networkd.service. Feb 9 19:00:23.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.436000 audit: BPF prog-id=9 op=LOAD Feb 9 19:00:23.437824 systemd[1]: Starting systemd-networkd.service... Feb 9 19:00:23.461090 systemd-networkd[833]: lo: Link UP Feb 9 19:00:23.461100 systemd-networkd[833]: lo: Gained carrier Feb 9 19:00:23.464727 systemd-networkd[833]: Enumeration completed Feb 9 19:00:23.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.464823 systemd[1]: Started systemd-networkd.service. Feb 9 19:00:23.468593 systemd[1]: Reached target network.target. Feb 9 19:00:23.471574 systemd-networkd[833]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:00:23.476282 systemd[1]: Starting iscsiuio.service... Feb 9 19:00:23.483201 systemd[1]: Started iscsiuio.service. Feb 9 19:00:23.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.488530 systemd[1]: Starting iscsid.service... Feb 9 19:00:23.493334 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 9 19:00:23.495371 iscsid[842]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:00:23.495371 iscsid[842]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier]. Feb 9 19:00:23.495371 iscsid[842]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 9 19:00:23.495371 iscsid[842]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 9 19:00:23.495371 iscsid[842]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 9 19:00:23.495371 iscsid[842]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 9 19:00:23.499000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.496877 systemd[1]: Started iscsid.service. Feb 9 19:00:23.515157 systemd[1]: Starting dracut-initqueue.service... Feb 9 19:00:23.532000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.530003 systemd[1]: Finished dracut-initqueue.service. Feb 9 19:00:23.533397 systemd[1]: Reached target remote-fs-pre.target. Feb 9 19:00:23.535316 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:00:23.539070 systemd[1]: Reached target remote-fs.target. Feb 9 19:00:23.541611 systemd[1]: Starting dracut-pre-mount.service... Feb 9 19:00:23.553339 systemd[1]: Finished dracut-pre-mount.service. Feb 9 19:00:23.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.571471 kernel: mlx5_core 01cf:00:02.0 enP463s1: Link up Feb 9 19:00:23.583909 systemd[1]: Finished ignition-setup.service. Feb 9 19:00:23.587000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:23.588548 systemd[1]: Starting ignition-fetch-offline.service... Feb 9 19:00:23.641475 kernel: hv_netvsc 002248a0-c225-0022-48a0-c225002248a0 eth0: Data path switched to VF: enP463s1 Feb 9 19:00:23.641650 systemd-networkd[833]: enP463s1: Link UP Feb 9 19:00:23.641768 systemd-networkd[833]: eth0: Link UP Feb 9 19:00:23.651010 systemd-networkd[833]: eth0: Gained carrier Feb 9 19:00:23.652864 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:00:23.658024 systemd-networkd[833]: enP463s1: Gained carrier Feb 9 19:00:23.678565 systemd-networkd[833]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:00:25.057829 systemd-networkd[833]: eth0: Gained IPv6LL Feb 9 19:00:27.064311 ignition[857]: Ignition 2.14.0 Feb 9 19:00:27.064330 ignition[857]: Stage: fetch-offline Feb 9 19:00:27.064425 ignition[857]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:27.064518 ignition[857]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:27.168565 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:27.168747 ignition[857]: parsed url from cmdline: "" Feb 9 19:00:27.168751 ignition[857]: no config URL provided Feb 9 19:00:27.168757 ignition[857]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:00:27.168765 ignition[857]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:00:27.168770 ignition[857]: failed to fetch config: resource requires networking Feb 9 19:00:27.172593 ignition[857]: Ignition finished successfully Feb 9 19:00:27.182160 systemd[1]: Finished ignition-fetch-offline.service. Feb 9 19:00:27.181000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.185530 systemd[1]: Starting ignition-fetch.service... Feb 9 19:00:27.205944 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 9 19:00:27.205975 kernel: audit: type=1130 audit(1707505227.181:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:27.198950 ignition[863]: Ignition 2.14.0 Feb 9 19:00:27.198957 ignition[863]: Stage: fetch Feb 9 19:00:27.199063 ignition[863]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:27.199086 ignition[863]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:27.202232 ignition[863]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:27.202417 ignition[863]: parsed url from cmdline: "" Feb 9 19:00:27.202423 ignition[863]: no config URL provided Feb 9 19:00:27.202435 ignition[863]: reading system config file "/usr/lib/ignition/user.ign" Feb 9 19:00:27.202446 ignition[863]: no config at "/usr/lib/ignition/user.ign" Feb 9 19:00:27.202519 ignition[863]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 9 19:00:27.311290 ignition[863]: GET result: OK Feb 9 19:00:27.311430 ignition[863]: config has been read from IMDS userdata Feb 9 19:00:27.311498 ignition[863]: parsing config with SHA512: 4471083de511380945e4c77d98b0a095a3bd51b07fa4454800b80f7060d54b916b93d8390a367bde6a78ceda9a7230e89b0645f8ccb748946162ac014b8e0083 Feb 9 19:00:27.342963 unknown[863]: fetched base config from "system" Feb 9 19:00:27.342977 unknown[863]: fetched base config from "system" Feb 9 19:00:27.344231 ignition[863]: fetch: fetch complete Feb 9 19:00:27.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.342985 unknown[863]: fetched user config from "azure" Feb 9 19:00:27.368485 kernel: audit: type=1130 audit(1707505227.349:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.344236 ignition[863]: fetch: fetch passed Feb 9 19:00:27.347987 systemd[1]: Finished ignition-fetch.service. Feb 9 19:00:27.344285 ignition[863]: Ignition finished successfully Feb 9 19:00:27.350903 systemd[1]: Starting ignition-kargs.service... Feb 9 19:00:27.372679 ignition[869]: Ignition 2.14.0 Feb 9 19:00:27.372685 ignition[869]: Stage: kargs Feb 9 19:00:27.372778 ignition[869]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:27.372803 ignition[869]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:27.377820 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:27.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.382501 systemd[1]: Finished ignition-kargs.service. Feb 9 19:00:27.398847 kernel: audit: type=1130 audit(1707505227.383:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.380326 ignition[869]: kargs: kargs passed Feb 9 19:00:27.396610 systemd[1]: Starting ignition-disks.service... 
Feb 9 19:00:27.380367 ignition[869]: Ignition finished successfully Feb 9 19:00:27.404150 ignition[875]: Ignition 2.14.0 Feb 9 19:00:27.404157 ignition[875]: Stage: disks Feb 9 19:00:27.404280 ignition[875]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:27.404303 ignition[875]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:27.407392 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:27.409935 ignition[875]: disks: disks passed Feb 9 19:00:27.412691 systemd[1]: Finished ignition-disks.service. Feb 9 19:00:27.427526 kernel: audit: type=1130 audit(1707505227.415:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.415000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.409990 ignition[875]: Ignition finished successfully Feb 9 19:00:27.416146 systemd[1]: Reached target initrd-root-device.target. Feb 9 19:00:27.432654 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:00:27.436109 systemd[1]: Reached target local-fs.target. Feb 9 19:00:27.439989 systemd[1]: Reached target sysinit.target. Feb 9 19:00:27.443519 systemd[1]: Reached target basic.target. Feb 9 19:00:27.447375 systemd[1]: Starting systemd-fsck-root.service... Feb 9 19:00:27.508729 systemd-fsck[884]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 9 19:00:27.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.514729 systemd[1]: Finished systemd-fsck-root.service. Feb 9 19:00:27.537289 kernel: audit: type=1130 audit(1707505227.516:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:27.518410 systemd[1]: Mounting sysroot.mount... Feb 9 19:00:27.548491 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 9 19:00:27.548829 systemd[1]: Mounted sysroot.mount. Feb 9 19:00:27.551971 systemd[1]: Reached target initrd-root-fs.target. Feb 9 19:00:27.591275 systemd[1]: Mounting sysroot-usr.mount... Feb 9 19:00:27.596509 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 9 19:00:27.600476 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 9 19:00:27.600516 systemd[1]: Reached target ignition-diskful.target. Feb 9 19:00:27.609070 systemd[1]: Mounted sysroot-usr.mount. Feb 9 19:00:27.649326 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:00:27.653040 systemd[1]: Starting initrd-setup-root.service... 
Feb 9 19:00:27.667476 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (895) Feb 9 19:00:27.667518 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:27.676591 initrd-setup-root[900]: cut: /sysroot/etc/passwd: No such file or directory Feb 9 19:00:27.689139 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:27.689166 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:27.691150 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:00:27.747840 initrd-setup-root[926]: cut: /sysroot/etc/group: No such file or directory Feb 9 19:00:27.754619 initrd-setup-root[934]: cut: /sysroot/etc/shadow: No such file or directory Feb 9 19:00:27.763082 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory Feb 9 19:00:28.291255 systemd[1]: Finished initrd-setup-root.service. Feb 9 19:00:28.293000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:28.294381 systemd[1]: Starting ignition-mount.service... Feb 9 19:00:28.316998 kernel: audit: type=1130 audit(1707505228.293:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:28.309645 systemd[1]: Starting sysroot-boot.service... Feb 9 19:00:28.320551 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 9 19:00:28.320682 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. Feb 9 19:00:28.341361 ignition[961]: INFO : Ignition 2.14.0 Feb 9 19:00:28.343466 ignition[961]: INFO : Stage: mount Feb 9 19:00:28.343466 ignition[961]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:28.343466 ignition[961]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:28.353149 ignition[961]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:28.355801 ignition[961]: INFO : mount: mount passed Feb 9 19:00:28.355801 ignition[961]: INFO : Ignition finished successfully Feb 9 19:00:28.372579 kernel: audit: type=1130 audit(1707505228.359:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:28.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:28.356108 systemd[1]: Finished ignition-mount.service. Feb 9 19:00:28.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:28.363100 systemd[1]: Finished sysroot-boot.service. Feb 9 19:00:28.389175 kernel: audit: type=1130 audit(1707505228.374:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:00:29.204073 coreos-metadata[894]: Feb 09 19:00:29.203 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 19:00:29.225499 coreos-metadata[894]: Feb 09 19:00:29.225 INFO Fetch successful Feb 9 19:00:29.260610 coreos-metadata[894]: Feb 09 19:00:29.260 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 9 19:00:29.278710 coreos-metadata[894]: Feb 09 19:00:29.278 INFO Fetch successful Feb 9 19:00:29.312234 coreos-metadata[894]: Feb 09 19:00:29.312 INFO wrote hostname ci-3510.3.2-a-54659eee1f to /sysroot/etc/hostname Feb 9 19:00:29.317582 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 9 19:00:29.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:29.334489 kernel: audit: type=1130 audit(1707505229.321:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:00:29.333938 systemd[1]: Starting ignition-files.service... Feb 9 19:00:29.344450 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 9 19:00:29.359474 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (974) Feb 9 19:00:29.359516 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 9 19:00:29.367152 kernel: BTRFS info (device sda6): using free space tree Feb 9 19:00:29.367177 kernel: BTRFS info (device sda6): has skinny extents Feb 9 19:00:29.375382 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 9 19:00:29.388631 ignition[993]: INFO : Ignition 2.14.0 Feb 9 19:00:29.388631 ignition[993]: INFO : Stage: files Feb 9 19:00:29.392518 ignition[993]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:00:29.392518 ignition[993]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:00:29.401025 ignition[993]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:00:29.411275 ignition[993]: DEBUG : files: compiled without relabeling support, skipping Feb 9 19:00:29.414412 ignition[993]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Feb 9 19:00:29.414412 ignition[993]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Feb 9 19:00:29.483027 ignition[993]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Feb 9 19:00:29.487128 ignition[993]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Feb 9 19:00:29.487128 ignition[993]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Feb 9 19:00:29.487128 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 9 19:00:29.487128 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1 Feb 9 19:00:29.483628 unknown[993]: wrote ssh authorized keys file for user: core Feb 9 19:00:30.158035 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Feb 9 19:00:30.299428 
ignition[993]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a Feb 9 19:00:30.306685 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz" Feb 9 19:00:30.306685 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:00:30.306685 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Feb 9 19:00:30.563282 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK Feb 9 19:00:30.672374 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Feb 9 19:00:30.678419 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 9 19:00:30.678419 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1 Feb 9 19:00:31.203775 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK Feb 9 19:00:31.366744 ignition[993]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540 Feb 9 19:00:31.374513 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz" Feb 9 19:00:31.374513 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:00:31.374513 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubectl: attempt #1 Feb 9 19:00:32.182864 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK Feb 9 19:00:55.079053 ignition[993]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 857e67001e74840518413593d90c6e64ad3f00d55fa44ad9a8e2ed6135392c908caff7ec19af18cbe10784b8f83afe687a0bc3bacbc9eee984cdeb9c0749cb83 Feb 9 19:00:55.086891 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubectl" Feb 9 19:00:55.086891 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubeadm" Feb 9 19:00:55.086891 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubeadm: attempt #1 Feb 9 19:00:55.867739 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK Feb 9 19:01:19.793997 ignition[993]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: f40216b7d14046931c58072d10c7122934eac5a23c08821371f8b08ac1779443ad11d3458a4c5dcde7cf80fc600a9fefb14b1942aa46a52330248d497ca88836 Feb 9 19:01:19.801198 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file 
"/sysroot/opt/bin/kubeadm" Feb 9 19:01:19.801198 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:01:19.801198 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET https://dl.k8s.io/release/v1.27.2/bin/linux/amd64/kubelet: attempt #1 Feb 9 19:01:20.608437 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): GET result: OK Feb 9 19:02:09.481505 ignition[993]: DEBUG : files: createFilesystemsFiles: createFiles: op(8): file matches expected sum of: a283da2224d456958b2cb99b4f6faf4457c4ed89e9e95f37d970c637f6a7f64ff4dd4d2bfce538759b2d2090933bece599a285ef8fd132eb383fece9a3941560 Feb 9 19:02:09.495673 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/opt/bin/kubelet" Feb 9 19:02:09.495673 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:02:09.495673 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json" Feb 9 19:02:09.495673 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:02:09.495673 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/cilium/cilium-cli/releases/download/v0.12.12/cilium-linux-amd64.tar.gz: attempt #1 Feb 9 19:02:10.122042 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Feb 9 19:02:10.849162 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/bin/cilium.tar.gz" Feb 9 19:02:10.854106 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/home/core/install.sh" Feb 9 19:02:10.854106 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/home/core/install.sh" Feb 9 19:02:10.854106 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(c): [started] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:02:10.854106 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(c): [finished] writing file "/sysroot/home/core/nginx.yaml" Feb 9 19:02:10.854106 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(d): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:02:10.854106 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(d): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Feb 9 19:02:10.854106 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:02:10.854106 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Feb 9 19:02:10.854106 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:02:10.854106 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/flatcar/update.conf" Feb 9 19:02:10.854106 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(10): [started] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 19:02:10.854106 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(10): oem 
config not found in "/usr/share/oem", looking on oem partition Feb 9 19:02:10.910516 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (996) Feb 9 19:02:10.910544 kernel: audit: type=1130 audit(1707505330.910:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.910000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.878622 systemd[1]: mnt-oem805989494.mount: Deactivated successfully. Feb 9 19:02:10.931368 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem805989494" Feb 9 19:02:10.931368 ignition[993]: CRITICAL : files: createFilesystemsFiles: createFiles: op(10): op(11): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem805989494": device or resource busy Feb 9 19:02:10.931368 ignition[993]: ERROR : files: createFilesystemsFiles: createFiles: op(10): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem805989494", trying btrfs: device or resource busy Feb 9 19:02:10.931368 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem805989494" Feb 9 19:02:10.931368 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(12): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem805989494" Feb 9 19:02:10.931368 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [started] unmounting "/mnt/oem805989494" Feb 9 19:02:10.931368 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(10): op(13): [finished] unmounting "/mnt/oem805989494" Feb 9 19:02:10.931368 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(10): [finished] writing file "/sysroot/etc/systemd/system/waagent.service" Feb 9 19:02:10.931368 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(14): [started] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:02:10.931368 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(14): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:02:10.931368 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(15): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3635179155" Feb 9 19:02:10.931368 ignition[993]: CRITICAL : files: createFilesystemsFiles: createFiles: op(14): op(15): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3635179155": device or resource busy Feb 9 19:02:10.931368 ignition[993]: ERROR : files: createFilesystemsFiles: createFiles: op(14): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem3635179155", trying btrfs: device or resource busy Feb 9 19:02:10.931368 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3635179155" Feb 9 19:02:11.019353 kernel: audit: type=1130 audit(1707505330.948:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:11.019378 kernel: audit: type=1131 audit(1707505330.949:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.898878 systemd[1]: mnt-oem3635179155.mount: Deactivated successfully. Feb 9 19:02:11.021536 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(16): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem3635179155" Feb 9 19:02:11.021536 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [started] unmounting "/mnt/oem3635179155" Feb 9 19:02:11.021536 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(14): op(17): [finished] unmounting "/mnt/oem3635179155" Feb 9 19:02:11.021536 ignition[993]: INFO : files: createFilesystemsFiles: createFiles: op(14): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(18): [started] processing unit "waagent.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(18): [finished] processing unit "waagent.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(19): [started] processing unit "nvidia.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(19): [finished] processing unit "nvidia.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(1a): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(1a): op(1b): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(1a): op(1b): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(1a): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(1c): [started] processing unit "prepare-critools.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(1c): op(1d): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(1c): op(1d): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(1c): [finished] processing unit "prepare-critools.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(1e): [started] processing unit "prepare-helm.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(1e): op(1f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:02:11.021536 ignition[993]: INFO : files: op(1e): op(1f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Feb 9 19:02:11.133363 kernel: audit: type=1130 audit(1707505331.051:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel 
msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.133392 kernel: audit: type=1130 audit(1707505331.093:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.133408 kernel: audit: type=1131 audit(1707505331.104:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.051000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:10.906581 systemd[1]: Finished ignition-files.service. Feb 9 19:02:11.135255 ignition[993]: INFO : files: op(1e): [finished] processing unit "prepare-helm.service" Feb 9 19:02:11.135255 ignition[993]: INFO : files: op(20): [started] setting preset to enabled for "waagent.service" Feb 9 19:02:11.135255 ignition[993]: INFO : files: op(20): [finished] setting preset to enabled for "waagent.service" Feb 9 19:02:11.135255 ignition[993]: INFO : files: op(21): [started] setting preset to enabled for "nvidia.service" Feb 9 19:02:11.135255 ignition[993]: INFO : files: op(21): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:02:11.135255 ignition[993]: INFO : files: op(22): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:02:11.135255 ignition[993]: INFO : files: op(22): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:02:11.135255 ignition[993]: INFO : files: op(23): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:02:11.135255 ignition[993]: INFO : files: op(23): [finished] setting preset to enabled for "prepare-critools.service" Feb 9 19:02:11.135255 ignition[993]: INFO : files: op(24): [started] setting preset to enabled for "prepare-helm.service" Feb 9 19:02:11.135255 ignition[993]: INFO : files: op(24): [finished] setting preset to enabled for "prepare-helm.service" Feb 9 19:02:11.135255 ignition[993]: INFO : files: createResultFile: createFiles: op(25): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:02:11.135255 ignition[993]: INFO : files: createResultFile: createFiles: op(25): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:02:11.135255 ignition[993]: INFO : files: files passed Feb 9 19:02:11.135255 ignition[993]: INFO : Ignition finished successfully Feb 9 19:02:11.184244 initrd-setup-root-after-ignition[1018]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:02:10.929626 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:02:10.938091 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). 
Feb 9 19:02:10.938889 systemd[1]: Starting ignition-quench.service... Feb 9 19:02:10.943552 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:02:10.943640 systemd[1]: Finished ignition-quench.service. Feb 9 19:02:11.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.048657 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:02:11.218858 kernel: audit: type=1130 audit(1707505331.204:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.051641 systemd[1]: Reached target ignition-complete.target. Feb 9 19:02:11.068533 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:02:11.087668 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:02:11.087757 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:02:11.105047 systemd[1]: Reached target initrd-fs.target. Feb 9 19:02:11.117673 systemd[1]: Reached target initrd.target. Feb 9 19:02:11.129208 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:02:11.190536 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:02:11.202378 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:02:11.243000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.221557 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:02:11.260122 kernel: audit: type=1131 audit(1707505331.243:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.229402 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:02:11.232482 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:02:11.236074 systemd[1]: Stopped target timers.target. Feb 9 19:02:11.240648 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:02:11.240783 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:02:11.255995 systemd[1]: Stopped target initrd.target. Feb 9 19:02:11.260234 systemd[1]: Stopped target basic.target. Feb 9 19:02:11.263857 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:02:11.267869 systemd[1]: Stopped target ignition-diskful.target. Feb 9 19:02:11.271472 systemd[1]: Stopped target initrd-root-device.target. Feb 9 19:02:11.275643 systemd[1]: Stopped target remote-fs.target. Feb 9 19:02:11.279233 systemd[1]: Stopped target remote-fs-pre.target. Feb 9 19:02:11.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.282709 systemd[1]: Stopped target sysinit.target. Feb 9 19:02:11.316069 kernel: audit: type=1131 audit(1707505331.300:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.286027 systemd[1]: Stopped target local-fs.target. Feb 9 19:02:11.290012 systemd[1]: Stopped target local-fs-pre.target. 
Feb 9 19:02:11.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.293805 systemd[1]: Stopped target swap.target. Feb 9 19:02:11.334796 kernel: audit: type=1131 audit(1707505331.319:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.297395 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Feb 9 19:02:11.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.297552 systemd[1]: Stopped dracut-pre-mount.service. Feb 9 19:02:11.338000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.342000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.311335 systemd[1]: Stopped target cryptsetup.target. Feb 9 19:02:11.316149 systemd[1]: dracut-initqueue.service: Deactivated successfully. Feb 9 19:02:11.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.355000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.363004 iscsid[842]: iscsid shutting down. Feb 9 19:02:11.316298 systemd[1]: Stopped dracut-initqueue.service. Feb 9 19:02:11.366690 ignition[1031]: INFO : Ignition 2.14.0 Feb 9 19:02:11.366690 ignition[1031]: INFO : Stage: umount Feb 9 19:02:11.366690 ignition[1031]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 9 19:02:11.366690 ignition[1031]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 9 19:02:11.372000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.330583 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Feb 9 19:02:11.387000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.387711 ignition[1031]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 9 19:02:11.387711 ignition[1031]: INFO : umount: umount passed Feb 9 19:02:11.387711 ignition[1031]: INFO : Ignition finished successfully Feb 9 19:02:11.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:11.392000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.396000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.401000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.403000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.330755 systemd[1]: Stopped initrd-setup-root-after-ignition.service. Feb 9 19:02:11.405000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.334832 systemd[1]: ignition-files.service: Deactivated successfully. Feb 9 19:02:11.334986 systemd[1]: Stopped ignition-files.service. Feb 9 19:02:11.338865 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Feb 9 19:02:11.339013 systemd[1]: Stopped flatcar-metadata-hostname.service. Feb 9 19:02:11.343914 systemd[1]: Stopping ignition-mount.service... Feb 9 19:02:11.346539 systemd[1]: Stopping iscsid.service... Feb 9 19:02:11.349223 systemd[1]: Stopping sysroot-boot.service... Feb 9 19:02:11.351029 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Feb 9 19:02:11.351205 systemd[1]: Stopped systemd-udev-trigger.service. Feb 9 19:02:11.353619 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Feb 9 19:02:11.353774 systemd[1]: Stopped dracut-pre-trigger.service. Feb 9 19:02:11.357826 systemd[1]: iscsid.service: Deactivated successfully. Feb 9 19:02:11.357956 systemd[1]: Stopped iscsid.service. Feb 9 19:02:11.380571 systemd[1]: Stopping iscsiuio.service... Feb 9 19:02:11.383537 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 9 19:02:11.383634 systemd[1]: Stopped iscsiuio.service. Feb 9 19:02:11.387842 systemd[1]: initrd-cleanup.service: Deactivated successfully. Feb 9 19:02:11.387930 systemd[1]: Finished initrd-cleanup.service. Feb 9 19:02:11.392558 systemd[1]: ignition-mount.service: Deactivated successfully. Feb 9 19:02:11.392640 systemd[1]: Stopped ignition-mount.service. Feb 9 19:02:11.396896 systemd[1]: ignition-disks.service: Deactivated successfully. Feb 9 19:02:11.396945 systemd[1]: Stopped ignition-disks.service. Feb 9 19:02:11.399728 systemd[1]: ignition-kargs.service: Deactivated successfully. Feb 9 19:02:11.399776 systemd[1]: Stopped ignition-kargs.service. Feb 9 19:02:11.401601 systemd[1]: ignition-fetch.service: Deactivated successfully. Feb 9 19:02:11.401647 systemd[1]: Stopped ignition-fetch.service. Feb 9 19:02:11.403416 systemd[1]: Stopped target network.target. Feb 9 19:02:11.405490 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Feb 9 19:02:11.405542 systemd[1]: Stopped ignition-fetch-offline.service. 
Feb 9 19:02:11.406408 systemd[1]: Stopped target paths.target. Feb 9 19:02:11.407138 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Feb 9 19:02:11.411501 systemd[1]: Stopped systemd-ask-password-console.path. Feb 9 19:02:11.416925 systemd[1]: Stopped target slices.target. Feb 9 19:02:11.472616 systemd[1]: Stopped target sockets.target. Feb 9 19:02:11.474385 systemd[1]: iscsid.socket: Deactivated successfully. Feb 9 19:02:11.474433 systemd[1]: Closed iscsid.socket. Feb 9 19:02:11.484000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.478438 systemd[1]: iscsiuio.socket: Deactivated successfully. Feb 9 19:02:11.478490 systemd[1]: Closed iscsiuio.socket. Feb 9 19:02:11.481005 systemd[1]: ignition-setup.service: Deactivated successfully. Feb 9 19:02:11.481063 systemd[1]: Stopped ignition-setup.service. Feb 9 19:02:11.484691 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:02:11.488140 systemd[1]: Stopping systemd-resolved.service... Feb 9 19:02:11.490723 systemd-networkd[833]: eth0: DHCPv6 lease lost Feb 9 19:02:11.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.501876 systemd[1]: sysroot-boot.mount: Deactivated successfully. Feb 9 19:02:11.502400 systemd[1]: systemd-resolved.service: Deactivated successfully. Feb 9 19:02:11.502515 systemd[1]: Stopped systemd-resolved.service. Feb 9 19:02:11.510000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.508590 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:02:11.508702 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:02:11.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.512492 systemd[1]: sysroot-boot.service: Deactivated successfully. Feb 9 19:02:11.512575 systemd[1]: Stopped sysroot-boot.service. Feb 9 19:02:11.517000 audit: BPF prog-id=6 op=UNLOAD Feb 9 19:02:11.521268 systemd[1]: systemd-networkd.socket: Deactivated successfully. Feb 9 19:02:11.521000 audit: BPF prog-id=9 op=UNLOAD Feb 9 19:02:11.521318 systemd[1]: Closed systemd-networkd.socket. Feb 9 19:02:11.526630 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 9 19:02:11.526687 systemd[1]: Stopped initrd-setup-root.service. Feb 9 19:02:11.532000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.532982 systemd[1]: Stopping network-cleanup.service... Feb 9 19:02:11.536379 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Feb 9 19:02:11.536441 systemd[1]: Stopped parse-ip-for-networkd.service. Feb 9 19:02:11.540000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.540840 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Feb 9 19:02:11.542557 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:02:11.547000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.548085 systemd[1]: systemd-modules-load.service: Deactivated successfully. Feb 9 19:02:11.548135 systemd[1]: Stopped systemd-modules-load.service. Feb 9 19:02:11.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.553986 systemd[1]: Stopping systemd-udevd.service... Feb 9 19:02:11.557597 systemd[1]: systemd-udevd.service: Deactivated successfully. Feb 9 19:02:11.559739 systemd[1]: Stopped systemd-udevd.service. Feb 9 19:02:11.563000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.564146 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Feb 9 19:02:11.564226 systemd[1]: Closed systemd-udevd-control.socket. Feb 9 19:02:11.566252 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Feb 9 19:02:11.568549 systemd[1]: Closed systemd-udevd-kernel.socket. Feb 9 19:02:11.572344 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Feb 9 19:02:11.578000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.574693 systemd[1]: Stopped dracut-pre-udev.service. Feb 9 19:02:11.582498 systemd[1]: dracut-cmdline.service: Deactivated successfully. Feb 9 19:02:11.582557 systemd[1]: Stopped dracut-cmdline.service. Feb 9 19:02:11.589000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.589387 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Feb 9 19:02:11.589438 systemd[1]: Stopped dracut-cmdline-ask.service. Feb 9 19:02:11.593000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.594424 systemd[1]: Starting initrd-udevadm-cleanup-db.service... Feb 9 19:02:11.599924 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Feb 9 19:02:11.604000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.602005 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service. Feb 9 19:02:11.608000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.604799 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Feb 9 19:02:11.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:11.614000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.614000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.604846 systemd[1]: Stopped kmod-static-nodes.service. Feb 9 19:02:11.608512 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Feb 9 19:02:11.608559 systemd[1]: Stopped systemd-vconsole-setup.service. Feb 9 19:02:11.610880 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Feb 9 19:02:11.610970 systemd[1]: Finished initrd-udevadm-cleanup-db.service. Feb 9 19:02:11.651474 kernel: hv_netvsc 002248a0-c225-0022-48a0-c225002248a0 eth0: Data path switched from VF: enP463s1 Feb 9 19:02:11.673392 systemd[1]: network-cleanup.service: Deactivated successfully. Feb 9 19:02:11.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:11.673543 systemd[1]: Stopped network-cleanup.service. Feb 9 19:02:11.678612 systemd[1]: Reached target initrd-switch-root.target. Feb 9 19:02:11.683581 systemd[1]: Starting initrd-switch-root.service... Feb 9 19:02:11.697375 systemd[1]: Switching root. Feb 9 19:02:11.721624 systemd-journald[183]: Journal stopped Feb 9 19:02:25.643128 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 9 19:02:25.643159 kernel: SELinux: Class mctp_socket not defined in policy. Feb 9 19:02:25.643179 kernel: SELinux: Class anon_inode not defined in policy. Feb 9 19:02:25.643193 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 9 19:02:25.643202 kernel: SELinux: policy capability network_peer_controls=1 Feb 9 19:02:25.643209 kernel: SELinux: policy capability open_perms=1 Feb 9 19:02:25.643228 kernel: SELinux: policy capability extended_socket_class=1 Feb 9 19:02:25.643246 kernel: SELinux: policy capability always_check_network=0 Feb 9 19:02:25.643262 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 9 19:02:25.643272 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 9 19:02:25.643280 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 9 19:02:25.643297 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 9 19:02:25.643315 systemd[1]: Successfully loaded SELinux policy in 322.462ms. Feb 9 19:02:25.643326 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 24.047ms. Feb 9 19:02:25.643345 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:02:25.643363 systemd[1]: Detected virtualization microsoft. Feb 9 19:02:25.643383 systemd[1]: Detected architecture x86-64. Feb 9 19:02:25.643400 systemd[1]: Detected first boot. Feb 9 19:02:25.643416 systemd[1]: Hostname set to . Feb 9 19:02:25.643426 systemd[1]: Initializing machine ID from random generator. 
Feb 9 19:02:25.643445 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). Feb 9 19:02:25.643463 kernel: kauditd_printk_skb: 42 callbacks suppressed Feb 9 19:02:25.643478 kernel: audit: type=1400 audit(1707505336.341:90): avc: denied { associate } for pid=1064 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:02:25.643494 kernel: audit: type=1300 audit(1707505336.341:90): arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:25.643506 kernel: audit: type=1327 audit(1707505336.341:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:02:25.643521 kernel: audit: type=1400 audit(1707505336.348:91): avc: denied { associate } for pid=1064 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:02:25.643536 kernel: audit: type=1300 audit(1707505336.348:91): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:25.643545 kernel: audit: type=1307 audit(1707505336.348:91): cwd="/" Feb 9 19:02:25.643559 kernel: audit: type=1302 audit(1707505336.348:91): item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:25.643577 kernel: audit: type=1302 audit(1707505336.348:91): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:25.643597 kernel: audit: type=1327 audit(1707505336.348:91): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:02:25.643615 systemd[1]: Populated /etc with preset unit settings. Feb 9 19:02:25.643636 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:02:25.643646 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. 
Feb 9 19:02:25.643658 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:02:25.643674 kernel: audit: type=1334 audit(1707505345.122:92): prog-id=12 op=LOAD Feb 9 19:02:25.643692 kernel: audit: type=1334 audit(1707505345.122:93): prog-id=3 op=UNLOAD Feb 9 19:02:25.643706 kernel: audit: type=1334 audit(1707505345.132:94): prog-id=13 op=LOAD Feb 9 19:02:25.643718 kernel: audit: type=1334 audit(1707505345.137:95): prog-id=14 op=LOAD Feb 9 19:02:25.643729 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 9 19:02:25.643756 kernel: audit: type=1334 audit(1707505345.137:96): prog-id=4 op=UNLOAD Feb 9 19:02:25.643772 kernel: audit: type=1334 audit(1707505345.137:97): prog-id=5 op=UNLOAD Feb 9 19:02:25.643783 kernel: audit: type=1131 audit(1707505345.137:98): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.643793 systemd[1]: Stopped initrd-switch-root.service. Feb 9 19:02:25.643812 kernel: audit: type=1334 audit(1707505345.174:99): prog-id=12 op=UNLOAD Feb 9 19:02:25.643826 kernel: audit: type=1130 audit(1707505345.182:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.643836 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 9 19:02:25.643854 kernel: audit: type=1131 audit(1707505345.182:101): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.643868 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 9 19:02:25.643878 systemd[1]: Created slice system-addon\x2drun.slice. Feb 9 19:02:25.643890 systemd[1]: Created slice system-getty.slice. Feb 9 19:02:25.643908 systemd[1]: Created slice system-modprobe.slice. Feb 9 19:02:25.643926 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 9 19:02:25.643940 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 9 19:02:25.643953 systemd[1]: Created slice system-systemd\x2dfsck.slice. Feb 9 19:02:25.643972 systemd[1]: Created slice user.slice. Feb 9 19:02:25.643988 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:02:25.643998 systemd[1]: Started systemd-ask-password-wall.path. Feb 9 19:02:25.644010 systemd[1]: Set up automount boot.automount. Feb 9 19:02:25.644030 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 9 19:02:25.644052 systemd[1]: Stopped target initrd-switch-root.target. Feb 9 19:02:25.644063 systemd[1]: Stopped target initrd-fs.target. Feb 9 19:02:25.644079 systemd[1]: Stopped target initrd-root-fs.target. Feb 9 19:02:25.644097 systemd[1]: Reached target integritysetup.target. Feb 9 19:02:25.644117 systemd[1]: Reached target remote-cryptsetup.target. Feb 9 19:02:25.644138 systemd[1]: Reached target remote-fs.target. Feb 9 19:02:25.644158 systemd[1]: Reached target slices.target. Feb 9 19:02:25.644172 systemd[1]: Reached target swap.target. Feb 9 19:02:25.644181 systemd[1]: Reached target torcx.target. Feb 9 19:02:25.644199 systemd[1]: Reached target veritysetup.target. 
Feb 9 19:02:25.644217 systemd[1]: Listening on systemd-coredump.socket. Feb 9 19:02:25.644228 systemd[1]: Listening on systemd-initctl.socket. Feb 9 19:02:25.644238 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:02:25.644262 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:02:25.644277 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:02:25.644290 systemd[1]: Listening on systemd-userdbd.socket. Feb 9 19:02:25.644308 systemd[1]: Mounting dev-hugepages.mount... Feb 9 19:02:25.644319 systemd[1]: Mounting dev-mqueue.mount... Feb 9 19:02:25.644332 systemd[1]: Mounting media.mount... Feb 9 19:02:25.644349 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:02:25.644365 systemd[1]: Mounting sys-kernel-debug.mount... Feb 9 19:02:25.644380 systemd[1]: Mounting sys-kernel-tracing.mount... Feb 9 19:02:25.644395 systemd[1]: Mounting tmp.mount... Feb 9 19:02:25.644444 systemd[1]: Starting flatcar-tmpfiles.service... Feb 9 19:02:25.646485 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 9 19:02:25.646509 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:02:25.646527 systemd[1]: Starting modprobe@configfs.service... Feb 9 19:02:25.646544 systemd[1]: Starting modprobe@dm_mod.service... Feb 9 19:02:25.646561 systemd[1]: Starting modprobe@drm.service... Feb 9 19:02:25.646578 systemd[1]: Starting modprobe@efi_pstore.service... Feb 9 19:02:25.646594 systemd[1]: Starting modprobe@fuse.service... Feb 9 19:02:25.646611 systemd[1]: Starting modprobe@loop.service... Feb 9 19:02:25.646635 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 9 19:02:25.646650 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 9 19:02:25.646666 systemd[1]: Stopped systemd-fsck-root.service. Feb 9 19:02:25.646684 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 9 19:02:25.646702 systemd[1]: Stopped systemd-fsck-usr.service. Feb 9 19:02:25.646719 systemd[1]: Stopped systemd-journald.service. Feb 9 19:02:25.646736 systemd[1]: Starting systemd-journald.service... Feb 9 19:02:25.646753 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:02:25.646769 systemd[1]: Starting systemd-network-generator.service... Feb 9 19:02:25.646789 kernel: loop: module loaded Feb 9 19:02:25.646805 systemd[1]: Starting systemd-remount-fs.service... Feb 9 19:02:25.646822 systemd[1]: Starting systemd-udev-trigger.service... Feb 9 19:02:25.646839 systemd[1]: verity-setup.service: Deactivated successfully. Feb 9 19:02:25.646855 systemd[1]: Stopped verity-setup.service. Feb 9 19:02:25.646872 kernel: fuse: init (API version 7.34) Feb 9 19:02:25.646888 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 9 19:02:25.646905 systemd[1]: Mounted dev-hugepages.mount. Feb 9 19:02:25.652741 systemd[1]: Mounted dev-mqueue.mount. Feb 9 19:02:25.652776 systemd[1]: Mounted media.mount. Feb 9 19:02:25.652795 systemd[1]: Mounted sys-kernel-debug.mount. Feb 9 19:02:25.652814 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 9 19:02:25.652833 systemd[1]: Mounted tmp.mount. Feb 9 19:02:25.652852 systemd[1]: Finished flatcar-tmpfiles.service. Feb 9 19:02:25.652868 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:02:25.652882 systemd[1]: modprobe@configfs.service: Deactivated successfully. 
Feb 9 19:02:25.652899 systemd-journald[1164]: Journal started Feb 9 19:02:25.652958 systemd-journald[1164]: Runtime Journal (/run/log/journal/be6fd7f760db4d1c8aa1ebad96e2d386) is 8.0M, max 159.0M, 151.0M free. Feb 9 19:02:14.183000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 9 19:02:14.897000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:02:14.914000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:02:14.914000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 9 19:02:14.914000 audit: BPF prog-id=10 op=LOAD Feb 9 19:02:14.914000 audit: BPF prog-id=10 op=UNLOAD Feb 9 19:02:14.914000 audit: BPF prog-id=11 op=LOAD Feb 9 19:02:14.914000 audit: BPF prog-id=11 op=UNLOAD Feb 9 19:02:16.341000 audit[1064]: AVC avc: denied { associate } for pid=1064 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 9 19:02:16.341000 audit[1064]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c0001178d2 a1=c00002ae58 a2=c000029100 a3=32 items=0 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:16.341000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 19:02:16.348000 audit[1064]: AVC avc: denied { associate } for pid=1064 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 9 19:02:16.348000 audit[1064]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c0001179a9 a2=1ed a3=0 items=2 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:16.348000 audit: CWD cwd="/" Feb 9 19:02:16.348000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:16.348000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:16.348000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 9 
19:02:25.122000 audit: BPF prog-id=12 op=LOAD Feb 9 19:02:25.122000 audit: BPF prog-id=3 op=UNLOAD Feb 9 19:02:25.132000 audit: BPF prog-id=13 op=LOAD Feb 9 19:02:25.137000 audit: BPF prog-id=14 op=LOAD Feb 9 19:02:25.137000 audit: BPF prog-id=4 op=UNLOAD Feb 9 19:02:25.137000 audit: BPF prog-id=5 op=UNLOAD Feb 9 19:02:25.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.174000 audit: BPF prog-id=12 op=UNLOAD Feb 9 19:02:25.182000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.513000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.523000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.531000 audit: BPF prog-id=15 op=LOAD Feb 9 19:02:25.531000 audit: BPF prog-id=16 op=LOAD Feb 9 19:02:25.531000 audit: BPF prog-id=17 op=LOAD Feb 9 19:02:25.531000 audit: BPF prog-id=13 op=UNLOAD Feb 9 19:02:25.531000 audit: BPF prog-id=14 op=UNLOAD Feb 9 19:02:25.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.639000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 9 19:02:25.639000 audit[1164]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7fff3b35f140 a2=4000 a3=7fff3b35f1dc items=0 ppid=1 pid=1164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:25.639000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 9 19:02:25.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:25.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:16.325420 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:02:25.120688 systemd[1]: Queued start job for default target multi-user.target. Feb 9 19:02:16.326015 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:02:25.138595 systemd[1]: systemd-journald.service: Deactivated successfully. Feb 9 19:02:16.326033 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:02:16.326067 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 9 19:02:16.326080 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 9 19:02:16.326120 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 9 19:02:16.326133 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 9 19:02:16.326333 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 9 19:02:16.326369 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 9 19:02:16.326381 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 9 19:02:16.326759 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 9 19:02:16.326793 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 9 19:02:16.326812 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 9 19:02:16.326826 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or 
directory" path=/usr/share/oem/torcx/store Feb 9 19:02:16.326842 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 9 19:02:16.326855 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:16Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 9 19:02:23.991089 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:23Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:02:23.991379 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:23Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:02:23.991714 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:23Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:02:23.992235 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:23Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 9 19:02:23.992316 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:23Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 9 19:02:23.992419 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-09T19:02:23Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 9 19:02:25.660388 systemd[1]: Finished modprobe@configfs.service. Feb 9 19:02:25.661000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.666470 systemd[1]: Started systemd-journald.service. Feb 9 19:02:25.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.668201 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 9 19:02:25.668348 systemd[1]: Finished modprobe@dm_mod.service. 
Feb 9 19:02:25.670000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.670000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.670727 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 9 19:02:25.670865 systemd[1]: Finished modprobe@drm.service. Feb 9 19:02:25.672000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.672000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.673060 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 9 19:02:25.673195 systemd[1]: Finished modprobe@efi_pstore.service. Feb 9 19:02:25.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.674000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.675671 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 9 19:02:25.675810 systemd[1]: Finished modprobe@fuse.service. Feb 9 19:02:25.677000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.678073 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 9 19:02:25.678209 systemd[1]: Finished modprobe@loop.service. Feb 9 19:02:25.679000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.679000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.680422 systemd[1]: Finished systemd-network-generator.service. Feb 9 19:02:25.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.683024 systemd[1]: Finished systemd-remount-fs.service. 
Feb 9 19:02:25.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.685670 systemd[1]: Reached target network-pre.target. Feb 9 19:02:25.689588 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 9 19:02:25.693992 systemd[1]: Mounting sys-kernel-config.mount... Feb 9 19:02:25.697233 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 9 19:02:25.716377 systemd[1]: Starting systemd-hwdb-update.service... Feb 9 19:02:25.720387 systemd[1]: Starting systemd-journal-flush.service... Feb 9 19:02:25.722889 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 9 19:02:25.724627 systemd[1]: Starting systemd-random-seed.service... Feb 9 19:02:25.726769 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 9 19:02:25.728703 systemd[1]: Starting systemd-sysusers.service... Feb 9 19:02:25.735268 systemd[1]: Finished systemd-modules-load.service. Feb 9 19:02:25.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.737753 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 9 19:02:25.740191 systemd[1]: Mounted sys-kernel-config.mount. Feb 9 19:02:25.743810 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:02:25.758490 systemd-journald[1164]: Time spent on flushing to /var/log/journal/be6fd7f760db4d1c8aa1ebad96e2d386 is 24.755ms for 1191 entries. Feb 9 19:02:25.758490 systemd-journald[1164]: System Journal (/var/log/journal/be6fd7f760db4d1c8aa1ebad96e2d386) is 8.0M, max 2.6G, 2.6G free. Feb 9 19:02:25.862001 systemd-journald[1164]: Received client request to flush runtime journal. Feb 9 19:02:25.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.787000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.778173 systemd[1]: Finished systemd-random-seed.service. Feb 9 19:02:25.863277 udevadm[1188]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in. Feb 9 19:02:25.780659 systemd[1]: Reached target first-boot-complete.target. Feb 9 19:02:25.785844 systemd[1]: Finished systemd-udev-trigger.service. Feb 9 19:02:25.789866 systemd[1]: Starting systemd-udev-settle.service... Feb 9 19:02:25.863171 systemd[1]: Finished systemd-journal-flush.service. Feb 9 19:02:25.865000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:25.874000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:25.872271 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:02:26.381959 systemd[1]: Finished systemd-sysusers.service. Feb 9 19:02:26.383000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:26.385400 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:02:26.747959 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:02:26.750000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:27.166440 systemd[1]: Finished systemd-hwdb-update.service. Feb 9 19:02:27.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:27.169000 audit: BPF prog-id=18 op=LOAD Feb 9 19:02:27.169000 audit: BPF prog-id=19 op=LOAD Feb 9 19:02:27.169000 audit: BPF prog-id=7 op=UNLOAD Feb 9 19:02:27.169000 audit: BPF prog-id=8 op=UNLOAD Feb 9 19:02:27.170321 systemd[1]: Starting systemd-udevd.service... Feb 9 19:02:27.188110 systemd-udevd[1193]: Using default interface naming scheme 'v252'. Feb 9 19:02:27.394000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:27.395000 audit: BPF prog-id=20 op=LOAD Feb 9 19:02:27.392468 systemd[1]: Started systemd-udevd.service. Feb 9 19:02:27.397428 systemd[1]: Starting systemd-networkd.service... Feb 9 19:02:27.436385 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped. Feb 9 19:02:27.499000 audit: BPF prog-id=21 op=LOAD Feb 9 19:02:27.499000 audit: BPF prog-id=22 op=LOAD Feb 9 19:02:27.499000 audit: BPF prog-id=23 op=LOAD Feb 9 19:02:27.501166 systemd[1]: Starting systemd-userdbd.service... Feb 9 19:02:27.529608 kernel: hv_utils: Registering HyperV Utility Driver Feb 9 19:02:27.529714 kernel: hv_vmbus: registering driver hv_utils Feb 9 19:02:27.543468 kernel: hv_utils: Heartbeat IC version 3.0 Feb 9 19:02:27.543562 kernel: hv_utils: Shutdown IC version 3.2 Feb 9 19:02:27.543589 kernel: hv_utils: TimeSync IC version 4.0 Feb 9 19:02:28.941289 systemd[1]: Started systemd-userdbd.service. Feb 9 19:02:28.943329 kernel: hv_vmbus: registering driver hyperv_fb Feb 9 19:02:28.942000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:02:28.947637 kernel: mousedev: PS/2 mouse device common for all mice Feb 9 19:02:27.521000 audit[1218]: AVC avc: denied { confidentiality } for pid=1218 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 9 19:02:28.970596 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Feb 9 19:02:28.970673 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Feb 9 19:02:28.978351 kernel: hv_vmbus: registering driver hv_balloon Feb 9 19:02:28.978458 kernel: Console: switching to colour dummy device 80x25 Feb 9 19:02:28.984203 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 19:02:28.989701 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Feb 9 19:02:27.521000 audit[1218]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=5563497f1210 a1=f884 a2=7f6c1f92cbc5 a3=5 items=12 ppid=1193 pid=1218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:27.521000 audit: CWD cwd="/" Feb 9 19:02:27.521000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:27.521000 audit: PATH item=1 name=(null) inode=15659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:27.521000 audit: PATH item=2 name=(null) inode=15659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:27.521000 audit: PATH item=3 name=(null) inode=15660 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:27.521000 audit: PATH item=4 name=(null) inode=15659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:27.521000 audit: PATH item=5 name=(null) inode=15661 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:27.521000 audit: PATH item=6 name=(null) inode=15659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:27.521000 audit: PATH item=7 name=(null) inode=15662 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:27.521000 audit: PATH item=8 name=(null) inode=15659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:27.521000 audit: PATH item=9 name=(null) inode=15663 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:27.521000 audit: PATH item=10 name=(null) inode=15659 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT 
cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:27.521000 audit: PATH item=11 name=(null) inode=15664 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 9 19:02:27.521000 audit: PROCTITLE proctitle="(udev-worker)" Feb 9 19:02:29.139574 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1207) Feb 9 19:02:29.161569 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Feb 9 19:02:29.200692 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. Feb 9 19:02:29.270932 systemd[1]: Finished systemd-udev-settle.service. Feb 9 19:02:29.273000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:29.275005 systemd[1]: Starting lvm2-activation-early.service... Feb 9 19:02:29.361282 systemd-networkd[1205]: lo: Link UP Feb 9 19:02:29.361295 systemd-networkd[1205]: lo: Gained carrier Feb 9 19:02:29.361906 systemd-networkd[1205]: Enumeration completed Feb 9 19:02:29.362027 systemd[1]: Started systemd-networkd.service. Feb 9 19:02:29.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:29.365561 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:02:29.408947 systemd-networkd[1205]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:02:29.466581 kernel: mlx5_core 01cf:00:02.0 enP463s1: Link up Feb 9 19:02:29.508586 kernel: hv_netvsc 002248a0-c225-0022-48a0-c225002248a0 eth0: Data path switched to VF: enP463s1 Feb 9 19:02:29.509765 systemd-networkd[1205]: enP463s1: Link UP Feb 9 19:02:29.510110 systemd-networkd[1205]: eth0: Link UP Feb 9 19:02:29.510213 systemd-networkd[1205]: eth0: Gained carrier Feb 9 19:02:29.515859 systemd-networkd[1205]: enP463s1: Gained carrier Feb 9 19:02:29.546712 systemd-networkd[1205]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:02:29.635775 lvm[1269]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:02:29.665729 systemd[1]: Finished lvm2-activation-early.service. Feb 9 19:02:29.667000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:29.668284 systemd[1]: Reached target cryptsetup.target. Feb 9 19:02:29.671894 systemd[1]: Starting lvm2-activation.service... Feb 9 19:02:29.676390 lvm[1271]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Feb 9 19:02:29.699630 systemd[1]: Finished lvm2-activation.service. Feb 9 19:02:29.701000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:29.701947 systemd[1]: Reached target local-fs-pre.target. Feb 9 19:02:29.704050 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Feb 9 19:02:29.704086 systemd[1]: Reached target local-fs.target. 
Feb 9 19:02:29.705948 systemd[1]: Reached target machines.target. Feb 9 19:02:29.709049 systemd[1]: Starting ldconfig.service... Feb 9 19:02:29.711118 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met. Feb 9 19:02:29.711223 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:02:29.712414 systemd[1]: Starting systemd-boot-update.service... Feb 9 19:02:29.715696 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service... Feb 9 19:02:29.719370 systemd[1]: Starting systemd-machine-id-commit.service... Feb 9 19:02:29.721354 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:02:29.721445 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met. Feb 9 19:02:29.722658 systemd[1]: Starting systemd-tmpfiles-setup.service... Feb 9 19:02:29.749875 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1273 (bootctl) Feb 9 19:02:29.751305 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service... Feb 9 19:02:29.769720 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Feb 9 19:02:29.772840 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service. Feb 9 19:02:29.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:29.826259 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Feb 9 19:02:29.889193 systemd-tmpfiles[1276]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Feb 9 19:02:30.022100 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Feb 9 19:02:30.022865 systemd[1]: Finished systemd-machine-id-commit.service. Feb 9 19:02:30.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.648578 systemd-fsck[1281]: fsck.fat 4.2 (2021-01-31) Feb 9 19:02:30.648578 systemd-fsck[1281]: /dev/sda1: 789 files, 115339/258078 clusters Feb 9 19:02:30.651117 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service. Feb 9 19:02:30.650000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.656569 systemd[1]: Mounting boot.mount... Feb 9 19:02:30.673046 systemd[1]: Mounted boot.mount. Feb 9 19:02:30.686704 systemd[1]: Finished systemd-boot-update.service. Feb 9 19:02:30.688000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.861881 systemd[1]: Finished systemd-tmpfiles-setup.service. 
Feb 9 19:02:30.864000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.866063 systemd[1]: Starting audit-rules.service... Feb 9 19:02:30.869358 systemd[1]: Starting clean-ca-certificates.service... Feb 9 19:02:30.873253 systemd[1]: Starting systemd-journal-catalog-update.service... Feb 9 19:02:30.878000 audit: BPF prog-id=24 op=LOAD Feb 9 19:02:30.879826 systemd[1]: Starting systemd-resolved.service... Feb 9 19:02:30.882000 audit: BPF prog-id=25 op=LOAD Feb 9 19:02:30.883467 systemd[1]: Starting systemd-timesyncd.service... Feb 9 19:02:30.886439 systemd[1]: Starting systemd-update-utmp.service... Feb 9 19:02:30.918000 audit[1293]: SYSTEM_BOOT pid=1293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.923184 systemd[1]: Finished systemd-update-utmp.service. Feb 9 19:02:30.925000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.970430 systemd[1]: Started systemd-timesyncd.service. Feb 9 19:02:30.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.973205 systemd[1]: Finished clean-ca-certificates.service. Feb 9 19:02:30.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:30.975383 systemd[1]: Reached target time-set.target. Feb 9 19:02:30.977174 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Feb 9 19:02:30.997169 systemd[1]: Finished systemd-journal-catalog-update.service. Feb 9 19:02:31.001000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:31.055016 systemd-resolved[1291]: Positive Trust Anchors: Feb 9 19:02:31.055036 systemd-resolved[1291]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:02:31.055074 systemd-resolved[1291]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:02:31.104708 systemd-networkd[1205]: eth0: Gained IPv6LL Feb 9 19:02:31.106533 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 9 19:02:31.108000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:02:31.125401 systemd-timesyncd[1292]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org). Feb 9 19:02:31.125523 systemd-timesyncd[1292]: Initial clock synchronization to Fri 2024-02-09 19:02:31.129969 UTC. Feb 9 19:02:31.167000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Feb 9 19:02:31.167000 audit[1308]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7fff7118f1a0 a2=420 a3=0 items=0 ppid=1287 pid=1308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 9 19:02:31.167000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Feb 9 19:02:31.168513 augenrules[1308]: No rules Feb 9 19:02:31.169013 systemd[1]: Finished audit-rules.service. Feb 9 19:02:31.195934 systemd-resolved[1291]: Using system hostname 'ci-3510.3.2-a-54659eee1f'. Feb 9 19:02:31.197484 systemd[1]: Started systemd-resolved.service. Feb 9 19:02:31.199981 systemd[1]: Reached target network.target. Feb 9 19:02:31.203937 systemd[1]: Reached target network-online.target. Feb 9 19:02:31.206234 systemd[1]: Reached target nss-lookup.target. Feb 9 19:02:37.288351 ldconfig[1272]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Feb 9 19:02:37.299184 systemd[1]: Finished ldconfig.service. Feb 9 19:02:37.302843 systemd[1]: Starting systemd-update-done.service... Feb 9 19:02:37.309475 systemd[1]: Finished systemd-update-done.service. Feb 9 19:02:37.311933 systemd[1]: Reached target sysinit.target. Feb 9 19:02:37.313882 systemd[1]: Started motdgen.path. Feb 9 19:02:37.315447 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path. Feb 9 19:02:37.318356 systemd[1]: Started logrotate.timer. Feb 9 19:02:37.320121 systemd[1]: Started mdadm.timer. Feb 9 19:02:37.321619 systemd[1]: Started systemd-tmpfiles-clean.timer. Feb 9 19:02:37.323619 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Feb 9 19:02:37.323655 systemd[1]: Reached target paths.target. Feb 9 19:02:37.325367 systemd[1]: Reached target timers.target. Feb 9 19:02:37.328603 systemd[1]: Listening on dbus.socket. Feb 9 19:02:37.331928 systemd[1]: Starting docker.socket... Feb 9 19:02:37.337003 systemd[1]: Listening on sshd.socket. Feb 9 19:02:37.339791 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:02:37.340277 systemd[1]: Listening on docker.socket. Feb 9 19:02:37.342031 systemd[1]: Reached target sockets.target. Feb 9 19:02:37.343821 systemd[1]: Reached target basic.target. Feb 9 19:02:37.345519 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:02:37.345566 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met. Feb 9 19:02:37.346634 systemd[1]: Starting containerd.service... 
Feb 9 19:02:37.350456 systemd[1]: Starting dbus.service... Feb 9 19:02:37.353064 systemd[1]: Starting enable-oem-cloudinit.service... Feb 9 19:02:37.356001 systemd[1]: Starting extend-filesystems.service... Feb 9 19:02:37.362083 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment). Feb 9 19:02:37.364008 systemd[1]: Starting motdgen.service... Feb 9 19:02:37.367104 systemd[1]: Started nvidia.service. Feb 9 19:02:37.370976 systemd[1]: Starting prepare-cni-plugins.service... Feb 9 19:02:37.374247 systemd[1]: Starting prepare-critools.service... Feb 9 19:02:37.377216 systemd[1]: Starting prepare-helm.service... Feb 9 19:02:37.379903 systemd[1]: Starting ssh-key-proc-cmdline.service... Feb 9 19:02:37.383249 systemd[1]: Starting sshd-keygen.service... Feb 9 19:02:37.390943 systemd[1]: Starting systemd-logind.service... Feb 9 19:02:37.394112 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Feb 9 19:02:37.394200 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Feb 9 19:02:37.394760 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Feb 9 19:02:37.395617 systemd[1]: Starting update-engine.service... Feb 9 19:02:37.398830 systemd[1]: Starting update-ssh-keys-after-ignition.service... Feb 9 19:02:37.410121 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Feb 9 19:02:37.410458 systemd[1]: Finished ssh-key-proc-cmdline.service. Feb 9 19:02:37.500873 jq[1337]: true Feb 9 19:02:37.503250 jq[1318]: false Feb 9 19:02:37.505158 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Feb 9 19:02:37.505421 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped. Feb 9 19:02:37.527602 extend-filesystems[1319]: Found sda Feb 9 19:02:37.527602 extend-filesystems[1319]: Found sda1 Feb 9 19:02:37.527602 extend-filesystems[1319]: Found sda2 Feb 9 19:02:37.527602 extend-filesystems[1319]: Found sda3 Feb 9 19:02:37.527602 extend-filesystems[1319]: Found usr Feb 9 19:02:37.527602 extend-filesystems[1319]: Found sda4 Feb 9 19:02:37.527602 extend-filesystems[1319]: Found sda6 Feb 9 19:02:37.527602 extend-filesystems[1319]: Found sda7 Feb 9 19:02:37.527602 extend-filesystems[1319]: Found sda9 Feb 9 19:02:37.527602 extend-filesystems[1319]: Checking size of /dev/sda9 Feb 9 19:02:37.622724 tar[1340]: crictl Feb 9 19:02:37.623108 tar[1341]: linux-amd64/helm Feb 9 19:02:37.543065 systemd-logind[1334]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Feb 9 19:02:37.624157 extend-filesystems[1319]: Old size kept for /dev/sda9 Feb 9 19:02:37.624157 extend-filesystems[1319]: Found sr0 Feb 9 19:02:37.636095 jq[1346]: true Feb 9 19:02:37.636202 tar[1339]: ./ Feb 9 19:02:37.636202 tar[1339]: ./loopback Feb 9 19:02:37.557710 systemd-logind[1334]: New seat seat0. Feb 9 19:02:37.576869 systemd[1]: motdgen.service: Deactivated successfully. Feb 9 19:02:37.577031 systemd[1]: Finished motdgen.service. Feb 9 19:02:37.604086 systemd[1]: extend-filesystems.service: Deactivated successfully. Feb 9 19:02:37.604262 systemd[1]: Finished extend-filesystems.service. 
Feb 9 19:02:37.672567 dbus-daemon[1317]: [system] SELinux support is enabled Feb 9 19:02:37.672764 systemd[1]: Started dbus.service. Feb 9 19:02:37.677142 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 9 19:02:37.677178 systemd[1]: Reached target system-config.target. Feb 9 19:02:37.679508 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 9 19:02:37.679537 systemd[1]: Reached target user-config.target. Feb 9 19:02:37.683391 systemd[1]: Started systemd-logind.service. Feb 9 19:02:37.684434 dbus-daemon[1317]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 9 19:02:37.703000 env[1356]: time="2024-02-09T19:02:37.702950824Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16 Feb 9 19:02:37.711412 tar[1339]: ./bandwidth Feb 9 19:02:37.749306 bash[1383]: Updated "/home/core/.ssh/authorized_keys" Feb 9 19:02:37.750331 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 9 19:02:37.801654 env[1356]: time="2024-02-09T19:02:37.801596090Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Feb 9 19:02:37.810650 env[1356]: time="2024-02-09T19:02:37.810608787Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:37.812369 env[1356]: time="2024-02-09T19:02:37.812325548Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:02:37.817028 env[1356]: time="2024-02-09T19:02:37.817002133Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:37.817391 env[1356]: time="2024-02-09T19:02:37.817365009Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:02:37.817498 env[1356]: time="2024-02-09T19:02:37.817482034Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:37.817587 env[1356]: time="2024-02-09T19:02:37.817571853Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Feb 9 19:02:37.817660 env[1356]: time="2024-02-09T19:02:37.817645068Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:37.817816 env[1356]: time="2024-02-09T19:02:37.817801301Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:37.818139 env[1356]: time="2024-02-09T19:02:37.818119168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Feb 9 19:02:37.818413 env[1356]: time="2024-02-09T19:02:37.818391125Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." 
error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Feb 9 19:02:37.818495 env[1356]: time="2024-02-09T19:02:37.818478944Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Feb 9 19:02:37.818636 env[1356]: time="2024-02-09T19:02:37.818617173Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Feb 9 19:02:37.818714 env[1356]: time="2024-02-09T19:02:37.818702391Z" level=info msg="metadata content store policy set" policy=shared Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830011571Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830043678Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830060482Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830109592Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830131297Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830186608Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830204712Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830220815Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830238019Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830253222Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830269226Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830285529Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830394952Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Feb 9 19:02:37.832192 env[1356]: time="2024-02-09T19:02:37.830472568Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.830882255Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.830921163Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.830939867Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.831005781Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.831024085Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.831041288Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.831109102Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.831127706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.831144910Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.831163514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.831180818Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.831197521Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.831341851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.831361856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Feb 9 19:02:37.832739 env[1356]: time="2024-02-09T19:02:37.831378959Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Feb 9 19:02:37.833258 env[1356]: time="2024-02-09T19:02:37.831401964Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 9 19:02:37.833258 env[1356]: time="2024-02-09T19:02:37.831425769Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 9 19:02:37.833258 env[1356]: time="2024-02-09T19:02:37.831441072Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 9 19:02:37.833258 env[1356]: time="2024-02-09T19:02:37.831463677Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 9 19:02:37.833258 env[1356]: time="2024-02-09T19:02:37.831503485Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 9 19:02:37.833490 env[1356]: time="2024-02-09T19:02:37.831762440Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 9 19:02:37.833490 env[1356]: time="2024-02-09T19:02:37.831835455Z" level=info msg="Connect containerd service" Feb 9 19:02:37.833490 env[1356]: time="2024-02-09T19:02:37.831871663Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 9 19:02:37.850469 systemd[1]: Started containerd.service. Feb 9 19:02:37.873193 env[1356]: time="2024-02-09T19:02:37.848536571Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:02:37.873193 env[1356]: time="2024-02-09T19:02:37.850252832Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 9 19:02:37.873193 env[1356]: time="2024-02-09T19:02:37.850314245Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 9 19:02:37.873193 env[1356]: time="2024-02-09T19:02:37.854095341Z" level=info msg="containerd successfully booted in 0.151942s" Feb 9 19:02:37.873337 tar[1339]: ./ptp Feb 9 19:02:37.866784 systemd[1]: nvidia.service: Deactivated successfully. 
Feb 9 19:02:37.876522 env[1356]: time="2024-02-09T19:02:37.876463750Z" level=info msg="Start subscribing containerd event" Feb 9 19:02:37.876646 env[1356]: time="2024-02-09T19:02:37.876556169Z" level=info msg="Start recovering state" Feb 9 19:02:37.876646 env[1356]: time="2024-02-09T19:02:37.876640687Z" level=info msg="Start event monitor" Feb 9 19:02:37.876725 env[1356]: time="2024-02-09T19:02:37.876663192Z" level=info msg="Start snapshots syncer" Feb 9 19:02:37.876725 env[1356]: time="2024-02-09T19:02:37.876677595Z" level=info msg="Start cni network conf syncer for default" Feb 9 19:02:37.876725 env[1356]: time="2024-02-09T19:02:37.876687897Z" level=info msg="Start streaming server" Feb 9 19:02:38.030509 tar[1339]: ./vlan Feb 9 19:02:38.165783 tar[1339]: ./host-device Feb 9 19:02:38.282586 update_engine[1336]: I0209 19:02:38.281989 1336 main.cc:92] Flatcar Update Engine starting Feb 9 19:02:38.288014 tar[1339]: ./tuning Feb 9 19:02:38.337347 systemd[1]: Started update-engine.service. Feb 9 19:02:38.341967 systemd[1]: Started locksmithd.service. Feb 9 19:02:38.345661 update_engine[1336]: I0209 19:02:38.344928 1336 update_check_scheduler.cc:74] Next update check in 10m0s Feb 9 19:02:38.385123 tar[1341]: linux-amd64/LICENSE Feb 9 19:02:38.385359 tar[1341]: linux-amd64/README.md Feb 9 19:02:38.391735 systemd[1]: Finished prepare-helm.service. Feb 9 19:02:38.402936 tar[1339]: ./vrf Feb 9 19:02:38.485350 tar[1339]: ./sbr Feb 9 19:02:38.561288 tar[1339]: ./tap Feb 9 19:02:38.651125 tar[1339]: ./dhcp Feb 9 19:02:38.692331 systemd[1]: Finished prepare-critools.service. Feb 9 19:02:38.781754 tar[1339]: ./static Feb 9 19:02:38.810629 tar[1339]: ./firewall Feb 9 19:02:38.854889 tar[1339]: ./macvlan Feb 9 19:02:38.895539 tar[1339]: ./dummy Feb 9 19:02:38.936208 tar[1339]: ./bridge Feb 9 19:02:38.979786 tar[1339]: ./ipvlan Feb 9 19:02:39.020445 tar[1339]: ./portmap Feb 9 19:02:39.058978 tar[1339]: ./host-local Feb 9 19:02:39.138161 systemd[1]: Finished prepare-cni-plugins.service. Feb 9 19:02:39.240816 sshd_keygen[1335]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 9 19:02:39.260914 systemd[1]: Finished sshd-keygen.service. Feb 9 19:02:39.264966 systemd[1]: Starting issuegen.service... Feb 9 19:02:39.268140 systemd[1]: Started waagent.service. Feb 9 19:02:39.272300 systemd[1]: issuegen.service: Deactivated successfully. Feb 9 19:02:39.272499 systemd[1]: Finished issuegen.service. Feb 9 19:02:39.275743 systemd[1]: Starting systemd-user-sessions.service... Feb 9 19:02:39.281622 systemd[1]: Finished systemd-user-sessions.service. Feb 9 19:02:39.285517 systemd[1]: Started getty@tty1.service. Feb 9 19:02:39.289455 systemd[1]: Started serial-getty@ttyS0.service. Feb 9 19:02:39.291954 systemd[1]: Reached target getty.target. Feb 9 19:02:39.295963 systemd[1]: Reached target multi-user.target. Feb 9 19:02:39.299366 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 9 19:02:39.306770 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 9 19:02:39.306904 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 9 19:02:39.309761 systemd[1]: Startup finished in 981ms (firmware) + 27.857s (loader) + 890ms (kernel) + 1min 53.888s (initrd) + 24.328s (userspace) = 2min 47.945s. Feb 9 19:02:39.627488 login[1441]: pam_lastlog(login:session): file /var/log/lastlog is locked/write Feb 9 19:02:39.628222 login[1440]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:02:39.652738 systemd[1]: Created slice user-500.slice. 
Feb 9 19:02:39.654147 systemd[1]: Starting user-runtime-dir@500.service... Feb 9 19:02:39.660332 systemd-logind[1334]: New session 1 of user core. Feb 9 19:02:39.665995 systemd[1]: Finished user-runtime-dir@500.service. Feb 9 19:02:39.667767 systemd[1]: Starting user@500.service... Feb 9 19:02:39.691520 (systemd)[1447]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:02:39.809270 systemd[1447]: Queued start job for default target default.target. Feb 9 19:02:39.810036 systemd[1447]: Reached target paths.target. Feb 9 19:02:39.810209 systemd[1447]: Reached target sockets.target. Feb 9 19:02:39.810330 systemd[1447]: Reached target timers.target. Feb 9 19:02:39.810418 systemd[1447]: Reached target basic.target. Feb 9 19:02:39.810609 systemd[1]: Started user@500.service. Feb 9 19:02:39.811529 systemd[1]: Started session-1.scope. Feb 9 19:02:39.811807 systemd[1447]: Reached target default.target. Feb 9 19:02:39.811938 systemd[1447]: Startup finished in 113ms. Feb 9 19:02:40.278670 locksmithd[1422]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 9 19:02:40.629155 login[1441]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 9 19:02:40.634030 systemd[1]: Started session-2.scope. Feb 9 19:02:40.634537 systemd-logind[1334]: New session 2 of user core. Feb 9 19:02:45.778863 waagent[1435]: 2024-02-09T19:02:45.778720Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 9 19:02:45.783900 waagent[1435]: 2024-02-09T19:02:45.783803Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 9 19:02:45.788748 waagent[1435]: 2024-02-09T19:02:45.785241Z INFO Daemon Daemon Python: 3.9.16 Feb 9 19:02:45.788748 waagent[1435]: 2024-02-09T19:02:45.786907Z INFO Daemon Daemon Run daemon Feb 9 19:02:45.788748 waagent[1435]: 2024-02-09T19:02:45.788375Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 9 19:02:45.799092 waagent[1435]: 2024-02-09T19:02:45.798971Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. 
Feb 9 19:02:45.805703 waagent[1435]: 2024-02-09T19:02:45.805595Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.806800Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.807510Z INFO Daemon Daemon Using waagent for provisioning Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.808844Z INFO Daemon Daemon Activate resource disk Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.809479Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.817189Z INFO Daemon Daemon Found device: None Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.818035Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.818803Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.820483Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.821439Z INFO Daemon Daemon Running default provisioning handler Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.830708Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.833402Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.834291Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:02:45.843934 waagent[1435]: 2024-02-09T19:02:45.835066Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 19:02:45.944289 waagent[1435]: 2024-02-09T19:02:45.940417Z INFO Daemon Daemon Successfully mounted dvd Feb 9 19:02:46.073675 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 19:02:46.096393 waagent[1435]: 2024-02-09T19:02:46.096263Z INFO Daemon Daemon Detect protocol endpoint Feb 9 19:02:46.112168 waagent[1435]: 2024-02-09T19:02:46.098393Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:02:46.112168 waagent[1435]: 2024-02-09T19:02:46.099283Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 19:02:46.112168 waagent[1435]: 2024-02-09T19:02:46.100075Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 19:02:46.112168 waagent[1435]: 2024-02-09T19:02:46.101091Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 19:02:46.112168 waagent[1435]: 2024-02-09T19:02:46.101702Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 19:02:46.268075 waagent[1435]: 2024-02-09T19:02:46.267996Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 19:02:46.275070 waagent[1435]: 2024-02-09T19:02:46.269775Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 19:02:46.275070 waagent[1435]: 2024-02-09T19:02:46.270377Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 19:02:46.557351 waagent[1435]: 2024-02-09T19:02:46.557192Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 19:02:46.566955 waagent[1435]: 2024-02-09T19:02:46.566870Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 19:02:46.571767 waagent[1435]: 2024-02-09T19:02:46.568202Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 19:02:46.664151 waagent[1435]: 2024-02-09T19:02:46.664018Z INFO Daemon Daemon Found private key matching thumbprint 72599646ED232C05D754C75EB4D54D781DD81FA4 Feb 9 19:02:46.673428 waagent[1435]: 2024-02-09T19:02:46.665308Z INFO Daemon Daemon Certificate with thumbprint 94361C3D0BDA042A39304137532E2F9AE8C36DA0 has no matching private key. Feb 9 19:02:46.673428 waagent[1435]: 2024-02-09T19:02:46.666089Z INFO Daemon Daemon Fetch goal state completed Feb 9 19:02:46.703275 waagent[1435]: 2024-02-09T19:02:46.703184Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 660a60ad-7405-46ed-a1c1-041da19ae389 New eTag: 17485689131444294394] Feb 9 19:02:46.710282 waagent[1435]: 2024-02-09T19:02:46.705107Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:02:46.715136 waagent[1435]: 2024-02-09T19:02:46.715066Z INFO Daemon Daemon Starting provisioning Feb 9 19:02:46.721141 waagent[1435]: 2024-02-09T19:02:46.716202Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 19:02:46.721141 waagent[1435]: 2024-02-09T19:02:46.717043Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-54659eee1f] Feb 9 19:02:46.738518 waagent[1435]: 2024-02-09T19:02:46.738373Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-54659eee1f] Feb 9 19:02:46.745853 waagent[1435]: 2024-02-09T19:02:46.740021Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 19:02:46.745853 waagent[1435]: 2024-02-09T19:02:46.741296Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 19:02:46.755190 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 19:02:46.755434 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 19:02:46.755507 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 19:02:46.755894 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:02:46.760634 systemd-networkd[1205]: eth0: DHCPv6 lease lost Feb 9 19:02:46.763023 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:02:46.763172 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:02:46.765486 systemd[1]: Starting systemd-networkd.service... 
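[Editor's note] The protocol detection above (route test to 168.63.129.16, then the version negotiation settling on 2015-04-05) talks to the Azure WireServer over plain HTTP. A hedged sketch of that first "versions" request follows; the endpoint path and headers are assumptions based on how WALinuxAgent normally queries the WireServer, they are not shown verbatim in this log.

#!/usr/bin/env python3
"""Probe the Azure WireServer the way the agent's protocol detection does.
Illustrative only; the URI and headers are assumptions, not taken from this log."""
import urllib.request

WIRESERVER = "168.63.129.16"                            # fixed address from the log
VERSIONS_URI = f"http://{WIRESERVER}/?comp=versions"    # assumed versions endpoint

def fetch_supported_versions(timeout: float = 5.0) -> str:
    req = urllib.request.Request(
        VERSIONS_URI,
        headers={"x-ms-agent-name": "WALinuxAgent",
                 "x-ms-version": "2012-11-30"},  # wire protocol version the agent logs
    )
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.read().decode("utf-8")

if __name__ == "__main__":
    # Prints an XML document listing the protocol versions the server supports
    # (the log shows the agent settling on 2015-04-05).
    print(fetch_supported_versions())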
Feb 9 19:02:46.796203 systemd-networkd[1492]: enP463s1: Link UP Feb 9 19:02:46.796214 systemd-networkd[1492]: enP463s1: Gained carrier Feb 9 19:02:46.797682 systemd-networkd[1492]: eth0: Link UP Feb 9 19:02:46.797691 systemd-networkd[1492]: eth0: Gained carrier Feb 9 19:02:46.798121 systemd-networkd[1492]: lo: Link UP Feb 9 19:02:46.798130 systemd-networkd[1492]: lo: Gained carrier Feb 9 19:02:46.798444 systemd-networkd[1492]: eth0: Gained IPv6LL Feb 9 19:02:46.798741 systemd-networkd[1492]: Enumeration completed Feb 9 19:02:46.798842 systemd[1]: Started systemd-networkd.service. Feb 9 19:02:46.801218 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:02:46.809380 waagent[1435]: 2024-02-09T19:02:46.805685Z INFO Daemon Daemon Create user account if not exists Feb 9 19:02:46.809380 waagent[1435]: 2024-02-09T19:02:46.807434Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 19:02:46.809380 waagent[1435]: 2024-02-09T19:02:46.808296Z INFO Daemon Daemon Configure sudoer Feb 9 19:02:46.810102 waagent[1435]: 2024-02-09T19:02:46.810043Z INFO Daemon Daemon Configure sshd Feb 9 19:02:46.811000 waagent[1435]: 2024-02-09T19:02:46.810948Z INFO Daemon Daemon Deploy ssh public key. Feb 9 19:02:46.813244 systemd-networkd[1492]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:02:46.843635 systemd-networkd[1492]: eth0: DHCPv4 address 10.200.8.4/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:02:46.848245 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:02:48.078090 waagent[1435]: 2024-02-09T19:02:48.078004Z INFO Daemon Daemon Provisioning complete Feb 9 19:02:48.095957 waagent[1435]: 2024-02-09T19:02:48.095866Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 19:02:48.102369 waagent[1435]: 2024-02-09T19:02:48.097195Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 19:02:48.102369 waagent[1435]: 2024-02-09T19:02:48.098953Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 19:02:48.363975 waagent[1501]: 2024-02-09T19:02:48.363799Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 19:02:48.364729 waagent[1501]: 2024-02-09T19:02:48.364661Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:48.364882 waagent[1501]: 2024-02-09T19:02:48.364826Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:48.375762 waagent[1501]: 2024-02-09T19:02:48.375691Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 19:02:48.375922 waagent[1501]: 2024-02-09T19:02:48.375869Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 19:02:48.435038 waagent[1501]: 2024-02-09T19:02:48.434913Z INFO ExtHandler ExtHandler Found private key matching thumbprint 72599646ED232C05D754C75EB4D54D781DD81FA4 Feb 9 19:02:48.435265 waagent[1501]: 2024-02-09T19:02:48.435202Z INFO ExtHandler ExtHandler Certificate with thumbprint 94361C3D0BDA042A39304137532E2F9AE8C36DA0 has no matching private key. 
Feb 9 19:02:48.435503 waagent[1501]: 2024-02-09T19:02:48.435453Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 19:02:48.449575 waagent[1501]: 2024-02-09T19:02:48.449504Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 9430eb69-882a-4c6d-8095-51cd9cf8392e New eTag: 17485689131444294394] Feb 9 19:02:48.450154 waagent[1501]: 2024-02-09T19:02:48.450096Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:02:48.990811 waagent[1501]: 2024-02-09T19:02:48.990671Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:02:49.046414 waagent[1501]: 2024-02-09T19:02:49.046296Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1501 Feb 9 19:02:49.050151 waagent[1501]: 2024-02-09T19:02:49.050075Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:02:49.051392 waagent[1501]: 2024-02-09T19:02:49.051328Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:02:49.168820 waagent[1501]: 2024-02-09T19:02:49.168744Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:02:49.169306 waagent[1501]: 2024-02-09T19:02:49.169229Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:02:49.177287 waagent[1501]: 2024-02-09T19:02:49.177234Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 19:02:49.177753 waagent[1501]: 2024-02-09T19:02:49.177698Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:02:49.178785 waagent[1501]: 2024-02-09T19:02:49.178722Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 19:02:49.180006 waagent[1501]: 2024-02-09T19:02:49.179947Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:02:49.180343 waagent[1501]: 2024-02-09T19:02:49.180289Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:49.181192 waagent[1501]: 2024-02-09T19:02:49.181138Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:49.181338 waagent[1501]: 2024-02-09T19:02:49.181283Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:02:49.181779 waagent[1501]: 2024-02-09T19:02:49.181726Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:49.182352 waagent[1501]: 2024-02-09T19:02:49.182299Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 19:02:49.182643 waagent[1501]: 2024-02-09T19:02:49.182592Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:49.183288 waagent[1501]: 2024-02-09T19:02:49.183227Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:02:49.183432 waagent[1501]: 2024-02-09T19:02:49.183374Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:02:49.183432 waagent[1501]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:02:49.183432 waagent[1501]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:02:49.183432 waagent[1501]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:02:49.183432 waagent[1501]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:49.183432 waagent[1501]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:49.183432 waagent[1501]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:49.183897 waagent[1501]: 2024-02-09T19:02:49.183843Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:02:49.186228 waagent[1501]: 2024-02-09T19:02:49.186137Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:02:49.186815 waagent[1501]: 2024-02-09T19:02:49.186761Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:02:49.187395 waagent[1501]: 2024-02-09T19:02:49.187334Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:02:49.187609 waagent[1501]: 2024-02-09T19:02:49.187543Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:02:49.187702 waagent[1501]: 2024-02-09T19:02:49.187643Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 19:02:49.189607 waagent[1501]: 2024-02-09T19:02:49.189528Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:02:49.197145 waagent[1501]: 2024-02-09T19:02:49.197091Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 19:02:49.197716 waagent[1501]: 2024-02-09T19:02:49.197668Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:02:49.200065 waagent[1501]: 2024-02-09T19:02:49.200013Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 19:02:49.216203 waagent[1501]: 2024-02-09T19:02:49.216137Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1492' Feb 9 19:02:49.252447 waagent[1501]: 2024-02-09T19:02:49.252302Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
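[Editor's note] The routing table MonitorHandler dumps above comes straight from /proc/net/route, where destination and gateway are 32-bit little-endian hex values, so they look opaque in the log. A short sketch that decodes them; against this dump, gateway 0108C80A is 10.200.8.1 (the DHCP gateway seen earlier), 10813FA8 is the wireserver 168.63.129.16, and FEA9FEA9 is 169.254.169.254.

#!/usr/bin/env python3
"""Decode the hex destination/gateway columns of /proc/net/route
(the table MonitorHandler dumps above)."""
import socket
import struct

def hex_to_ip(field: str) -> str:
    # /proc/net/route prints each IPv4 address as a little-endian u32 in hex;
    # repack the integer little-endian and format the bytes as a dotted quad.
    return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

def routes(path: str = "/proc/net/route"):
    with open(path) as f:
        next(f)  # skip the header line (Iface Destination Gateway ...)
        for line in f:
            iface, dest, gateway, *_ = line.split()
            yield iface, hex_to_ip(dest), hex_to_ip(gateway)

if __name__ == "__main__":
    for iface, dest, gw in routes():
        print(f"{iface}: {dest} via {gw}")
    # Against the dump above this yields, among others:
    #   eth0: 0.0.0.0 via 10.200.8.1          (default route)
    #   eth0: 168.63.129.16 via 10.200.8.1    (wireserver)
    #   eth0: 169.254.169.254 via 10.200.8.1  (instance metadata service)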
Feb 9 19:02:49.325171 waagent[1501]: 2024-02-09T19:02:49.325037Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:02:49.325171 waagent[1501]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:02:49.325171 waagent[1501]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:02:49.325171 waagent[1501]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a0:c2:25 brd ff:ff:ff:ff:ff:ff Feb 9 19:02:49.325171 waagent[1501]: 3: enP463s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a0:c2:25 brd ff:ff:ff:ff:ff:ff\ altname enP463p0s2 Feb 9 19:02:49.325171 waagent[1501]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:02:49.325171 waagent[1501]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:02:49.325171 waagent[1501]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:02:49.325171 waagent[1501]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:02:49.325171 waagent[1501]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:02:49.325171 waagent[1501]: 2: eth0 inet6 fe80::222:48ff:fea0:c225/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:02:49.507377 waagent[1501]: 2024-02-09T19:02:49.507198Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules Feb 9 19:02:49.510404 waagent[1501]: 2024-02-09T19:02:49.510303Z INFO EnvHandler ExtHandler Firewall rules: Feb 9 19:02:49.510404 waagent[1501]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:49.510404 waagent[1501]: pkts bytes target prot opt in out source destination Feb 9 19:02:49.510404 waagent[1501]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:49.510404 waagent[1501]: pkts bytes target prot opt in out source destination Feb 9 19:02:49.510404 waagent[1501]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:49.510404 waagent[1501]: pkts bytes target prot opt in out source destination Feb 9 19:02:49.510404 waagent[1501]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:02:49.510404 waagent[1501]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:02:49.511777 waagent[1501]: 2024-02-09T19:02:49.511722Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 19:02:49.611530 waagent[1501]: 2024-02-09T19:02:49.611454Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 19:02:50.104012 waagent[1435]: 2024-02-09T19:02:50.103868Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 19:02:50.109114 waagent[1435]: 2024-02-09T19:02:50.109049Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 19:02:51.117762 waagent[1541]: 2024-02-09T19:02:51.117654Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 19:02:51.118450 waagent[1541]: 2024-02-09T19:02:51.118383Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 19:02:51.118618 waagent[1541]: 2024-02-09T19:02:51.118542Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 19:02:51.128131 waagent[1541]: 2024-02-09T19:02:51.128022Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; 
OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:02:51.128520 waagent[1541]: 2024-02-09T19:02:51.128459Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:51.128704 waagent[1541]: 2024-02-09T19:02:51.128651Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:51.140206 waagent[1541]: 2024-02-09T19:02:51.140126Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 19:02:51.152608 waagent[1541]: 2024-02-09T19:02:51.152524Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 19:02:51.153636 waagent[1541]: 2024-02-09T19:02:51.153574Z INFO ExtHandler Feb 9 19:02:51.153803 waagent[1541]: 2024-02-09T19:02:51.153751Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 1fc326e6-679f-4305-88ce-88a21d32304a eTag: 17485689131444294394 source: Fabric] Feb 9 19:02:51.154513 waagent[1541]: 2024-02-09T19:02:51.154453Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 9 19:02:51.155629 waagent[1541]: 2024-02-09T19:02:51.155569Z INFO ExtHandler Feb 9 19:02:51.155770 waagent[1541]: 2024-02-09T19:02:51.155720Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 19:02:51.162540 waagent[1541]: 2024-02-09T19:02:51.162484Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 19:02:51.163000 waagent[1541]: 2024-02-09T19:02:51.162950Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:02:51.184088 waagent[1541]: 2024-02-09T19:02:51.184001Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. Feb 9 19:02:51.247767 waagent[1541]: 2024-02-09T19:02:51.247643Z INFO ExtHandler Downloaded certificate {'thumbprint': '94361C3D0BDA042A39304137532E2F9AE8C36DA0', 'hasPrivateKey': False} Feb 9 19:02:51.248750 waagent[1541]: 2024-02-09T19:02:51.248680Z INFO ExtHandler Downloaded certificate {'thumbprint': '72599646ED232C05D754C75EB4D54D781DD81FA4', 'hasPrivateKey': True} Feb 9 19:02:51.249714 waagent[1541]: 2024-02-09T19:02:51.249654Z INFO ExtHandler Fetch goal state completed Feb 9 19:02:51.271191 waagent[1541]: 2024-02-09T19:02:51.271119Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1541 Feb 9 19:02:51.274385 waagent[1541]: 2024-02-09T19:02:51.274318Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:02:51.275810 waagent[1541]: 2024-02-09T19:02:51.275753Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:02:51.280632 waagent[1541]: 2024-02-09T19:02:51.280576Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:02:51.280978 waagent[1541]: 2024-02-09T19:02:51.280922Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:02:51.288732 waagent[1541]: 2024-02-09T19:02:51.288679Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 9 19:02:51.289177 waagent[1541]: 2024-02-09T19:02:51.289122Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:02:51.316422 waagent[1541]: 2024-02-09T19:02:51.316301Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now. Feb 9 19:02:51.319465 waagent[1541]: 2024-02-09T19:02:51.319355Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver Feb 9 19:02:51.324237 waagent[1541]: 2024-02-09T19:02:51.324170Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 19:02:51.325672 waagent[1541]: 2024-02-09T19:02:51.325610Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:02:51.326090 waagent[1541]: 2024-02-09T19:02:51.326035Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:51.326248 waagent[1541]: 2024-02-09T19:02:51.326199Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:51.326809 waagent[1541]: 2024-02-09T19:02:51.326750Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 19:02:51.327346 waagent[1541]: 2024-02-09T19:02:51.327290Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:02:51.327736 waagent[1541]: 2024-02-09T19:02:51.327679Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:02:51.328010 waagent[1541]: 2024-02-09T19:02:51.327957Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:02:51.328346 waagent[1541]: 2024-02-09T19:02:51.328294Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:02:51.328346 waagent[1541]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:02:51.328346 waagent[1541]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:02:51.328346 waagent[1541]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:02:51.328346 waagent[1541]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:51.328346 waagent[1541]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:51.328346 waagent[1541]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:02:51.328695 waagent[1541]: 2024-02-09T19:02:51.328619Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:02:51.328898 waagent[1541]: 2024-02-09T19:02:51.328845Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:02:51.332514 waagent[1541]: 2024-02-09T19:02:51.332264Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:02:51.333416 waagent[1541]: 2024-02-09T19:02:51.333337Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:02:51.333663 waagent[1541]: 2024-02-09T19:02:51.333592Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
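[Editor's note] The wireserver firewall rules the agent adds here, and dumps a moment later, amount to three OUTPUT-chain rules: allow DNS-over-TCP to 168.63.129.16, allow root-owned (UID 0) traffic to it, and drop any other new connection to it. The sketch below is a hedged approximation of those rules expressed as iptables invocations; it is not the agent's own code and must be run as root.

#!/usr/bin/env python3
"""Approximate the wireserver firewall rules shown in the iptables dump in this
log. Illustrative sketch only."""
import subprocess

WIRESERVER = "168.63.129.16"

RULES = [
    # Non-root processes may still make DNS TCP requests to the wireserver (dpt:53).
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    # Root keeps full access; this is what "owner UID match 0" means in the dump.
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    # Any other new connection to the wireserver is dropped (ctstate INVALID,NEW).
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
     "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

def apply_rules() -> None:
    for rule in RULES:
        subprocess.run(["iptables", "-w"] + rule, check=True)

if __name__ == "__main__":
    apply_rules()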
Feb 9 19:02:51.333816 waagent[1541]: 2024-02-09T19:02:51.333741Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:02:51.335663 waagent[1541]: 2024-02-09T19:02:51.335609Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:02:51.335855 waagent[1541]: 2024-02-09T19:02:51.335770Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:02:51.363395 waagent[1541]: 2024-02-09T19:02:51.363273Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 19:02:51.363987 waagent[1541]: 2024-02-09T19:02:51.363919Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 19:02:51.366173 waagent[1541]: 2024-02-09T19:02:51.366113Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:02:51.366173 waagent[1541]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:02:51.366173 waagent[1541]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:02:51.366173 waagent[1541]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a0:c2:25 brd ff:ff:ff:ff:ff:ff Feb 9 19:02:51.366173 waagent[1541]: 3: enP463s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a0:c2:25 brd ff:ff:ff:ff:ff:ff\ altname enP463p0s2 Feb 9 19:02:51.366173 waagent[1541]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:02:51.366173 waagent[1541]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:02:51.366173 waagent[1541]: 2: eth0 inet 10.200.8.4/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:02:51.366173 waagent[1541]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:02:51.366173 waagent[1541]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:02:51.366173 waagent[1541]: 2: eth0 inet6 fe80::222:48ff:fea0:c225/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:02:51.427772 waagent[1541]: 2024-02-09T19:02:51.427619Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 19:02:51.427772 waagent[1541]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:51.427772 waagent[1541]: pkts bytes target prot opt in out source destination Feb 9 19:02:51.427772 waagent[1541]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:51.427772 waagent[1541]: pkts bytes target prot opt in out source destination Feb 9 19:02:51.427772 waagent[1541]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:02:51.427772 waagent[1541]: pkts bytes target prot opt in out source destination Feb 9 19:02:51.427772 waagent[1541]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:02:51.427772 waagent[1541]: 130 14641 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:02:51.427772 waagent[1541]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:02:51.438929 waagent[1541]: 2024-02-09T19:02:51.438873Z INFO ExtHandler ExtHandler Feb 9 19:02:51.439063 waagent[1541]: 2024-02-09T19:02:51.439022Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 1a206396-091a-4152-9cd3-14603559399d correlation b3a46fec-e86e-4be8-849d-6a652e971c2a created: 2024-02-09T18:59:39.405823Z] Feb 9 19:02:51.439880 waagent[1541]: 2024-02-09T19:02:51.439819Z INFO ExtHandler ExtHandler No extension handlers found, not processing 
anything. Feb 9 19:02:51.441590 waagent[1541]: 2024-02-09T19:02:51.441520Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Feb 9 19:02:51.461369 waagent[1541]: 2024-02-09T19:02:51.461302Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 19:02:51.470542 waagent[1541]: 2024-02-09T19:02:51.470469Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 940BA438-F29D-4792-B941-9565F3D485C7;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 19:03:17.104730 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 9 19:03:17.604714 systemd[1]: Created slice system-sshd.slice. Feb 9 19:03:17.606326 systemd[1]: Started sshd@0-10.200.8.4:22-10.200.12.6:35364.service. Feb 9 19:03:18.478683 sshd[1580]: Accepted publickey for core from 10.200.12.6 port 35364 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:18.480126 sshd[1580]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:18.483672 systemd-logind[1334]: New session 3 of user core. Feb 9 19:03:18.485396 systemd[1]: Started session-3.scope. Feb 9 19:03:19.018445 systemd[1]: Started sshd@1-10.200.8.4:22-10.200.12.6:35372.service. Feb 9 19:03:19.631698 sshd[1585]: Accepted publickey for core from 10.200.12.6 port 35372 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:19.633359 sshd[1585]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:19.638959 systemd[1]: Started session-4.scope. Feb 9 19:03:19.639503 systemd-logind[1334]: New session 4 of user core. Feb 9 19:03:20.069741 sshd[1585]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:20.073091 systemd[1]: sshd@1-10.200.8.4:22-10.200.12.6:35372.service: Deactivated successfully. Feb 9 19:03:20.074135 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:03:20.074924 systemd-logind[1334]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:03:20.075863 systemd-logind[1334]: Removed session 4. Feb 9 19:03:20.172816 systemd[1]: Started sshd@2-10.200.8.4:22-10.200.12.6:35382.service. Feb 9 19:03:20.779627 sshd[1591]: Accepted publickey for core from 10.200.12.6 port 35382 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:20.781737 sshd[1591]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:20.786930 systemd[1]: Started session-5.scope. Feb 9 19:03:20.787372 systemd-logind[1334]: New session 5 of user core. Feb 9 19:03:21.211147 sshd[1591]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:21.214472 systemd[1]: sshd@2-10.200.8.4:22-10.200.12.6:35382.service: Deactivated successfully. Feb 9 19:03:21.216501 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:03:21.217337 systemd-logind[1334]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:03:21.218282 systemd-logind[1334]: Removed session 5. Feb 9 19:03:21.317194 systemd[1]: Started sshd@3-10.200.8.4:22-10.200.12.6:35392.service. Feb 9 19:03:21.934318 sshd[1597]: Accepted publickey for core from 10.200.12.6 port 35392 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:21.936083 sshd[1597]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:21.941575 systemd[1]: Started session-6.scope. Feb 9 19:03:21.942022 systemd-logind[1334]: New session 6 of user core. 
Feb 9 19:03:22.379113 sshd[1597]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:22.382093 systemd[1]: sshd@3-10.200.8.4:22-10.200.12.6:35392.service: Deactivated successfully. Feb 9 19:03:22.382937 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:03:22.383567 systemd-logind[1334]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:03:22.384293 systemd-logind[1334]: Removed session 6. Feb 9 19:03:22.482605 systemd[1]: Started sshd@4-10.200.8.4:22-10.200.12.6:35398.service. Feb 9 19:03:23.091410 sshd[1603]: Accepted publickey for core from 10.200.12.6 port 35398 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:03:23.092789 sshd[1603]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:03:23.097494 systemd[1]: Started session-7.scope. Feb 9 19:03:23.097944 systemd-logind[1334]: New session 7 of user core. Feb 9 19:03:23.203935 update_engine[1336]: I0209 19:03:23.203858 1336 update_attempter.cc:509] Updating boot flags... Feb 9 19:03:23.687559 sudo[1683]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:03:23.687904 sudo[1683]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:03:24.633427 systemd[1]: Starting docker.service... Feb 9 19:03:24.699217 env[1714]: time="2024-02-09T19:03:24.699170778Z" level=info msg="Starting up" Feb 9 19:03:24.700970 env[1714]: time="2024-02-09T19:03:24.700934596Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:03:24.700970 env[1714]: time="2024-02-09T19:03:24.700956696Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:03:24.701121 env[1714]: time="2024-02-09T19:03:24.700978296Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:03:24.701121 env[1714]: time="2024-02-09T19:03:24.700991597Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:03:24.706127 env[1714]: time="2024-02-09T19:03:24.706103248Z" level=info msg="parsed scheme: \"unix\"" module=grpc Feb 9 19:03:24.706252 env[1714]: time="2024-02-09T19:03:24.706237850Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc Feb 9 19:03:24.706334 env[1714]: time="2024-02-09T19:03:24.706317651Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///var/run/docker/libcontainerd/docker-containerd.sock 0 }] }" module=grpc Feb 9 19:03:24.706390 env[1714]: time="2024-02-09T19:03:24.706379751Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc Feb 9 19:03:24.712856 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport964788583-merged.mount: Deactivated successfully. Feb 9 19:03:24.813208 env[1714]: time="2024-02-09T19:03:24.813166934Z" level=info msg="Loading containers: start." Feb 9 19:03:24.915578 kernel: Initializing XFRM netlink socket Feb 9 19:03:24.957799 env[1714]: time="2024-02-09T19:03:24.957754900Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address" Feb 9 19:03:25.090239 systemd-networkd[1492]: docker0: Link UP Feb 9 19:03:25.105921 env[1714]: time="2024-02-09T19:03:25.105884336Z" level=info msg="Loading containers: done." Feb 9 19:03:25.117120 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1175526303-merged.mount: Deactivated successfully. 
Feb 9 19:03:25.133687 env[1714]: time="2024-02-09T19:03:25.133643600Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Feb 9 19:03:25.133898 env[1714]: time="2024-02-09T19:03:25.133876402Z" level=info msg="Docker daemon" commit=112bdf3343 graphdriver(s)=overlay2 version=20.10.23 Feb 9 19:03:25.134029 env[1714]: time="2024-02-09T19:03:25.134005603Z" level=info msg="Daemon has completed initialization" Feb 9 19:03:25.162223 systemd[1]: Started docker.service. Feb 9 19:03:25.167066 env[1714]: time="2024-02-09T19:03:25.167010417Z" level=info msg="API listen on /run/docker.sock" Feb 9 19:03:25.188042 systemd[1]: Reloading. Feb 9 19:03:25.283066 /usr/lib/systemd/system-generators/torcx-generator[1846]: time="2024-02-09T19:03:25Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:03:25.283105 /usr/lib/systemd/system-generators/torcx-generator[1846]: time="2024-02-09T19:03:25Z" level=info msg="torcx already run" Feb 9 19:03:25.364389 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:03:25.364413 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:03:25.382400 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:03:25.467048 systemd[1]: Started kubelet.service. Feb 9 19:03:25.534871 kubelet[1905]: E0209 19:03:25.534772 1905 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 19:03:25.536655 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:03:25.536765 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:03:30.186830 env[1356]: time="2024-02-09T19:03:30.186774175Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\"" Feb 9 19:03:30.748487 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2573424952.mount: Deactivated successfully. 
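[Editor's note] After the "API listen on /run/docker.sock" message above, the daemon answers the Docker Engine HTTP API over that unix socket. A minimal stdlib sketch that issues a raw GET /version request; in practice you would use the docker SDK or curl --unix-socket, this only shows what "API listen" means. The version fields returned are whatever the installed daemon (20.10.23 per the log) reports.

#!/usr/bin/env python3
"""Talk to the Docker Engine API over the unix socket named in the log."""
import json
import socket

DOCKER_SOCK = "/run/docker.sock"

def docker_version() -> dict:
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
        s.connect(DOCKER_SOCK)
        # HTTP/1.0 so the daemon closes the connection after one response.
        s.sendall(b"GET /version HTTP/1.0\r\nHost: docker\r\n\r\n")
        raw = b""
        while chunk := s.recv(4096):
            raw += chunk
    _headers, _, body = raw.partition(b"\r\n\r\n")
    return json.loads(body)

if __name__ == "__main__":
    info = docker_version()
    print(info.get("Version"), info.get("ApiVersion"))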
Feb 9 19:03:32.591060 env[1356]: time="2024-02-09T19:03:32.591000104Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:32.595985 env[1356]: time="2024-02-09T19:03:32.595938034Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:32.600603 env[1356]: time="2024-02-09T19:03:32.600570662Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-apiserver:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:32.606023 env[1356]: time="2024-02-09T19:03:32.605974995Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-apiserver@sha256:cfcebda74d6e665b68931d3589ee69fde81cd503ff3169888e4502af65579d98,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:32.606756 env[1356]: time="2024-02-09T19:03:32.606720199Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.27.10\" returns image reference \"sha256:7968fc5c824ed95404f421a90882835f250220c0fd799b4fceef340dd5585ed5\"" Feb 9 19:03:32.616675 env[1356]: time="2024-02-09T19:03:32.616646259Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\"" Feb 9 19:03:34.533445 env[1356]: time="2024-02-09T19:03:34.533386991Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.538643 env[1356]: time="2024-02-09T19:03:34.538598318Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.543315 env[1356]: time="2024-02-09T19:03:34.543278043Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-controller-manager:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.548350 env[1356]: time="2024-02-09T19:03:34.548315170Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-controller-manager@sha256:fa168ebca1f6dbfe86ef0a690e007531c1f53569274fc7dc2774fe228b6ce8c2,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:34.549041 env[1356]: time="2024-02-09T19:03:34.549009674Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.27.10\" returns image reference \"sha256:c8134be729ba23c6e0c3e5dd52c393fc8d3cfc688bcec33540f64bb0137b67e0\"" Feb 9 19:03:34.559181 env[1356]: time="2024-02-09T19:03:34.559150428Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\"" Feb 9 19:03:35.690619 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Feb 9 19:03:35.690889 systemd[1]: Stopped kubelet.service. Feb 9 19:03:35.692830 systemd[1]: Started kubelet.service. 
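[Editor's note] The kubelet failures in this part of the log ("failed to load Kubelet config file /var/lib/kubelet/config.yaml") are expected at this stage: the file is normally written later by a bootstrapper such as kubeadm during init/join, so systemd keeps restarting the unit until it appears. That kubeadm detail is general Kubernetes behaviour, not something this log states. A trivial sketch of the corresponding check:

#!/usr/bin/env python3
"""Report whether the files the kubelet failures above complain about exist.
The config path comes from the error message; the kubeconfig path is an
assumption about the usual kubeadm layout."""
from pathlib import Path

EXPECTED = {
    "kubelet config": Path("/var/lib/kubelet/config.yaml"),
    "kubeconfig": Path("/etc/kubernetes/kubelet.conf"),   # assumed kubeadm path
    "static pod dir": Path("/etc/kubernetes/manifests"),  # path the kubelet logs later
}

if __name__ == "__main__":
    for name, path in EXPECTED.items():
        print(f"{name:14s} {path} -> {'present' if path.exists() else 'missing'}")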
Feb 9 19:03:35.761416 kubelet[1936]: E0209 19:03:35.761357 1936 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 19:03:35.764862 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:03:35.765023 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:03:35.768794 env[1356]: time="2024-02-09T19:03:35.768750006Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:35.773972 env[1356]: time="2024-02-09T19:03:35.773312044Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:35.777263 env[1356]: time="2024-02-09T19:03:35.777226677Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-scheduler:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:35.781635 env[1356]: time="2024-02-09T19:03:35.781602914Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-scheduler@sha256:09294de61e63987f181077cbc2f5c82463878af9cd8ecc6110c54150c9ae3143,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:35.782225 env[1356]: time="2024-02-09T19:03:35.782195519Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.27.10\" returns image reference \"sha256:5eed9876e7181341b7015e3486dfd234f8e0d0d7d3d19b1bb971d720cd320975\"" Feb 9 19:03:35.792709 env[1356]: time="2024-02-09T19:03:35.792675608Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\"" Feb 9 19:03:36.914037 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3634763562.mount: Deactivated successfully. 
Feb 9 19:03:37.444707 env[1356]: time="2024-02-09T19:03:37.444654997Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:37.448209 env[1356]: time="2024-02-09T19:03:37.448159103Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:37.451061 env[1356]: time="2024-02-09T19:03:37.451031690Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.27.10,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:37.453162 env[1356]: time="2024-02-09T19:03:37.453128454Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:d084b53c772f62ec38fddb2348a82d4234016daf6cd43fedbf0b3281f3790f88,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:37.453645 env[1356]: time="2024-02-09T19:03:37.453612168Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.27.10\" returns image reference \"sha256:db7b01e105753475c198490cf875df1314fd1a599f67ea1b184586cb399e1cae\"" Feb 9 19:03:37.463123 env[1356]: time="2024-02-09T19:03:37.463096256Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Feb 9 19:03:37.901253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1848428511.mount: Deactivated successfully. Feb 9 19:03:37.920172 env[1356]: time="2024-02-09T19:03:37.920125714Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:37.927645 env[1356]: time="2024-02-09T19:03:37.927605941Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:37.931075 env[1356]: time="2024-02-09T19:03:37.931046645Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.9,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:37.935753 env[1356]: time="2024-02-09T19:03:37.935724887Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:37.936152 env[1356]: time="2024-02-09T19:03:37.936123899Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Feb 9 19:03:37.945897 env[1356]: time="2024-02-09T19:03:37.945866895Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\"" Feb 9 19:03:38.699540 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2408619981.mount: Deactivated successfully. 
Feb 9 19:03:42.763893 env[1356]: time="2024-02-09T19:03:42.763836344Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:42.769777 env[1356]: time="2024-02-09T19:03:42.769736300Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:42.773980 env[1356]: time="2024-02-09T19:03:42.773942811Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/etcd:3.5.7-0,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:42.778182 env[1356]: time="2024-02-09T19:03:42.778150722Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/etcd@sha256:51eae8381dcb1078289fa7b4f3df2630cdc18d09fb56f8e56b41c40e191d6c83,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:42.778759 env[1356]: time="2024-02-09T19:03:42.778729037Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.7-0\" returns image reference \"sha256:86b6af7dd652c1b38118be1c338e9354b33469e69a218f7e290a0ca5304ad681\"" Feb 9 19:03:42.788529 env[1356]: time="2024-02-09T19:03:42.788499295Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\"" Feb 9 19:03:43.303388 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1951753177.mount: Deactivated successfully. Feb 9 19:03:43.975364 env[1356]: time="2024-02-09T19:03:43.975300704Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:43.983389 env[1356]: time="2024-02-09T19:03:43.983342211Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:43.987799 env[1356]: time="2024-02-09T19:03:43.987766424Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/coredns/coredns:v1.10.1,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:43.993377 env[1356]: time="2024-02-09T19:03:43.993343667Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:43.993836 env[1356]: time="2024-02-09T19:03:43.993806079Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.10.1\" returns image reference \"sha256:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc\"" Feb 9 19:03:45.940603 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Feb 9 19:03:45.940868 systemd[1]: Stopped kubelet.service. Feb 9 19:03:45.946665 systemd[1]: Started kubelet.service. 
Feb 9 19:03:46.027876 kubelet[2021]: E0209 19:03:46.027819 2021 run.go:74] "command failed" err="failed to load kubelet config file, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory, path: /var/lib/kubelet/config.yaml" Feb 9 19:03:46.029828 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:03:46.029986 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:03:46.688523 systemd[1]: Stopped kubelet.service. Feb 9 19:03:46.706183 systemd[1]: Reloading. Feb 9 19:03:46.775144 /usr/lib/systemd/system-generators/torcx-generator[2051]: time="2024-02-09T19:03:46Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:03:46.775185 /usr/lib/systemd/system-generators/torcx-generator[2051]: time="2024-02-09T19:03:46Z" level=info msg="torcx already run" Feb 9 19:03:46.877722 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:03:46.877742 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:03:46.895503 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:03:46.985905 systemd[1]: Started kubelet.service. Feb 9 19:03:47.029346 kubelet[2113]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:03:47.029346 kubelet[2113]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:03:47.029346 kubelet[2113]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Feb 9 19:03:47.029855 kubelet[2113]: I0209 19:03:47.029394 2113 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:03:47.406116 kubelet[2113]: I0209 19:03:47.405622 2113 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 19:03:47.406116 kubelet[2113]: I0209 19:03:47.405650 2113 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:03:47.406116 kubelet[2113]: I0209 19:03:47.405919 2113 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 19:03:47.410315 kubelet[2113]: E0209 19:03:47.410288 2113 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.4:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:47.410479 kubelet[2113]: I0209 19:03:47.410447 2113 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:03:47.412573 kubelet[2113]: I0209 19:03:47.412542 2113 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Feb 9 19:03:47.412881 kubelet[2113]: I0209 19:03:47.412868 2113 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:03:47.413028 kubelet[2113]: I0209 19:03:47.413007 2113 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:03:47.413160 kubelet[2113]: I0209 19:03:47.413152 2113 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:03:47.413208 kubelet[2113]: I0209 19:03:47.413202 2113 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 19:03:47.413332 kubelet[2113]: I0209 19:03:47.413323 2113 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:03:47.419148 kubelet[2113]: I0209 19:03:47.419131 2113 kubelet.go:405] "Attempting to sync node with API server" Feb 9 19:03:47.419271 kubelet[2113]: I0209 19:03:47.419262 2113 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 
19:03:47.419363 kubelet[2113]: I0209 19:03:47.419354 2113 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:03:47.419439 kubelet[2113]: I0209 19:03:47.419431 2113 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:03:47.419647 kubelet[2113]: W0209 19:03:47.419603 2113 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-54659eee1f&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:47.419726 kubelet[2113]: E0209 19:03:47.419667 2113 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-54659eee1f&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:47.420310 kubelet[2113]: I0209 19:03:47.420295 2113 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:03:47.420718 kubelet[2113]: W0209 19:03:47.420702 2113 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Feb 9 19:03:47.421284 kubelet[2113]: I0209 19:03:47.421267 2113 server.go:1168] "Started kubelet" Feb 9 19:03:47.421476 kubelet[2113]: W0209 19:03:47.421442 2113 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:47.421584 kubelet[2113]: E0209 19:03:47.421573 2113 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:47.428236 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:03:47.428320 kubelet[2113]: E0209 19:03:47.425123 2113 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:03:47.428320 kubelet[2113]: E0209 19:03:47.425160 2113 kubelet.go:1400] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:03:47.428320 kubelet[2113]: E0209 19:03:47.425383 2113 event.go:289] Unable to write event: '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"ci-3510.3.2-a-54659eee1f.17b24723b57eac22", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"ci-3510.3.2-a-54659eee1f", UID:"ci-3510.3.2-a-54659eee1f", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"ci-3510.3.2-a-54659eee1f"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 3, 47, 421244450, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 3, 47, 421244450, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'Post "https://10.200.8.4:6443/api/v1/namespaces/default/events": dial tcp 10.200.8.4:6443: connect: connection refused'(may retry after sleeping) Feb 9 19:03:47.428320 kubelet[2113]: I0209 19:03:47.426259 2113 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:03:47.428481 kubelet[2113]: I0209 19:03:47.426542 2113 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:03:47.428481 kubelet[2113]: I0209 19:03:47.427187 2113 server.go:461] "Adding debug handlers to kubelet server" Feb 9 19:03:47.428645 kubelet[2113]: I0209 19:03:47.428629 2113 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:03:47.429281 kubelet[2113]: I0209 19:03:47.429130 2113 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 19:03:47.430439 kubelet[2113]: I0209 19:03:47.430416 2113 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 19:03:47.431355 kubelet[2113]: W0209 19:03:47.431317 2113 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:47.431355 kubelet[2113]: E0209 19:03:47.431356 2113 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:47.431719 kubelet[2113]: E0209 19:03:47.431706 2113 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"ci-3510.3.2-a-54659eee1f\" not found" Feb 9 19:03:47.432001 kubelet[2113]: E0209 19:03:47.431989 2113 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-54659eee1f?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="200ms" Feb 9 19:03:47.484221 
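Note on the repeated "Failed to ensure lease exists, will retry" errors above: they come from the kubelet's node-lease controller, which renews a Lease object in the kube-node-lease namespace but cannot reach the API server at 10.200.8.4:6443 while it is still starting. As a rough, hedged sketch only (not the kubelet's actual code), the equivalent client-go request looks roughly like this; the kubeconfig path and lease duration are assumptions for the example:

    package main

    import (
    	"context"
    	"fmt"

    	coordinationv1 "k8s.io/api/coordination/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Hypothetical kubeconfig path for the sketch; the real kubelet bootstraps its own client.
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
    	if err != nil {
    		panic(err)
    	}
    	client := kubernetes.NewForConfigOrDie(cfg)

    	nodeName := "ci-3510.3.2-a-54659eee1f"
    	leaseSeconds := int32(40) // assumed duration, illustrative only
    	lease := &coordinationv1.Lease{
    		ObjectMeta: metav1.ObjectMeta{Name: nodeName, Namespace: "kube-node-lease"},
    		Spec: coordinationv1.LeaseSpec{
    			HolderIdentity:       &nodeName,
    			LeaseDurationSeconds: &leaseSeconds,
    		},
    	}
    	// This Create is the call that fails with "connection refused" in the log above.
    	created, err := client.CoordinationV1().Leases("kube-node-lease").Create(context.TODO(), lease, metav1.CreateOptions{})
    	if err != nil {
    		fmt.Println("will retry:", err)
    		return
    	}
    	fmt.Println("lease created:", created.Name)
    }
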
kubelet[2113]: I0209 19:03:47.484189 2113 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:03:47.487092 kubelet[2113]: I0209 19:03:47.487065 2113 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:03:47.487092 kubelet[2113]: I0209 19:03:47.487090 2113 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:03:47.487261 kubelet[2113]: I0209 19:03:47.487112 2113 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:03:47.487478 kubelet[2113]: I0209 19:03:47.487457 2113 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:03:47.487612 kubelet[2113]: I0209 19:03:47.487487 2113 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 19:03:47.487612 kubelet[2113]: I0209 19:03:47.487516 2113 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 19:03:47.487612 kubelet[2113]: E0209 19:03:47.487587 2113 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:03:47.488988 kubelet[2113]: W0209 19:03:47.488957 2113 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:47.489084 kubelet[2113]: E0209 19:03:47.488998 2113 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:47.492256 kubelet[2113]: I0209 19:03:47.492235 2113 policy_none.go:49] "None policy: Start" Feb 9 19:03:47.492803 kubelet[2113]: I0209 19:03:47.492791 2113 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:03:47.492977 kubelet[2113]: I0209 19:03:47.492951 2113 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:03:47.499605 systemd[1]: Created slice kubepods.slice. Feb 9 19:03:47.503859 systemd[1]: Created slice kubepods-besteffort.slice. Feb 9 19:03:47.510460 systemd[1]: Created slice kubepods-burstable.slice. 
Feb 9 19:03:47.512150 kubelet[2113]: I0209 19:03:47.512129 2113 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:03:47.512365 kubelet[2113]: I0209 19:03:47.512349 2113 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:03:47.513453 kubelet[2113]: E0209 19:03:47.513432 2113 eviction_manager.go:262] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3510.3.2-a-54659eee1f\" not found" Feb 9 19:03:47.533736 kubelet[2113]: I0209 19:03:47.533707 2113 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.534077 kubelet[2113]: E0209 19:03:47.534053 2113 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.588449 kubelet[2113]: I0209 19:03:47.588382 2113 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:03:47.590354 kubelet[2113]: I0209 19:03:47.590325 2113 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:03:47.591967 kubelet[2113]: I0209 19:03:47.591942 2113 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:03:47.598434 systemd[1]: Created slice kubepods-burstable-podcf8aa5faa75ab41f36ec781e5213098c.slice. Feb 9 19:03:47.607735 systemd[1]: Created slice kubepods-burstable-podd62246cf83538c4c9e2cd1b861cba311.slice. Feb 9 19:03:47.617683 systemd[1]: Created slice kubepods-burstable-pod07aa244cef30da7d5e8eea2fe7183d82.slice. Feb 9 19:03:47.631875 kubelet[2113]: I0209 19:03:47.631833 2113 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d62246cf83538c4c9e2cd1b861cba311-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-54659eee1f\" (UID: \"d62246cf83538c4c9e2cd1b861cba311\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.632020 kubelet[2113]: I0209 19:03:47.631902 2113 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d62246cf83538c4c9e2cd1b861cba311-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-54659eee1f\" (UID: \"d62246cf83538c4c9e2cd1b861cba311\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.632020 kubelet[2113]: I0209 19:03:47.631931 2113 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d62246cf83538c4c9e2cd1b861cba311-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-54659eee1f\" (UID: \"d62246cf83538c4c9e2cd1b861cba311\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.632020 kubelet[2113]: I0209 19:03:47.631956 2113 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf8aa5faa75ab41f36ec781e5213098c-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-54659eee1f\" (UID: \"cf8aa5faa75ab41f36ec781e5213098c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.632020 kubelet[2113]: I0209 19:03:47.631981 2113 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/cf8aa5faa75ab41f36ec781e5213098c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-54659eee1f\" (UID: \"cf8aa5faa75ab41f36ec781e5213098c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.632020 kubelet[2113]: I0209 19:03:47.632007 2113 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf8aa5faa75ab41f36ec781e5213098c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-54659eee1f\" (UID: \"cf8aa5faa75ab41f36ec781e5213098c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.632216 kubelet[2113]: I0209 19:03:47.632033 2113 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/d62246cf83538c4c9e2cd1b861cba311-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-54659eee1f\" (UID: \"d62246cf83538c4c9e2cd1b861cba311\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.632216 kubelet[2113]: I0209 19:03:47.632064 2113 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d62246cf83538c4c9e2cd1b861cba311-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-54659eee1f\" (UID: \"d62246cf83538c4c9e2cd1b861cba311\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.632216 kubelet[2113]: I0209 19:03:47.632094 2113 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07aa244cef30da7d5e8eea2fe7183d82-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-54659eee1f\" (UID: \"07aa244cef30da7d5e8eea2fe7183d82\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.632462 kubelet[2113]: E0209 19:03:47.632436 2113 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-54659eee1f?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="400ms" Feb 9 19:03:47.736684 kubelet[2113]: I0209 19:03:47.736644 2113 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.737001 kubelet[2113]: E0209 19:03:47.736981 2113 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:47.907722 env[1356]: time="2024-02-09T19:03:47.907669746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-54659eee1f,Uid:cf8aa5faa75ab41f36ec781e5213098c,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:47.911318 env[1356]: time="2024-02-09T19:03:47.911280429Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-54659eee1f,Uid:d62246cf83538c4c9e2cd1b861cba311,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:47.922728 env[1356]: time="2024-02-09T19:03:47.922696292Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-54659eee1f,Uid:07aa244cef30da7d5e8eea2fe7183d82,Namespace:kube-system,Attempt:0,}" Feb 9 19:03:48.033922 kubelet[2113]: E0209 19:03:48.033802 2113 controller.go:146] "Failed 
to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-54659eee1f?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="800ms" Feb 9 19:03:48.138986 kubelet[2113]: I0209 19:03:48.138949 2113 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:48.139308 kubelet[2113]: E0209 19:03:48.139287 2113 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:48.461033 kubelet[2113]: W0209 19:03:48.460968 2113 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:48.461033 kubelet[2113]: E0209 19:03:48.461040 2113 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.4:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:48.514957 kubelet[2113]: W0209 19:03:48.514885 2113 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:48.515150 kubelet[2113]: E0209 19:03:48.514999 2113 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.4:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:48.550273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4007078013.mount: Deactivated successfully. 
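The recurring reflector failures ("failed to list *v1.Node ... fieldSelector=metadata.name%3Dci-3510.3.2-a-54659eee1f") are the kubelet's informers retrying their initial List against the unreachable API server. A minimal sketch of the same List request with client-go, assuming a `client` built as in the earlier lease example:

    // Sketch only: the informer's underlying List call, expressed directly with client-go.
    nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
    	FieldSelector: "metadata.name=ci-3510.3.2-a-54659eee1f",
    	Limit:         500,
    })
    if err != nil {
    	// While 10.200.8.4:6443 refuses connections, this is the error the reflector logs above.
    	fmt.Println("list failed, reflector will back off and retry:", err)
    } else {
    	fmt.Println("listed", len(nodes.Items), "node(s)")
    }
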
Feb 9 19:03:48.575364 env[1356]: time="2024-02-09T19:03:48.575304961Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:48.578635 env[1356]: time="2024-02-09T19:03:48.578588335Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:48.589507 env[1356]: time="2024-02-09T19:03:48.589472479Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:48.594675 env[1356]: time="2024-02-09T19:03:48.594638695Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:48.597746 env[1356]: time="2024-02-09T19:03:48.597709963Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:48.602851 env[1356]: time="2024-02-09T19:03:48.602814378Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:48.609442 env[1356]: time="2024-02-09T19:03:48.609397125Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:48.613118 env[1356]: time="2024-02-09T19:03:48.613082908Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:48.618823 env[1356]: time="2024-02-09T19:03:48.618745535Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:48.625237 env[1356]: time="2024-02-09T19:03:48.625203779Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:48.634504 env[1356]: time="2024-02-09T19:03:48.634466187Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:48.665884 env[1356]: time="2024-02-09T19:03:48.662386512Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:48.665884 env[1356]: time="2024-02-09T19:03:48.662417113Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:48.665884 env[1356]: time="2024-02-09T19:03:48.662427013Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:48.665884 env[1356]: time="2024-02-09T19:03:48.662594617Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3cc8c92264936fd4de41930b113cb5739553aed7485c8adb5c91ef2a7649d7bd pid=2153 runtime=io.containerd.runc.v2 Feb 9 19:03:48.666179 env[1356]: time="2024-02-09T19:03:48.665956992Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:03:48.688712 systemd[1]: Started cri-containerd-3cc8c92264936fd4de41930b113cb5739553aed7485c8adb5c91ef2a7649d7bd.scope. Feb 9 19:03:48.725086 kubelet[2113]: W0209 19:03:48.724927 2113 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-54659eee1f&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:48.725086 kubelet[2113]: E0209 19:03:48.724997 2113 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.4:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3510.3.2-a-54659eee1f&limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:48.726706 env[1356]: time="2024-02-09T19:03:48.708390443Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:48.726706 env[1356]: time="2024-02-09T19:03:48.708430244Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:48.726706 env[1356]: time="2024-02-09T19:03:48.708445244Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:48.726706 env[1356]: time="2024-02-09T19:03:48.708590348Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/60fead98e00255bcb0b259a73fef802fa755767165776cca8208d9d8289e4da2 pid=2185 runtime=io.containerd.runc.v2 Feb 9 19:03:48.728844 env[1356]: time="2024-02-09T19:03:48.721838844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:03:48.728844 env[1356]: time="2024-02-09T19:03:48.721874045Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:03:48.728844 env[1356]: time="2024-02-09T19:03:48.721887246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:03:48.728844 env[1356]: time="2024-02-09T19:03:48.721994548Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/ed0067c9974cfe0c020aa66212321e0ae25e601f6d6b846255cf2ad0b0c1f4c0 pid=2203 runtime=io.containerd.runc.v2 Feb 9 19:03:48.749823 systemd[1]: Started cri-containerd-ed0067c9974cfe0c020aa66212321e0ae25e601f6d6b846255cf2ad0b0c1f4c0.scope. 
Feb 9 19:03:48.751689 kubelet[2113]: W0209 19:03:48.751336 2113 reflector.go:533] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:48.751689 kubelet[2113]: E0209 19:03:48.751420 2113 reflector.go:148] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.4:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.4:6443: connect: connection refused Feb 9 19:03:48.761179 env[1356]: time="2024-02-09T19:03:48.760779217Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3510.3.2-a-54659eee1f,Uid:cf8aa5faa75ab41f36ec781e5213098c,Namespace:kube-system,Attempt:0,} returns sandbox id \"3cc8c92264936fd4de41930b113cb5739553aed7485c8adb5c91ef2a7649d7bd\"" Feb 9 19:03:48.762949 systemd[1]: Started cri-containerd-60fead98e00255bcb0b259a73fef802fa755767165776cca8208d9d8289e4da2.scope. Feb 9 19:03:48.771136 env[1356]: time="2024-02-09T19:03:48.771095348Z" level=info msg="CreateContainer within sandbox \"3cc8c92264936fd4de41930b113cb5739553aed7485c8adb5c91ef2a7649d7bd\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Feb 9 19:03:48.815018 env[1356]: time="2024-02-09T19:03:48.814972931Z" level=info msg="CreateContainer within sandbox \"3cc8c92264936fd4de41930b113cb5739553aed7485c8adb5c91ef2a7649d7bd\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"b94eb39ee8aef2c2207110981ae79717089b7693d3026d1738205953a7e20535\"" Feb 9 19:03:48.815875 env[1356]: time="2024-02-09T19:03:48.815843751Z" level=info msg="StartContainer for \"b94eb39ee8aef2c2207110981ae79717089b7693d3026d1738205953a7e20535\"" Feb 9 19:03:48.824373 env[1356]: time="2024-02-09T19:03:48.824342241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3510.3.2-a-54659eee1f,Uid:07aa244cef30da7d5e8eea2fe7183d82,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed0067c9974cfe0c020aa66212321e0ae25e601f6d6b846255cf2ad0b0c1f4c0\"" Feb 9 19:03:48.826868 env[1356]: time="2024-02-09T19:03:48.826828697Z" level=info msg="CreateContainer within sandbox \"ed0067c9974cfe0c020aa66212321e0ae25e601f6d6b846255cf2ad0b0c1f4c0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Feb 9 19:03:48.834925 kubelet[2113]: E0209 19:03:48.834870 2113 controller.go:146] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.4:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3510.3.2-a-54659eee1f?timeout=10s\": dial tcp 10.200.8.4:6443: connect: connection refused" interval="1.6s" Feb 9 19:03:48.843661 env[1356]: time="2024-02-09T19:03:48.843616873Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3510.3.2-a-54659eee1f,Uid:d62246cf83538c4c9e2cd1b861cba311,Namespace:kube-system,Attempt:0,} returns sandbox id \"60fead98e00255bcb0b259a73fef802fa755767165776cca8208d9d8289e4da2\"" Feb 9 19:03:48.847677 env[1356]: time="2024-02-09T19:03:48.847638163Z" level=info msg="CreateContainer within sandbox \"60fead98e00255bcb0b259a73fef802fa755767165776cca8208d9d8289e4da2\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Feb 9 19:03:48.859467 systemd[1]: Started cri-containerd-b94eb39ee8aef2c2207110981ae79717089b7693d3026d1738205953a7e20535.scope. 
Feb 9 19:03:48.870302 env[1356]: time="2024-02-09T19:03:48.870259370Z" level=info msg="CreateContainer within sandbox \"ed0067c9974cfe0c020aa66212321e0ae25e601f6d6b846255cf2ad0b0c1f4c0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"432982fc31b7eecd157cb6bb64019cbc10d2d3dae1711ea9748f4695b27c3362\"" Feb 9 19:03:48.876392 env[1356]: time="2024-02-09T19:03:48.876358807Z" level=info msg="StartContainer for \"432982fc31b7eecd157cb6bb64019cbc10d2d3dae1711ea9748f4695b27c3362\"" Feb 9 19:03:48.899128 systemd[1]: Started cri-containerd-432982fc31b7eecd157cb6bb64019cbc10d2d3dae1711ea9748f4695b27c3362.scope. Feb 9 19:03:48.908344 env[1356]: time="2024-02-09T19:03:48.908296022Z" level=info msg="CreateContainer within sandbox \"60fead98e00255bcb0b259a73fef802fa755767165776cca8208d9d8289e4da2\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"9f593bd1b1cbdf55e41995df2469d4bac991819dbfcf8783e5462fde48f429ab\"" Feb 9 19:03:48.910907 env[1356]: time="2024-02-09T19:03:48.910869580Z" level=info msg="StartContainer for \"9f593bd1b1cbdf55e41995df2469d4bac991819dbfcf8783e5462fde48f429ab\"" Feb 9 19:03:48.936584 env[1356]: time="2024-02-09T19:03:48.934762315Z" level=info msg="StartContainer for \"b94eb39ee8aef2c2207110981ae79717089b7693d3026d1738205953a7e20535\" returns successfully" Feb 9 19:03:48.942575 kubelet[2113]: I0209 19:03:48.941265 2113 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:48.942575 kubelet[2113]: E0209 19:03:48.941627 2113 kubelet_node_status.go:92] "Unable to register node with API server" err="Post \"https://10.200.8.4:6443/api/v1/nodes\": dial tcp 10.200.8.4:6443: connect: connection refused" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:48.947225 systemd[1]: Started cri-containerd-9f593bd1b1cbdf55e41995df2469d4bac991819dbfcf8783e5462fde48f429ab.scope. Feb 9 19:03:49.019859 env[1356]: time="2024-02-09T19:03:49.019735808Z" level=info msg="StartContainer for \"432982fc31b7eecd157cb6bb64019cbc10d2d3dae1711ea9748f4695b27c3362\" returns successfully" Feb 9 19:03:49.031174 env[1356]: time="2024-02-09T19:03:49.031121256Z" level=info msg="StartContainer for \"9f593bd1b1cbdf55e41995df2469d4bac991819dbfcf8783e5462fde48f429ab\" returns successfully" Feb 9 19:03:49.545690 systemd[1]: run-containerd-runc-k8s.io-3cc8c92264936fd4de41930b113cb5739553aed7485c8adb5c91ef2a7649d7bd-runc.ZdEf8A.mount: Deactivated successfully. Feb 9 19:03:50.544078 kubelet[2113]: I0209 19:03:50.544051 2113 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:51.717543 kubelet[2113]: E0209 19:03:51.717506 2113 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-3510.3.2-a-54659eee1f\" not found" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:51.727993 kubelet[2113]: I0209 19:03:51.727967 2113 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:52.422610 kubelet[2113]: I0209 19:03:52.422541 2113 apiserver.go:52] "Watching apiserver" Feb 9 19:03:52.431560 kubelet[2113]: I0209 19:03:52.431525 2113 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 19:03:52.472720 kubelet[2113]: I0209 19:03:52.472677 2113 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:03:54.074809 systemd[1]: Reloading. 
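The "RunPodSandbox ... returns sandbox id", "CreateContainer within sandbox", and "StartContainer ... returns successfully" entries above are the kubelet driving containerd over the CRI gRPC API. A hedged sketch of that three-step sequence using the k8s.io/cri-api types follows; it is not the kubelet's actual code, and the socket path and image tag are assumptions for the example:

    package main

    import (
    	"context"

    	"google.golang.org/grpc"
    	"google.golang.org/grpc/credentials/insecure"
    	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
    	// Assumed containerd CRI endpoint (the usual --container-runtime-endpoint value).
    	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
    		grpc.WithTransportCredentials(insecure.NewCredentials()))
    	if err != nil {
    		panic(err)
    	}
    	defer conn.Close()
    	rt := runtimeapi.NewRuntimeServiceClient(conn)
    	ctx := context.TODO()

    	// Sandbox config mirroring the "RunPodSandbox for &PodSandboxMetadata{...}" entries.
    	sbConfig := &runtimeapi.PodSandboxConfig{
    		Metadata: &runtimeapi.PodSandboxMetadata{
    			Name:      "kube-apiserver-ci-3510.3.2-a-54659eee1f",
    			Namespace: "kube-system",
    			Uid:       "cf8aa5faa75ab41f36ec781e5213098c",
    		},
    	}

    	// 1. "RunPodSandbox ... returns sandbox id"
    	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sbConfig})
    	if err != nil {
    		panic(err)
    	}

    	// 2. "CreateContainer within sandbox ... returns container id"
    	created, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
    		PodSandboxId:  sb.PodSandboxId,
    		SandboxConfig: sbConfig,
    		Config: &runtimeapi.ContainerConfig{
    			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-apiserver"},
    			// Image tag is an assumption for the sketch; the log does not name it.
    			Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-apiserver:v1.27.2"},
    		},
    	})
    	if err != nil {
    		panic(err)
    	}

    	// 3. "StartContainer ... returns successfully"
    	_, err = rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: created.ContainerId})
    	if err != nil {
    		panic(err)
    	}
    }
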
Feb 9 19:03:54.167001 /usr/lib/systemd/system-generators/torcx-generator[2412]: time="2024-02-09T19:03:54Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:03:54.167042 /usr/lib/systemd/system-generators/torcx-generator[2412]: time="2024-02-09T19:03:54Z" level=info msg="torcx already run" Feb 9 19:03:54.256523 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:03:54.256551 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:03:54.274638 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:03:54.379730 systemd[1]: Stopping kubelet.service... Feb 9 19:03:54.380444 kubelet[2113]: I0209 19:03:54.380412 2113 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:03:54.396906 systemd[1]: kubelet.service: Deactivated successfully. Feb 9 19:03:54.397131 systemd[1]: Stopped kubelet.service. Feb 9 19:03:54.399188 systemd[1]: Started kubelet.service. Feb 9 19:03:54.480900 kubelet[2472]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:03:54.481227 kubelet[2472]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:03:54.481271 kubelet[2472]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:03:54.481392 kubelet[2472]: I0209 19:03:54.481359 2472 server.go:199] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:03:54.485518 kubelet[2472]: I0209 19:03:54.485494 2472 server.go:415] "Kubelet version" kubeletVersion="v1.27.2" Feb 9 19:03:54.485727 kubelet[2472]: I0209 19:03:54.485714 2472 server.go:417] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:03:54.485973 kubelet[2472]: I0209 19:03:54.485962 2472 server.go:837] "Client rotation is on, will bootstrap in background" Feb 9 19:03:54.487479 kubelet[2472]: I0209 19:03:54.487461 2472 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Feb 9 19:03:54.489691 kubelet[2472]: I0209 19:03:54.489673 2472 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:03:54.492221 kubelet[2472]: I0209 19:03:54.492194 2472 server.go:662] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:03:54.492419 kubelet[2472]: I0209 19:03:54.492402 2472 container_manager_linux.go:266] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:03:54.492488 kubelet[2472]: I0209 19:03:54.492480 2472 container_manager_linux.go:271] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:systemd KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:} {Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] TopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] PodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms TopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]} Feb 9 19:03:54.492631 kubelet[2472]: I0209 19:03:54.492503 2472 topology_manager.go:136] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container" Feb 9 19:03:54.492631 kubelet[2472]: I0209 19:03:54.492517 2472 container_manager_linux.go:302] "Creating device plugin manager" Feb 9 19:03:54.492631 kubelet[2472]: I0209 19:03:54.492567 2472 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:03:54.496080 kubelet[2472]: I0209 19:03:54.496066 2472 kubelet.go:405] "Attempting to sync node with API server" Feb 9 19:03:54.496196 kubelet[2472]: I0209 19:03:54.496180 2472 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:03:54.496295 kubelet[2472]: I0209 19:03:54.496283 2472 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:03:54.496381 kubelet[2472]: I0209 19:03:54.496367 2472 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:03:54.505684 kubelet[2472]: I0209 19:03:54.505667 2472 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:03:54.506263 kubelet[2472]: I0209 19:03:54.506248 2472 server.go:1168] "Started kubelet" Feb 9 19:03:54.522662 sudo[2485]: root : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/tar -xf /opt/bin/cilium.tar.gz -C /opt/bin Feb 9 19:03:54.522918 sudo[2485]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=0) Feb 9 19:03:54.527112 kubelet[2472]: I0209 19:03:54.527096 2472 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:03:54.527714 kubelet[2472]: I0209 19:03:54.527695 2472 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:03:54.528581 kubelet[2472]: I0209 19:03:54.528540 2472 server.go:461] "Adding debug handlers to kubelet server" Feb 9 19:03:54.531026 kubelet[2472]: I0209 19:03:54.530870 2472 volume_manager.go:284] "Starting Kubelet Volume Manager" Feb 9 19:03:54.532056 kubelet[2472]: I0209 
19:03:54.531499 2472 desired_state_of_world_populator.go:145] "Desired state populator starts to run" Feb 9 19:03:54.544094 kubelet[2472]: I0209 19:03:54.542762 2472 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:03:54.554403 kubelet[2472]: E0209 19:03:54.554370 2472 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:03:54.554403 kubelet[2472]: E0209 19:03:54.554404 2472 kubelet.go:1400] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:03:54.595859 kubelet[2472]: I0209 19:03:54.595831 2472 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4 Feb 9 19:03:54.598934 kubelet[2472]: I0209 19:03:54.598915 2472 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6 Feb 9 19:03:54.599101 kubelet[2472]: I0209 19:03:54.599090 2472 status_manager.go:207] "Starting to sync pod status with apiserver" Feb 9 19:03:54.599187 kubelet[2472]: I0209 19:03:54.599177 2472 kubelet.go:2257] "Starting kubelet main sync loop" Feb 9 19:03:54.599298 kubelet[2472]: E0209 19:03:54.599289 2472 kubelet.go:2281] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Feb 9 19:03:54.635976 kubelet[2472]: I0209 19:03:54.634294 2472 kubelet_node_status.go:70] "Attempting to register node" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:54.649785 kubelet[2472]: I0209 19:03:54.649760 2472 kubelet_node_status.go:108] "Node was previously registered" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:54.650021 kubelet[2472]: I0209 19:03:54.650002 2472 kubelet_node_status.go:73] "Successfully registered node" node="ci-3510.3.2-a-54659eee1f" Feb 9 19:03:54.671447 kubelet[2472]: I0209 19:03:54.671409 2472 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:03:54.672612 kubelet[2472]: I0209 19:03:54.672589 2472 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:03:54.672751 kubelet[2472]: I0209 19:03:54.672741 2472 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:03:54.673018 kubelet[2472]: I0209 19:03:54.673005 2472 state_mem.go:88] "Updated default CPUSet" cpuSet="" Feb 9 19:03:54.673123 kubelet[2472]: I0209 19:03:54.673116 2472 state_mem.go:96] "Updated CPUSet assignments" assignments=map[] Feb 9 19:03:54.673183 kubelet[2472]: I0209 19:03:54.673177 2472 policy_none.go:49] "None policy: Start" Feb 9 19:03:54.676581 kubelet[2472]: I0209 19:03:54.676542 2472 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:03:54.676704 kubelet[2472]: I0209 19:03:54.676696 2472 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:03:54.676931 kubelet[2472]: I0209 19:03:54.676920 2472 state_mem.go:75] "Updated machine memory state" Feb 9 19:03:54.686491 kubelet[2472]: I0209 19:03:54.686473 2472 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:03:54.697263 kubelet[2472]: I0209 19:03:54.697193 2472 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:03:54.699910 kubelet[2472]: I0209 19:03:54.699881 2472 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:03:54.707095 kubelet[2472]: I0209 19:03:54.707071 2472 topology_manager.go:212] "Topology Admit 
Handler" Feb 9 19:03:54.710115 kubelet[2472]: I0209 19:03:54.710092 2472 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:03:54.713806 kubelet[2472]: W0209 19:03:54.710922 2472 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 19:03:54.723212 kubelet[2472]: W0209 19:03:54.723183 2472 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 19:03:54.723600 kubelet[2472]: W0209 19:03:54.723586 2472 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 19:03:54.833008 kubelet[2472]: I0209 19:03:54.832970 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d62246cf83538c4c9e2cd1b861cba311-ca-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-54659eee1f\" (UID: \"d62246cf83538c4c9e2cd1b861cba311\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:54.833253 kubelet[2472]: I0209 19:03:54.833226 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/d62246cf83538c4c9e2cd1b861cba311-flexvolume-dir\") pod \"kube-controller-manager-ci-3510.3.2-a-54659eee1f\" (UID: \"d62246cf83538c4c9e2cd1b861cba311\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:54.833332 kubelet[2472]: I0209 19:03:54.833262 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d62246cf83538c4c9e2cd1b861cba311-k8s-certs\") pod \"kube-controller-manager-ci-3510.3.2-a-54659eee1f\" (UID: \"d62246cf83538c4c9e2cd1b861cba311\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:54.833332 kubelet[2472]: I0209 19:03:54.833292 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/07aa244cef30da7d5e8eea2fe7183d82-kubeconfig\") pod \"kube-scheduler-ci-3510.3.2-a-54659eee1f\" (UID: \"07aa244cef30da7d5e8eea2fe7183d82\") " pod="kube-system/kube-scheduler-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:54.833332 kubelet[2472]: I0209 19:03:54.833322 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf8aa5faa75ab41f36ec781e5213098c-k8s-certs\") pod \"kube-apiserver-ci-3510.3.2-a-54659eee1f\" (UID: \"cf8aa5faa75ab41f36ec781e5213098c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:54.833462 kubelet[2472]: I0209 19:03:54.833361 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf8aa5faa75ab41f36ec781e5213098c-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3510.3.2-a-54659eee1f\" (UID: \"cf8aa5faa75ab41f36ec781e5213098c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:54.833462 kubelet[2472]: I0209 19:03:54.833393 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/d62246cf83538c4c9e2cd1b861cba311-kubeconfig\") pod \"kube-controller-manager-ci-3510.3.2-a-54659eee1f\" (UID: \"d62246cf83538c4c9e2cd1b861cba311\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:54.833462 kubelet[2472]: I0209 19:03:54.833434 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d62246cf83538c4c9e2cd1b861cba311-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3510.3.2-a-54659eee1f\" (UID: \"d62246cf83538c4c9e2cd1b861cba311\") " pod="kube-system/kube-controller-manager-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:54.833748 kubelet[2472]: I0209 19:03:54.833470 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf8aa5faa75ab41f36ec781e5213098c-ca-certs\") pod \"kube-apiserver-ci-3510.3.2-a-54659eee1f\" (UID: \"cf8aa5faa75ab41f36ec781e5213098c\") " pod="kube-system/kube-apiserver-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:55.118338 sudo[2485]: pam_unix(sudo:session): session closed for user root Feb 9 19:03:55.497921 kubelet[2472]: I0209 19:03:55.497877 2472 apiserver.go:52] "Watching apiserver" Feb 9 19:03:55.533080 kubelet[2472]: I0209 19:03:55.533048 2472 desired_state_of_world_populator.go:153] "Finished populating initial desired state of world" Feb 9 19:03:55.539279 kubelet[2472]: I0209 19:03:55.539256 2472 reconciler.go:41] "Reconciler: start to sync state" Feb 9 19:03:55.653571 kubelet[2472]: W0209 19:03:55.653527 2472 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 19:03:55.653767 kubelet[2472]: E0209 19:03:55.653622 2472 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-ci-3510.3.2-a-54659eee1f\" already exists" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:55.656732 kubelet[2472]: W0209 19:03:55.656703 2472 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Feb 9 19:03:55.656865 kubelet[2472]: E0209 19:03:55.656778 2472 kubelet.go:1856] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-3510.3.2-a-54659eee1f\" already exists" pod="kube-system/kube-apiserver-ci-3510.3.2-a-54659eee1f" Feb 9 19:03:55.757422 kubelet[2472]: I0209 19:03:55.757301 2472 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3510.3.2-a-54659eee1f" podStartSLOduration=1.757235731 podCreationTimestamp="2024-02-09 19:03:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:03:55.755231994 +0000 UTC m=+1.349888542" watchObservedRunningTime="2024-02-09 19:03:55.757235731 +0000 UTC m=+1.351892379" Feb 9 19:03:55.809975 kubelet[2472]: I0209 19:03:55.809933 2472 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3510.3.2-a-54659eee1f" podStartSLOduration=1.809871212 podCreationTimestamp="2024-02-09 19:03:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:03:55.779889653 +0000 UTC m=+1.374546301" 
watchObservedRunningTime="2024-02-09 19:03:55.809871212 +0000 UTC m=+1.404527860" Feb 9 19:03:56.655709 sudo[1683]: pam_unix(sudo:session): session closed for user root Feb 9 19:03:56.752957 sshd[1603]: pam_unix(sshd:session): session closed for user core Feb 9 19:03:56.756543 systemd[1]: sshd@4-10.200.8.4:22-10.200.12.6:35398.service: Deactivated successfully. Feb 9 19:03:56.757395 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:03:56.757605 systemd[1]: session-7.scope: Consumed 4.067s CPU time. Feb 9 19:03:56.758157 systemd-logind[1334]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:03:56.758957 systemd-logind[1334]: Removed session 7. Feb 9 19:04:03.317667 kubelet[2472]: I0209 19:04:03.317569 2472 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3510.3.2-a-54659eee1f" podStartSLOduration=9.317516845 podCreationTimestamp="2024-02-09 19:03:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:03:55.810664927 +0000 UTC m=+1.405321475" watchObservedRunningTime="2024-02-09 19:04:03.317516845 +0000 UTC m=+8.912173493" Feb 9 19:04:09.980600 kubelet[2472]: I0209 19:04:09.980562 2472 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:04:09.986000 kubelet[2472]: I0209 19:04:09.985414 2472 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:04:09.988154 systemd[1]: Created slice kubepods-besteffort-pod157a6499_b703_482e_8399_e600614b8cef.slice. Feb 9 19:04:09.999108 systemd[1]: Created slice kubepods-burstable-podf5119a16_45fd_41e9_abc9_5a69ccc9dcea.slice. Feb 9 19:04:10.011995 kubelet[2472]: I0209 19:04:10.011951 2472 kuberuntime_manager.go:1460] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Feb 9 19:04:10.012458 env[1356]: time="2024-02-09T19:04:10.012408052Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:04:10.012839 kubelet[2472]: I0209 19:04:10.012704 2472 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Feb 9 19:04:10.118144 kubelet[2472]: I0209 19:04:10.118097 2472 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:04:10.124310 systemd[1]: Created slice kubepods-besteffort-pod4b86996d_34d0_490b_a81c_cbce9843b45a.slice. 
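The "Updating runtime config through cri with podcidr" and "Updating Pod CIDR ... 192.168.0.0/24" entries above show the kubelet pushing the node's pod CIDR to the container runtime once it learns it from the API server. A small hedged fragment of the corresponding CRI call, reusing the `rt` RuntimeServiceClient and `ctx` from the earlier CRI sketch:

    // Sketch: propagate the node's pod CIDR to the runtime via CRI UpdateRuntimeConfig.
    _, err := rt.UpdateRuntimeConfig(ctx, &runtimeapi.UpdateRuntimeConfigRequest{
    	RuntimeConfig: &runtimeapi.RuntimeConfig{
    		NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.0.0/24"},
    	},
    })
    if err != nil {
    	panic(err)
    }
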
Feb 9 19:04:10.124697 kubelet[2472]: I0209 19:04:10.124397 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cni-path\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.124697 kubelet[2472]: I0209 19:04:10.124456 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cilium-cgroup\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.124697 kubelet[2472]: I0209 19:04:10.124527 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-host-proc-sys-net\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.124697 kubelet[2472]: I0209 19:04:10.124594 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cilium-run\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.124697 kubelet[2472]: I0209 19:04:10.124623 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-bpf-maps\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.124943 kubelet[2472]: I0209 19:04:10.124852 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-hostproc\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.125007 kubelet[2472]: I0209 19:04:10.124948 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-etc-cni-netd\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.125060 kubelet[2472]: I0209 19:04:10.125054 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cilium-config-path\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.125107 kubelet[2472]: I0209 19:04:10.125084 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-hubble-tls\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.125827 kubelet[2472]: I0209 19:04:10.125164 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-957zj\" (UniqueName: 
\"kubernetes.io/projected/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-kube-api-access-957zj\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.125827 kubelet[2472]: I0209 19:04:10.125256 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/157a6499-b703-482e-8399-e600614b8cef-lib-modules\") pod \"kube-proxy-9rtcq\" (UID: \"157a6499-b703-482e-8399-e600614b8cef\") " pod="kube-system/kube-proxy-9rtcq" Feb 9 19:04:10.125827 kubelet[2472]: I0209 19:04:10.125362 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jwrq\" (UniqueName: \"kubernetes.io/projected/157a6499-b703-482e-8399-e600614b8cef-kube-api-access-2jwrq\") pod \"kube-proxy-9rtcq\" (UID: \"157a6499-b703-482e-8399-e600614b8cef\") " pod="kube-system/kube-proxy-9rtcq" Feb 9 19:04:10.125827 kubelet[2472]: I0209 19:04:10.125399 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/157a6499-b703-482e-8399-e600614b8cef-kube-proxy\") pod \"kube-proxy-9rtcq\" (UID: \"157a6499-b703-482e-8399-e600614b8cef\") " pod="kube-system/kube-proxy-9rtcq" Feb 9 19:04:10.125827 kubelet[2472]: I0209 19:04:10.125607 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-lib-modules\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.126097 kubelet[2472]: I0209 19:04:10.126025 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/157a6499-b703-482e-8399-e600614b8cef-xtables-lock\") pod \"kube-proxy-9rtcq\" (UID: \"157a6499-b703-482e-8399-e600614b8cef\") " pod="kube-system/kube-proxy-9rtcq" Feb 9 19:04:10.126189 kubelet[2472]: I0209 19:04:10.126172 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-host-proc-sys-kernel\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.126287 kubelet[2472]: I0209 19:04:10.126215 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-clustermesh-secrets\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.126287 kubelet[2472]: I0209 19:04:10.126283 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-xtables-lock\") pod \"cilium-x77gr\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " pod="kube-system/cilium-x77gr" Feb 9 19:04:10.227553 kubelet[2472]: I0209 19:04:10.227513 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl9j4\" (UniqueName: \"kubernetes.io/projected/4b86996d-34d0-490b-a81c-cbce9843b45a-kube-api-access-nl9j4\") pod \"cilium-operator-574c4bb98d-kghq2\" (UID: 
\"4b86996d-34d0-490b-a81c-cbce9843b45a\") " pod="kube-system/cilium-operator-574c4bb98d-kghq2" Feb 9 19:04:10.227880 kubelet[2472]: I0209 19:04:10.227861 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b86996d-34d0-490b-a81c-cbce9843b45a-cilium-config-path\") pod \"cilium-operator-574c4bb98d-kghq2\" (UID: \"4b86996d-34d0-490b-a81c-cbce9843b45a\") " pod="kube-system/cilium-operator-574c4bb98d-kghq2" Feb 9 19:04:10.296805 env[1356]: time="2024-02-09T19:04:10.296683215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rtcq,Uid:157a6499-b703-482e-8399-e600614b8cef,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:10.303296 env[1356]: time="2024-02-09T19:04:10.303257900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x77gr,Uid:f5119a16-45fd-41e9-abc9-5a69ccc9dcea,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:10.350271 env[1356]: time="2024-02-09T19:04:10.350186805Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:10.351044 env[1356]: time="2024-02-09T19:04:10.350669311Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:10.351171 env[1356]: time="2024-02-09T19:04:10.351054916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:10.351299 env[1356]: time="2024-02-09T19:04:10.351244718Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/bcf194d95e63e8361871496292ef5229021ff1cd8b1ef6bfed20e72a3b18ac77 pid=2563 runtime=io.containerd.runc.v2 Feb 9 19:04:10.356494 env[1356]: time="2024-02-09T19:04:10.356416285Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:10.356494 env[1356]: time="2024-02-09T19:04:10.356457086Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:10.356727 env[1356]: time="2024-02-09T19:04:10.356481486Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:10.357262 env[1356]: time="2024-02-09T19:04:10.357217895Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc pid=2562 runtime=io.containerd.runc.v2 Feb 9 19:04:10.367797 systemd[1]: Started cri-containerd-bcf194d95e63e8361871496292ef5229021ff1cd8b1ef6bfed20e72a3b18ac77.scope. Feb 9 19:04:10.381754 systemd[1]: Started cri-containerd-e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc.scope. 
Feb 9 19:04:10.420303 env[1356]: time="2024-02-09T19:04:10.418843089Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9rtcq,Uid:157a6499-b703-482e-8399-e600614b8cef,Namespace:kube-system,Attempt:0,} returns sandbox id \"bcf194d95e63e8361871496292ef5229021ff1cd8b1ef6bfed20e72a3b18ac77\"" Feb 9 19:04:10.423391 env[1356]: time="2024-02-09T19:04:10.423351147Z" level=info msg="CreateContainer within sandbox \"bcf194d95e63e8361871496292ef5229021ff1cd8b1ef6bfed20e72a3b18ac77\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:04:10.424922 env[1356]: time="2024-02-09T19:04:10.424879767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x77gr,Uid:f5119a16-45fd-41e9-abc9-5a69ccc9dcea,Namespace:kube-system,Attempt:0,} returns sandbox id \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\"" Feb 9 19:04:10.427655 env[1356]: time="2024-02-09T19:04:10.427608302Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:04:10.429327 env[1356]: time="2024-02-09T19:04:10.429283324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-kghq2,Uid:4b86996d-34d0-490b-a81c-cbce9843b45a,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:10.479568 env[1356]: time="2024-02-09T19:04:10.479502571Z" level=info msg="CreateContainer within sandbox \"bcf194d95e63e8361871496292ef5229021ff1cd8b1ef6bfed20e72a3b18ac77\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b1938f8702d88bb05f7156e7ed8d446b916379be7ee53657ee257e6461896899\"" Feb 9 19:04:10.480830 env[1356]: time="2024-02-09T19:04:10.480798388Z" level=info msg="StartContainer for \"b1938f8702d88bb05f7156e7ed8d446b916379be7ee53657ee257e6461896899\"" Feb 9 19:04:10.485013 env[1356]: time="2024-02-09T19:04:10.484962941Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:10.485358 env[1356]: time="2024-02-09T19:04:10.485335146Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:10.485470 env[1356]: time="2024-02-09T19:04:10.485448248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:10.485797 env[1356]: time="2024-02-09T19:04:10.485747751Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e pid=2640 runtime=io.containerd.runc.v2 Feb 9 19:04:10.505039 systemd[1]: Started cri-containerd-b1938f8702d88bb05f7156e7ed8d446b916379be7ee53657ee257e6461896899.scope. Feb 9 19:04:10.510884 systemd[1]: Started cri-containerd-3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e.scope. 
Feb 9 19:04:10.586575 env[1356]: time="2024-02-09T19:04:10.586443149Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-574c4bb98d-kghq2,Uid:4b86996d-34d0-490b-a81c-cbce9843b45a,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e\"" Feb 9 19:04:10.599049 env[1356]: time="2024-02-09T19:04:10.598996311Z" level=info msg="StartContainer for \"b1938f8702d88bb05f7156e7ed8d446b916379be7ee53657ee257e6461896899\" returns successfully" Feb 9 19:04:14.617286 kubelet[2472]: I0209 19:04:14.617258 2472 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-9rtcq" podStartSLOduration=5.6172161670000005 podCreationTimestamp="2024-02-09 19:04:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:10.675474596 +0000 UTC m=+16.270131144" watchObservedRunningTime="2024-02-09 19:04:14.617216167 +0000 UTC m=+20.211872815" Feb 9 19:04:16.054200 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1900175797.mount: Deactivated successfully. Feb 9 19:04:18.701237 env[1356]: time="2024-02-09T19:04:18.701181086Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:18.708257 env[1356]: time="2024-02-09T19:04:18.708197761Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:18.714260 env[1356]: time="2024-02-09T19:04:18.714208026Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:18.714949 env[1356]: time="2024-02-09T19:04:18.714912334Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:04:18.716485 env[1356]: time="2024-02-09T19:04:18.716447150Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:04:18.719118 env[1356]: time="2024-02-09T19:04:18.719087079Z" level=info msg="CreateContainer within sandbox \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:04:18.753138 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1389036059.mount: Deactivated successfully. 
Feb 9 19:04:18.762752 env[1356]: time="2024-02-09T19:04:18.762709849Z" level=info msg="CreateContainer within sandbox \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41\"" Feb 9 19:04:18.765290 env[1356]: time="2024-02-09T19:04:18.763148553Z" level=info msg="StartContainer for \"bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41\"" Feb 9 19:04:18.786315 systemd[1]: Started cri-containerd-bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41.scope. Feb 9 19:04:18.813569 env[1356]: time="2024-02-09T19:04:18.812808288Z" level=info msg="StartContainer for \"bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41\" returns successfully" Feb 9 19:04:18.819625 systemd[1]: cri-containerd-bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41.scope: Deactivated successfully. Feb 9 19:04:19.747755 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41-rootfs.mount: Deactivated successfully. Feb 9 19:04:22.527434 env[1356]: time="2024-02-09T19:04:22.527349904Z" level=info msg="shim disconnected" id=bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41 Feb 9 19:04:22.528007 env[1356]: time="2024-02-09T19:04:22.527427505Z" level=warning msg="cleaning up after shim disconnected" id=bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41 namespace=k8s.io Feb 9 19:04:22.528007 env[1356]: time="2024-02-09T19:04:22.527456605Z" level=info msg="cleaning up dead shim" Feb 9 19:04:22.536384 env[1356]: time="2024-02-09T19:04:22.536341493Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2882 runtime=io.containerd.runc.v2\n" Feb 9 19:04:22.693570 env[1356]: time="2024-02-09T19:04:22.693498249Z" level=info msg="CreateContainer within sandbox \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:04:22.739649 env[1356]: time="2024-02-09T19:04:22.739599206Z" level=info msg="CreateContainer within sandbox \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9\"" Feb 9 19:04:22.742202 env[1356]: time="2024-02-09T19:04:22.740270612Z" level=info msg="StartContainer for \"cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9\"" Feb 9 19:04:22.766982 systemd[1]: Started cri-containerd-cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9.scope. Feb 9 19:04:22.807958 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:04:22.808979 env[1356]: time="2024-02-09T19:04:22.808293286Z" level=info msg="StartContainer for \"cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9\" returns successfully" Feb 9 19:04:22.809772 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:04:22.810022 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:04:22.811830 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:04:22.821699 systemd[1]: cri-containerd-cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9.scope: Deactivated successfully. Feb 9 19:04:22.823188 systemd[1]: Finished systemd-sysctl.service. 
Feb 9 19:04:22.850091 env[1356]: time="2024-02-09T19:04:22.850042199Z" level=info msg="shim disconnected" id=cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9 Feb 9 19:04:22.850091 env[1356]: time="2024-02-09T19:04:22.850090700Z" level=warning msg="cleaning up after shim disconnected" id=cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9 namespace=k8s.io Feb 9 19:04:22.850373 env[1356]: time="2024-02-09T19:04:22.850101500Z" level=info msg="cleaning up dead shim" Feb 9 19:04:22.857767 env[1356]: time="2024-02-09T19:04:22.857727775Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2948 runtime=io.containerd.runc.v2\n" Feb 9 19:04:23.715485 env[1356]: time="2024-02-09T19:04:23.715437824Z" level=info msg="CreateContainer within sandbox \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:04:23.727699 systemd[1]: run-containerd-runc-k8s.io-cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9-runc.MSCHLW.mount: Deactivated successfully. Feb 9 19:04:23.727832 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9-rootfs.mount: Deactivated successfully. Feb 9 19:04:23.763560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1528874872.mount: Deactivated successfully. Feb 9 19:04:23.769973 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3475839319.mount: Deactivated successfully. Feb 9 19:04:23.783529 env[1356]: time="2024-02-09T19:04:23.783477584Z" level=info msg="CreateContainer within sandbox \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491\"" Feb 9 19:04:23.787517 env[1356]: time="2024-02-09T19:04:23.787484423Z" level=info msg="StartContainer for \"813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491\"" Feb 9 19:04:23.822620 systemd[1]: Started cri-containerd-813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491.scope. Feb 9 19:04:23.865237 systemd[1]: cri-containerd-813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491.scope: Deactivated successfully. 
Feb 9 19:04:23.866821 env[1356]: time="2024-02-09T19:04:23.866769092Z" level=info msg="StartContainer for \"813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491\" returns successfully" Feb 9 19:04:24.293395 env[1356]: time="2024-02-09T19:04:24.293337573Z" level=info msg="shim disconnected" id=813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491 Feb 9 19:04:24.293860 env[1356]: time="2024-02-09T19:04:24.293830578Z" level=warning msg="cleaning up after shim disconnected" id=813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491 namespace=k8s.io Feb 9 19:04:24.293989 env[1356]: time="2024-02-09T19:04:24.293970479Z" level=info msg="cleaning up dead shim" Feb 9 19:04:24.320731 env[1356]: time="2024-02-09T19:04:24.320685133Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3004 runtime=io.containerd.runc.v2\n" Feb 9 19:04:24.410388 env[1356]: time="2024-02-09T19:04:24.410334185Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:24.415409 env[1356]: time="2024-02-09T19:04:24.415359933Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:24.418079 env[1356]: time="2024-02-09T19:04:24.418043959Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:04:24.418477 env[1356]: time="2024-02-09T19:04:24.418446662Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:04:24.421173 env[1356]: time="2024-02-09T19:04:24.421132588Z" level=info msg="CreateContainer within sandbox \"3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:04:24.445104 env[1356]: time="2024-02-09T19:04:24.445063815Z" level=info msg="CreateContainer within sandbox \"3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\"" Feb 9 19:04:24.447664 env[1356]: time="2024-02-09T19:04:24.445616121Z" level=info msg="StartContainer for \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\"" Feb 9 19:04:24.465061 systemd[1]: Started cri-containerd-ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519.scope. 
Feb 9 19:04:24.500208 env[1356]: time="2024-02-09T19:04:24.500147139Z" level=info msg="StartContainer for \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\" returns successfully" Feb 9 19:04:24.707025 env[1356]: time="2024-02-09T19:04:24.706973405Z" level=info msg="CreateContainer within sandbox \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:04:24.718597 kubelet[2472]: I0209 19:04:24.718571 2472 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-574c4bb98d-kghq2" podStartSLOduration=0.890652955 podCreationTimestamp="2024-02-09 19:04:10 +0000 UTC" firstStartedPulling="2024-02-09 19:04:10.590980607 +0000 UTC m=+16.185637155" lastFinishedPulling="2024-02-09 19:04:24.418841466 +0000 UTC m=+30.013498014" observedRunningTime="2024-02-09 19:04:24.717405504 +0000 UTC m=+30.312062052" watchObservedRunningTime="2024-02-09 19:04:24.718513814 +0000 UTC m=+30.313170462" Feb 9 19:04:24.752110 env[1356]: time="2024-02-09T19:04:24.752063033Z" level=info msg="CreateContainer within sandbox \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace\"" Feb 9 19:04:24.756630 env[1356]: time="2024-02-09T19:04:24.754197454Z" level=info msg="StartContainer for \"e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace\"" Feb 9 19:04:24.808229 systemd[1]: Started cri-containerd-e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace.scope. Feb 9 19:04:24.843354 systemd[1]: cri-containerd-e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace.scope: Deactivated successfully. Feb 9 19:04:24.846804 env[1356]: time="2024-02-09T19:04:24.846732733Z" level=info msg="StartContainer for \"e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace\" returns successfully" Feb 9 19:04:24.891200 env[1356]: time="2024-02-09T19:04:24.891146855Z" level=info msg="shim disconnected" id=e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace Feb 9 19:04:24.891576 env[1356]: time="2024-02-09T19:04:24.891540059Z" level=warning msg="cleaning up after shim disconnected" id=e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace namespace=k8s.io Feb 9 19:04:24.891699 env[1356]: time="2024-02-09T19:04:24.891682660Z" level=info msg="cleaning up dead shim" Feb 9 19:04:24.903899 env[1356]: time="2024-02-09T19:04:24.903859776Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:04:24Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3095 runtime=io.containerd.runc.v2\n" Feb 9 19:04:25.710717 env[1356]: time="2024-02-09T19:04:25.710652211Z" level=info msg="CreateContainer within sandbox \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:04:25.730011 systemd[1]: run-containerd-runc-k8s.io-e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace-runc.0gTLip.mount: Deactivated successfully. Feb 9 19:04:25.730132 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace-rootfs.mount: Deactivated successfully. 
Feb 9 19:04:25.748758 env[1356]: time="2024-02-09T19:04:25.748706765Z" level=info msg="CreateContainer within sandbox \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\"" Feb 9 19:04:25.751561 env[1356]: time="2024-02-09T19:04:25.749234770Z" level=info msg="StartContainer for \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\"" Feb 9 19:04:25.777219 systemd[1]: Started cri-containerd-08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92.scope. Feb 9 19:04:25.818129 env[1356]: time="2024-02-09T19:04:25.818076012Z" level=info msg="StartContainer for \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\" returns successfully" Feb 9 19:04:25.992166 kubelet[2472]: I0209 19:04:25.992055 2472 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:04:26.022295 kubelet[2472]: I0209 19:04:26.022248 2472 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:04:26.028788 systemd[1]: Created slice kubepods-burstable-podc0f2afdd_e965_4190_965f_06887a9fd7d5.slice. Feb 9 19:04:26.031511 kubelet[2472]: W0209 19:04:26.031430 2472 reflector.go:533] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.2-a-54659eee1f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-54659eee1f' and this object Feb 9 19:04:26.031511 kubelet[2472]: E0209 19:04:26.031471 2472 reflector.go:148] object-"kube-system"/"coredns": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:ci-3510.3.2-a-54659eee1f" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'ci-3510.3.2-a-54659eee1f' and this object Feb 9 19:04:26.034907 kubelet[2472]: I0209 19:04:26.034540 2472 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:04:26.042657 systemd[1]: Created slice kubepods-burstable-pod318d06e1_1537_40df_be13_f003f5817dff.slice. 
Feb 9 19:04:26.044024 kubelet[2472]: I0209 19:04:26.043995 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/318d06e1-1537-40df-be13-f003f5817dff-config-volume\") pod \"coredns-5d78c9869d-ftdsc\" (UID: \"318d06e1-1537-40df-be13-f003f5817dff\") " pod="kube-system/coredns-5d78c9869d-ftdsc" Feb 9 19:04:26.044286 kubelet[2472]: I0209 19:04:26.044259 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c0f2afdd-e965-4190-965f-06887a9fd7d5-config-volume\") pod \"coredns-5d78c9869d-s5b2q\" (UID: \"c0f2afdd-e965-4190-965f-06887a9fd7d5\") " pod="kube-system/coredns-5d78c9869d-s5b2q" Feb 9 19:04:26.044445 kubelet[2472]: I0209 19:04:26.044433 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nlxwr\" (UniqueName: \"kubernetes.io/projected/c0f2afdd-e965-4190-965f-06887a9fd7d5-kube-api-access-nlxwr\") pod \"coredns-5d78c9869d-s5b2q\" (UID: \"c0f2afdd-e965-4190-965f-06887a9fd7d5\") " pod="kube-system/coredns-5d78c9869d-s5b2q" Feb 9 19:04:26.145394 kubelet[2472]: I0209 19:04:26.145345 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r8zq5\" (UniqueName: \"kubernetes.io/projected/318d06e1-1537-40df-be13-f003f5817dff-kube-api-access-r8zq5\") pod \"coredns-5d78c9869d-ftdsc\" (UID: \"318d06e1-1537-40df-be13-f003f5817dff\") " pod="kube-system/coredns-5d78c9869d-ftdsc" Feb 9 19:04:26.726054 kubelet[2472]: I0209 19:04:26.726012 2472 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x77gr" podStartSLOduration=9.437471492 podCreationTimestamp="2024-02-09 19:04:09 +0000 UTC" firstStartedPulling="2024-02-09 19:04:10.426932194 +0000 UTC m=+16.021588842" lastFinishedPulling="2024-02-09 19:04:18.715432139 +0000 UTC m=+24.310088787" observedRunningTime="2024-02-09 19:04:26.724837127 +0000 UTC m=+32.319493675" watchObservedRunningTime="2024-02-09 19:04:26.725971437 +0000 UTC m=+32.320628085" Feb 9 19:04:26.733168 systemd[1]: run-containerd-runc-k8s.io-08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92-runc.pe5Mrf.mount: Deactivated successfully. 
Feb 9 19:04:26.955493 env[1356]: time="2024-02-09T19:04:26.955441833Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-ftdsc,Uid:318d06e1-1537-40df-be13-f003f5817dff,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:27.235756 env[1356]: time="2024-02-09T19:04:27.235701152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-s5b2q,Uid:c0f2afdd-e965-4190-965f-06887a9fd7d5,Namespace:kube-system,Attempt:0,}" Feb 9 19:04:28.122790 systemd-networkd[1492]: cilium_host: Link UP Feb 9 19:04:28.133424 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:04:28.133598 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:04:28.133729 systemd-networkd[1492]: cilium_net: Link UP Feb 9 19:04:28.133967 systemd-networkd[1492]: cilium_net: Gained carrier Feb 9 19:04:28.134139 systemd-networkd[1492]: cilium_host: Gained carrier Feb 9 19:04:28.280655 systemd-networkd[1492]: cilium_host: Gained IPv6LL Feb 9 19:04:28.308846 systemd-networkd[1492]: cilium_vxlan: Link UP Feb 9 19:04:28.308859 systemd-networkd[1492]: cilium_vxlan: Gained carrier Feb 9 19:04:28.554616 kernel: NET: Registered PF_ALG protocol family Feb 9 19:04:29.120786 systemd-networkd[1492]: cilium_net: Gained IPv6LL Feb 9 19:04:29.216230 systemd-networkd[1492]: lxc_health: Link UP Feb 9 19:04:29.240149 systemd-networkd[1492]: lxc_health: Gained carrier Feb 9 19:04:29.243051 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:04:29.522809 systemd-networkd[1492]: lxcf832c9c06674: Link UP Feb 9 19:04:29.531582 kernel: eth0: renamed from tmpd4a70 Feb 9 19:04:29.542723 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcf832c9c06674: link becomes ready Feb 9 19:04:29.542398 systemd-networkd[1492]: lxcf832c9c06674: Gained carrier Feb 9 19:04:29.696711 systemd-networkd[1492]: cilium_vxlan: Gained IPv6LL Feb 9 19:04:29.786895 systemd-networkd[1492]: lxcbb80ad6db249: Link UP Feb 9 19:04:29.793580 kernel: eth0: renamed from tmpe3528 Feb 9 19:04:29.802596 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcbb80ad6db249: link becomes ready Feb 9 19:04:29.802805 systemd-networkd[1492]: lxcbb80ad6db249: Gained carrier Feb 9 19:04:31.040691 systemd-networkd[1492]: lxcbb80ad6db249: Gained IPv6LL Feb 9 19:04:31.232731 systemd-networkd[1492]: lxc_health: Gained IPv6LL Feb 9 19:04:31.552710 systemd-networkd[1492]: lxcf832c9c06674: Gained IPv6LL Feb 9 19:04:33.415633 env[1356]: time="2024-02-09T19:04:33.415531237Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:33.416322 env[1356]: time="2024-02-09T19:04:33.416274643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:33.416501 env[1356]: time="2024-02-09T19:04:33.416462045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:33.416875 env[1356]: time="2024-02-09T19:04:33.416827347Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/e35280213018b56095f1ef501021dac506d845068207b0f98b294c68c8e64aee pid=3642 runtime=io.containerd.runc.v2 Feb 9 19:04:33.421268 env[1356]: time="2024-02-09T19:04:33.421198282Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:04:33.421443 env[1356]: time="2024-02-09T19:04:33.421412584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:04:33.421599 env[1356]: time="2024-02-09T19:04:33.421569885Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:04:33.422021 env[1356]: time="2024-02-09T19:04:33.421986589Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/d4a70ac019533fae47f749bb2749e5809a65e0ddb7c9fc7a1320bdb827e24980 pid=3651 runtime=io.containerd.runc.v2 Feb 9 19:04:33.457408 systemd[1]: Started cri-containerd-d4a70ac019533fae47f749bb2749e5809a65e0ddb7c9fc7a1320bdb827e24980.scope. Feb 9 19:04:33.478333 systemd[1]: Started cri-containerd-e35280213018b56095f1ef501021dac506d845068207b0f98b294c68c8e64aee.scope. Feb 9 19:04:33.563885 env[1356]: time="2024-02-09T19:04:33.563835524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-ftdsc,Uid:318d06e1-1537-40df-be13-f003f5817dff,Namespace:kube-system,Attempt:0,} returns sandbox id \"d4a70ac019533fae47f749bb2749e5809a65e0ddb7c9fc7a1320bdb827e24980\"" Feb 9 19:04:33.569891 env[1356]: time="2024-02-09T19:04:33.569844172Z" level=info msg="CreateContainer within sandbox \"d4a70ac019533fae47f749bb2749e5809a65e0ddb7c9fc7a1320bdb827e24980\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:04:33.593238 env[1356]: time="2024-02-09T19:04:33.593187359Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-5d78c9869d-s5b2q,Uid:c0f2afdd-e965-4190-965f-06887a9fd7d5,Namespace:kube-system,Attempt:0,} returns sandbox id \"e35280213018b56095f1ef501021dac506d845068207b0f98b294c68c8e64aee\"" Feb 9 19:04:33.596615 env[1356]: time="2024-02-09T19:04:33.596573986Z" level=info msg="CreateContainer within sandbox \"e35280213018b56095f1ef501021dac506d845068207b0f98b294c68c8e64aee\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Feb 9 19:04:33.608404 env[1356]: time="2024-02-09T19:04:33.608339580Z" level=info msg="CreateContainer within sandbox \"d4a70ac019533fae47f749bb2749e5809a65e0ddb7c9fc7a1320bdb827e24980\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4935680f05d2ab560a1b97421c833cd046087c169ded5a4cb79a1cd82d1be8ec\"" Feb 9 19:04:33.608935 env[1356]: time="2024-02-09T19:04:33.608892585Z" level=info msg="StartContainer for \"4935680f05d2ab560a1b97421c833cd046087c169ded5a4cb79a1cd82d1be8ec\"" Feb 9 19:04:33.633794 systemd[1]: Started cri-containerd-4935680f05d2ab560a1b97421c833cd046087c169ded5a4cb79a1cd82d1be8ec.scope. Feb 9 19:04:33.643568 env[1356]: time="2024-02-09T19:04:33.643497962Z" level=info msg="CreateContainer within sandbox \"e35280213018b56095f1ef501021dac506d845068207b0f98b294c68c8e64aee\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"8185c56b6588c1ed67b7c7b8b936aa71b5de4fb933f5cb569bd9066f0bef54e7\"" Feb 9 19:04:33.644436 env[1356]: time="2024-02-09T19:04:33.644393669Z" level=info msg="StartContainer for \"8185c56b6588c1ed67b7c7b8b936aa71b5de4fb933f5cb569bd9066f0bef54e7\"" Feb 9 19:04:33.675908 systemd[1]: Started cri-containerd-8185c56b6588c1ed67b7c7b8b936aa71b5de4fb933f5cb569bd9066f0bef54e7.scope. 
Feb 9 19:04:33.702542 env[1356]: time="2024-02-09T19:04:33.702490934Z" level=info msg="StartContainer for \"4935680f05d2ab560a1b97421c833cd046087c169ded5a4cb79a1cd82d1be8ec\" returns successfully" Feb 9 19:04:33.763239 env[1356]: time="2024-02-09T19:04:33.763179220Z" level=info msg="StartContainer for \"8185c56b6588c1ed67b7c7b8b936aa71b5de4fb933f5cb569bd9066f0bef54e7\" returns successfully" Feb 9 19:04:33.766871 kubelet[2472]: I0209 19:04:33.766623 2472 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-ftdsc" podStartSLOduration=23.766575447 podCreationTimestamp="2024-02-09 19:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:33.765882541 +0000 UTC m=+39.360539089" watchObservedRunningTime="2024-02-09 19:04:33.766575447 +0000 UTC m=+39.361231995" Feb 9 19:04:34.427064 systemd[1]: run-containerd-runc-k8s.io-e35280213018b56095f1ef501021dac506d845068207b0f98b294c68c8e64aee-runc.LJvbtp.mount: Deactivated successfully. Feb 9 19:04:34.747780 kubelet[2472]: I0209 19:04:34.747743 2472 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-5d78c9869d-s5b2q" podStartSLOduration=24.747686094 podCreationTimestamp="2024-02-09 19:04:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:04:34.747354491 +0000 UTC m=+40.342011039" watchObservedRunningTime="2024-02-09 19:04:34.747686094 +0000 UTC m=+40.342342742" Feb 9 19:05:24.288788 systemd[1]: Started sshd@5-10.200.8.4:22-10.200.12.6:48726.service. Feb 9 19:05:24.960115 sshd[3800]: Accepted publickey for core from 10.200.12.6 port 48726 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:24.960728 sshd[3800]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:24.965599 systemd[1]: Started session-8.scope. Feb 9 19:05:24.966122 systemd-logind[1334]: New session 8 of user core. Feb 9 19:05:25.484502 sshd[3800]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:25.487771 systemd[1]: sshd@5-10.200.8.4:22-10.200.12.6:48726.service: Deactivated successfully. Feb 9 19:05:25.488880 systemd[1]: session-8.scope: Deactivated successfully. Feb 9 19:05:25.489580 systemd-logind[1334]: Session 8 logged out. Waiting for processes to exit. Feb 9 19:05:25.490362 systemd-logind[1334]: Removed session 8. Feb 9 19:05:30.629436 systemd[1]: Started sshd@6-10.200.8.4:22-10.200.12.6:44888.service. Feb 9 19:05:31.361994 sshd[3815]: Accepted publickey for core from 10.200.12.6 port 44888 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:31.363486 sshd[3815]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:31.368492 systemd[1]: Started session-9.scope. Feb 9 19:05:31.368972 systemd-logind[1334]: New session 9 of user core. Feb 9 19:05:31.853269 sshd[3815]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:31.856355 systemd[1]: sshd@6-10.200.8.4:22-10.200.12.6:44888.service: Deactivated successfully. Feb 9 19:05:31.857240 systemd[1]: session-9.scope: Deactivated successfully. Feb 9 19:05:31.857925 systemd-logind[1334]: Session 9 logged out. Waiting for processes to exit. Feb 9 19:05:31.858817 systemd-logind[1334]: Removed session 9. Feb 9 19:05:36.959854 systemd[1]: Started sshd@7-10.200.8.4:22-10.200.12.6:44902.service. 
Feb 9 19:05:37.585012 sshd[3827]: Accepted publickey for core from 10.200.12.6 port 44902 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:37.587036 sshd[3827]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:37.591967 systemd-logind[1334]: New session 10 of user core. Feb 9 19:05:37.592489 systemd[1]: Started session-10.scope. Feb 9 19:05:38.074223 sshd[3827]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:38.077257 systemd[1]: sshd@7-10.200.8.4:22-10.200.12.6:44902.service: Deactivated successfully. Feb 9 19:05:38.078171 systemd[1]: session-10.scope: Deactivated successfully. Feb 9 19:05:38.078940 systemd-logind[1334]: Session 10 logged out. Waiting for processes to exit. Feb 9 19:05:38.079759 systemd-logind[1334]: Removed session 10. Feb 9 19:05:43.180278 systemd[1]: Started sshd@8-10.200.8.4:22-10.200.12.6:57394.service. Feb 9 19:05:43.793191 sshd[3841]: Accepted publickey for core from 10.200.12.6 port 57394 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:43.794611 sshd[3841]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:43.799388 systemd-logind[1334]: New session 11 of user core. Feb 9 19:05:43.799938 systemd[1]: Started session-11.scope. Feb 9 19:05:44.287164 sshd[3841]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:44.290816 systemd[1]: sshd@8-10.200.8.4:22-10.200.12.6:57394.service: Deactivated successfully. Feb 9 19:05:44.291890 systemd[1]: session-11.scope: Deactivated successfully. Feb 9 19:05:44.292801 systemd-logind[1334]: Session 11 logged out. Waiting for processes to exit. Feb 9 19:05:44.293806 systemd-logind[1334]: Removed session 11. Feb 9 19:05:49.393141 systemd[1]: Started sshd@9-10.200.8.4:22-10.200.12.6:58616.service. Feb 9 19:05:50.008162 sshd[3853]: Accepted publickey for core from 10.200.12.6 port 58616 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:50.009859 sshd[3853]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:50.015624 systemd[1]: Started session-12.scope. Feb 9 19:05:50.015654 systemd-logind[1334]: New session 12 of user core. Feb 9 19:05:50.510883 sshd[3853]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:50.514282 systemd[1]: sshd@9-10.200.8.4:22-10.200.12.6:58616.service: Deactivated successfully. Feb 9 19:05:50.515424 systemd[1]: session-12.scope: Deactivated successfully. Feb 9 19:05:50.516294 systemd-logind[1334]: Session 12 logged out. Waiting for processes to exit. Feb 9 19:05:50.517140 systemd-logind[1334]: Removed session 12. Feb 9 19:05:50.618703 systemd[1]: Started sshd@10-10.200.8.4:22-10.200.12.6:58628.service. Feb 9 19:05:51.253585 sshd[3865]: Accepted publickey for core from 10.200.12.6 port 58628 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:51.255306 sshd[3865]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:51.260751 systemd[1]: Started session-13.scope. Feb 9 19:05:51.261412 systemd-logind[1334]: New session 13 of user core. Feb 9 19:05:52.335494 sshd[3865]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:52.338772 systemd[1]: sshd@10-10.200.8.4:22-10.200.12.6:58628.service: Deactivated successfully. Feb 9 19:05:52.339700 systemd[1]: session-13.scope: Deactivated successfully. Feb 9 19:05:52.340386 systemd-logind[1334]: Session 13 logged out. Waiting for processes to exit. 
Feb 9 19:05:52.341246 systemd-logind[1334]: Removed session 13. Feb 9 19:05:52.440955 systemd[1]: Started sshd@11-10.200.8.4:22-10.200.12.6:58634.service. Feb 9 19:05:53.053366 sshd[3875]: Accepted publickey for core from 10.200.12.6 port 58634 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:53.054793 sshd[3875]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:53.059655 systemd-logind[1334]: New session 14 of user core. Feb 9 19:05:53.059691 systemd[1]: Started session-14.scope. Feb 9 19:05:53.542441 sshd[3875]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:53.545295 systemd[1]: sshd@11-10.200.8.4:22-10.200.12.6:58634.service: Deactivated successfully. Feb 9 19:05:53.546460 systemd[1]: session-14.scope: Deactivated successfully. Feb 9 19:05:53.547201 systemd-logind[1334]: Session 14 logged out. Waiting for processes to exit. Feb 9 19:05:53.548144 systemd-logind[1334]: Removed session 14. Feb 9 19:05:58.650229 systemd[1]: Started sshd@12-10.200.8.4:22-10.200.12.6:58200.service. Feb 9 19:05:59.270699 sshd[3888]: Accepted publickey for core from 10.200.12.6 port 58200 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:05:59.272347 sshd[3888]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:05:59.277617 systemd[1]: Started session-15.scope. Feb 9 19:05:59.278204 systemd-logind[1334]: New session 15 of user core. Feb 9 19:05:59.768678 sshd[3888]: pam_unix(sshd:session): session closed for user core Feb 9 19:05:59.771778 systemd[1]: sshd@12-10.200.8.4:22-10.200.12.6:58200.service: Deactivated successfully. Feb 9 19:05:59.772775 systemd[1]: session-15.scope: Deactivated successfully. Feb 9 19:05:59.773502 systemd-logind[1334]: Session 15 logged out. Waiting for processes to exit. Feb 9 19:05:59.774327 systemd-logind[1334]: Removed session 15. Feb 9 19:05:59.872638 systemd[1]: Started sshd@13-10.200.8.4:22-10.200.12.6:58214.service. Feb 9 19:06:00.492181 sshd[3900]: Accepted publickey for core from 10.200.12.6 port 58214 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:00.493892 sshd[3900]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:00.499432 systemd-logind[1334]: New session 16 of user core. Feb 9 19:06:00.500562 systemd[1]: Started session-16.scope. Feb 9 19:06:01.079569 sshd[3900]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:01.082745 systemd[1]: sshd@13-10.200.8.4:22-10.200.12.6:58214.service: Deactivated successfully. Feb 9 19:06:01.083711 systemd[1]: session-16.scope: Deactivated successfully. Feb 9 19:06:01.084391 systemd-logind[1334]: Session 16 logged out. Waiting for processes to exit. Feb 9 19:06:01.085260 systemd-logind[1334]: Removed session 16. Feb 9 19:06:01.185952 systemd[1]: Started sshd@14-10.200.8.4:22-10.200.12.6:58218.service. Feb 9 19:06:01.808369 sshd[3909]: Accepted publickey for core from 10.200.12.6 port 58218 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:01.809903 sshd[3909]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:01.814976 systemd[1]: Started session-17.scope. Feb 9 19:06:01.815431 systemd-logind[1334]: New session 17 of user core. Feb 9 19:06:03.301373 sshd[3909]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:03.304590 systemd[1]: sshd@14-10.200.8.4:22-10.200.12.6:58218.service: Deactivated successfully. 
Feb 9 19:06:03.305582 systemd[1]: session-17.scope: Deactivated successfully. Feb 9 19:06:03.306352 systemd-logind[1334]: Session 17 logged out. Waiting for processes to exit. Feb 9 19:06:03.307737 systemd-logind[1334]: Removed session 17. Feb 9 19:06:03.406612 systemd[1]: Started sshd@15-10.200.8.4:22-10.200.12.6:58220.service. Feb 9 19:06:04.018947 sshd[3926]: Accepted publickey for core from 10.200.12.6 port 58220 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:04.020986 sshd[3926]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:04.026319 systemd-logind[1334]: New session 18 of user core. Feb 9 19:06:04.027108 systemd[1]: Started session-18.scope. Feb 9 19:06:04.721532 sshd[3926]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:04.725031 systemd[1]: sshd@15-10.200.8.4:22-10.200.12.6:58220.service: Deactivated successfully. Feb 9 19:06:04.726191 systemd[1]: session-18.scope: Deactivated successfully. Feb 9 19:06:04.727001 systemd-logind[1334]: Session 18 logged out. Waiting for processes to exit. Feb 9 19:06:04.728027 systemd-logind[1334]: Removed session 18. Feb 9 19:06:04.827445 systemd[1]: Started sshd@16-10.200.8.4:22-10.200.12.6:58232.service. Feb 9 19:06:05.449591 sshd[3938]: Accepted publickey for core from 10.200.12.6 port 58232 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:05.450989 sshd[3938]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:05.455906 systemd[1]: Started session-19.scope. Feb 9 19:06:05.456325 systemd-logind[1334]: New session 19 of user core. Feb 9 19:06:05.946948 sshd[3938]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:05.949823 systemd[1]: sshd@16-10.200.8.4:22-10.200.12.6:58232.service: Deactivated successfully. Feb 9 19:06:05.950931 systemd[1]: session-19.scope: Deactivated successfully. Feb 9 19:06:05.951750 systemd-logind[1334]: Session 19 logged out. Waiting for processes to exit. Feb 9 19:06:05.952613 systemd-logind[1334]: Removed session 19. Feb 9 19:06:11.054437 systemd[1]: Started sshd@17-10.200.8.4:22-10.200.12.6:50918.service. Feb 9 19:06:11.728841 sshd[3955]: Accepted publickey for core from 10.200.12.6 port 50918 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:11.730452 sshd[3955]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:11.735683 systemd-logind[1334]: New session 20 of user core. Feb 9 19:06:11.736569 systemd[1]: Started session-20.scope. Feb 9 19:06:12.225776 sshd[3955]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:12.229262 systemd[1]: sshd@17-10.200.8.4:22-10.200.12.6:50918.service: Deactivated successfully. Feb 9 19:06:12.230349 systemd[1]: session-20.scope: Deactivated successfully. Feb 9 19:06:12.231324 systemd-logind[1334]: Session 20 logged out. Waiting for processes to exit. Feb 9 19:06:12.232445 systemd-logind[1334]: Removed session 20. Feb 9 19:06:17.332767 systemd[1]: Started sshd@18-10.200.8.4:22-10.200.12.6:54154.service. Feb 9 19:06:17.945243 sshd[3967]: Accepted publickey for core from 10.200.12.6 port 54154 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:17.946842 sshd[3967]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:17.952621 systemd-logind[1334]: New session 21 of user core. Feb 9 19:06:17.953119 systemd[1]: Started session-21.scope. 
Feb 9 19:06:18.434964 sshd[3967]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:18.438220 systemd[1]: sshd@18-10.200.8.4:22-10.200.12.6:54154.service: Deactivated successfully. Feb 9 19:06:18.439161 systemd[1]: session-21.scope: Deactivated successfully. Feb 9 19:06:18.439843 systemd-logind[1334]: Session 21 logged out. Waiting for processes to exit. Feb 9 19:06:18.440694 systemd-logind[1334]: Removed session 21. Feb 9 19:06:23.541150 systemd[1]: Started sshd@19-10.200.8.4:22-10.200.12.6:54156.service. Feb 9 19:06:24.157390 sshd[3978]: Accepted publickey for core from 10.200.12.6 port 54156 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:24.158805 sshd[3978]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:24.163763 systemd[1]: Started session-22.scope. Feb 9 19:06:24.164351 systemd-logind[1334]: New session 22 of user core. Feb 9 19:06:24.649713 sshd[3978]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:24.652698 systemd[1]: sshd@19-10.200.8.4:22-10.200.12.6:54156.service: Deactivated successfully. Feb 9 19:06:24.653658 systemd[1]: session-22.scope: Deactivated successfully. Feb 9 19:06:24.654620 systemd-logind[1334]: Session 22 logged out. Waiting for processes to exit. Feb 9 19:06:24.655371 systemd-logind[1334]: Removed session 22. Feb 9 19:06:24.764365 systemd[1]: Started sshd@20-10.200.8.4:22-10.200.12.6:54164.service. Feb 9 19:06:25.382441 sshd[3990]: Accepted publickey for core from 10.200.12.6 port 54164 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:25.384068 sshd[3990]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:25.388612 systemd-logind[1334]: New session 23 of user core. Feb 9 19:06:25.389076 systemd[1]: Started session-23.scope. Feb 9 19:06:27.039493 env[1356]: time="2024-02-09T19:06:27.039445066Z" level=info msg="StopContainer for \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\" with timeout 30 (s)" Feb 9 19:06:27.040605 env[1356]: time="2024-02-09T19:06:27.040542287Z" level=info msg="Stop container \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\" with signal terminated" Feb 9 19:06:27.063924 systemd[1]: run-containerd-runc-k8s.io-08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92-runc.8kQl8S.mount: Deactivated successfully. Feb 9 19:06:27.064618 systemd[1]: cri-containerd-ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519.scope: Deactivated successfully. Feb 9 19:06:27.092610 env[1356]: time="2024-02-09T19:06:27.092402285Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:06:27.096477 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519-rootfs.mount: Deactivated successfully. 
Feb 9 19:06:27.102742 env[1356]: time="2024-02-09T19:06:27.102703883Z" level=info msg="StopContainer for \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\" with timeout 1 (s)" Feb 9 19:06:27.103030 env[1356]: time="2024-02-09T19:06:27.102944488Z" level=info msg="Stop container \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\" with signal terminated" Feb 9 19:06:27.111703 systemd-networkd[1492]: lxc_health: Link DOWN Feb 9 19:06:27.111712 systemd-networkd[1492]: lxc_health: Lost carrier Feb 9 19:06:27.135038 systemd[1]: cri-containerd-08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92.scope: Deactivated successfully. Feb 9 19:06:27.135334 systemd[1]: cri-containerd-08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92.scope: Consumed 7.229s CPU time. Feb 9 19:06:27.147068 env[1356]: time="2024-02-09T19:06:27.147003335Z" level=info msg="shim disconnected" id=ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519 Feb 9 19:06:27.147270 env[1356]: time="2024-02-09T19:06:27.147072837Z" level=warning msg="cleaning up after shim disconnected" id=ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519 namespace=k8s.io Feb 9 19:06:27.147270 env[1356]: time="2024-02-09T19:06:27.147088137Z" level=info msg="cleaning up dead shim" Feb 9 19:06:27.162670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92-rootfs.mount: Deactivated successfully. Feb 9 19:06:27.166253 env[1356]: time="2024-02-09T19:06:27.166207405Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4053 runtime=io.containerd.runc.v2\n" Feb 9 19:06:27.170765 env[1356]: time="2024-02-09T19:06:27.170716592Z" level=info msg="StopContainer for \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\" returns successfully" Feb 9 19:06:27.171611 env[1356]: time="2024-02-09T19:06:27.171572208Z" level=info msg="StopPodSandbox for \"3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e\"" Feb 9 19:06:27.174455 env[1356]: time="2024-02-09T19:06:27.171648510Z" level=info msg="Container to stop \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:27.173839 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e-shm.mount: Deactivated successfully. Feb 9 19:06:27.185769 env[1356]: time="2024-02-09T19:06:27.185428075Z" level=info msg="shim disconnected" id=08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92 Feb 9 19:06:27.185769 env[1356]: time="2024-02-09T19:06:27.185495276Z" level=warning msg="cleaning up after shim disconnected" id=08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92 namespace=k8s.io Feb 9 19:06:27.185769 env[1356]: time="2024-02-09T19:06:27.185508676Z" level=info msg="cleaning up dead shim" Feb 9 19:06:27.188275 systemd[1]: cri-containerd-3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e.scope: Deactivated successfully. 
Feb 9 19:06:27.202802 env[1356]: time="2024-02-09T19:06:27.202754308Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4080 runtime=io.containerd.runc.v2\n" Feb 9 19:06:27.206938 env[1356]: time="2024-02-09T19:06:27.206895088Z" level=info msg="StopContainer for \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\" returns successfully" Feb 9 19:06:27.207754 env[1356]: time="2024-02-09T19:06:27.207717604Z" level=info msg="StopPodSandbox for \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\"" Feb 9 19:06:27.207874 env[1356]: time="2024-02-09T19:06:27.207782605Z" level=info msg="Container to stop \"bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:27.207874 env[1356]: time="2024-02-09T19:06:27.207803905Z" level=info msg="Container to stop \"cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:27.207874 env[1356]: time="2024-02-09T19:06:27.207818706Z" level=info msg="Container to stop \"e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:27.207874 env[1356]: time="2024-02-09T19:06:27.207833906Z" level=info msg="Container to stop \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:27.207874 env[1356]: time="2024-02-09T19:06:27.207848606Z" level=info msg="Container to stop \"813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:27.216843 systemd[1]: cri-containerd-e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc.scope: Deactivated successfully. 
Feb 9 19:06:27.230062 env[1356]: time="2024-02-09T19:06:27.230000232Z" level=info msg="shim disconnected" id=3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e Feb 9 19:06:27.230062 env[1356]: time="2024-02-09T19:06:27.230065434Z" level=warning msg="cleaning up after shim disconnected" id=3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e namespace=k8s.io Feb 9 19:06:27.230358 env[1356]: time="2024-02-09T19:06:27.230077134Z" level=info msg="cleaning up dead shim" Feb 9 19:06:27.247098 env[1356]: time="2024-02-09T19:06:27.247045860Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4121 runtime=io.containerd.runc.v2\n" Feb 9 19:06:27.247457 env[1356]: time="2024-02-09T19:06:27.247414867Z" level=info msg="TearDown network for sandbox \"3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e\" successfully" Feb 9 19:06:27.247457 env[1356]: time="2024-02-09T19:06:27.247444168Z" level=info msg="StopPodSandbox for \"3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e\" returns successfully" Feb 9 19:06:27.253813 env[1356]: time="2024-02-09T19:06:27.253403583Z" level=info msg="shim disconnected" id=e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc Feb 9 19:06:27.253813 env[1356]: time="2024-02-09T19:06:27.253470584Z" level=warning msg="cleaning up after shim disconnected" id=e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc namespace=k8s.io Feb 9 19:06:27.253813 env[1356]: time="2024-02-09T19:06:27.253483084Z" level=info msg="cleaning up dead shim" Feb 9 19:06:27.267779 env[1356]: time="2024-02-09T19:06:27.267728958Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4141 runtime=io.containerd.runc.v2\n" Feb 9 19:06:27.268085 env[1356]: time="2024-02-09T19:06:27.268057165Z" level=info msg="TearDown network for sandbox \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\" successfully" Feb 9 19:06:27.268192 env[1356]: time="2024-02-09T19:06:27.268083665Z" level=info msg="StopPodSandbox for \"e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc\" returns successfully" Feb 9 19:06:27.366987 kubelet[2472]: I0209 19:06:27.366838 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b86996d-34d0-490b-a81c-cbce9843b45a-cilium-config-path\") pod \"4b86996d-34d0-490b-a81c-cbce9843b45a\" (UID: \"4b86996d-34d0-490b-a81c-cbce9843b45a\") " Feb 9 19:06:27.366987 kubelet[2472]: I0209 19:06:27.366905 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cilium-cgroup\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.366987 kubelet[2472]: I0209 19:06:27.366936 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-host-proc-sys-kernel\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.367702 kubelet[2472]: I0209 19:06:27.367669 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cilium-cgroup" 
(OuterVolumeSpecName: "cilium-cgroup") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:27.368511 kubelet[2472]: W0209 19:06:27.368379 2472 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/4b86996d-34d0-490b-a81c-cbce9843b45a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:06:27.369322 kubelet[2472]: I0209 19:06:27.369287 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-957zj\" (UniqueName: \"kubernetes.io/projected/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-kube-api-access-957zj\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.369436 kubelet[2472]: I0209 19:06:27.369369 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl9j4\" (UniqueName: \"kubernetes.io/projected/4b86996d-34d0-490b-a81c-cbce9843b45a-kube-api-access-nl9j4\") pod \"4b86996d-34d0-490b-a81c-cbce9843b45a\" (UID: \"4b86996d-34d0-490b-a81c-cbce9843b45a\") " Feb 9 19:06:27.369436 kubelet[2472]: I0209 19:06:27.369407 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-host-proc-sys-net\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.369575 kubelet[2472]: I0209 19:06:27.369442 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-hostproc\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.369575 kubelet[2472]: I0209 19:06:27.369486 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-clustermesh-secrets\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.369575 kubelet[2472]: I0209 19:06:27.369519 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-bpf-maps\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.369575 kubelet[2472]: I0209 19:06:27.369570 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-hubble-tls\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.369792 kubelet[2472]: I0209 19:06:27.369604 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-xtables-lock\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.369792 kubelet[2472]: I0209 19:06:27.369637 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-lib-modules\") pod 
\"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.369792 kubelet[2472]: I0209 19:06:27.369667 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cni-path\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.369792 kubelet[2472]: I0209 19:06:27.369701 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-etc-cni-netd\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.369792 kubelet[2472]: I0209 19:06:27.369741 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cilium-config-path\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.369792 kubelet[2472]: I0209 19:06:27.369772 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cilium-run\") pod \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\" (UID: \"f5119a16-45fd-41e9-abc9-5a69ccc9dcea\") " Feb 9 19:06:27.370103 kubelet[2472]: I0209 19:06:27.369827 2472 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cilium-cgroup\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.370103 kubelet[2472]: I0209 19:06:27.369859 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:27.370103 kubelet[2472]: I0209 19:06:27.369892 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:27.372958 kubelet[2472]: I0209 19:06:27.372921 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4b86996d-34d0-490b-a81c-cbce9843b45a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "4b86996d-34d0-490b-a81c-cbce9843b45a" (UID: "4b86996d-34d0-490b-a81c-cbce9843b45a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:06:27.374157 kubelet[2472]: I0209 19:06:27.374115 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-kube-api-access-957zj" (OuterVolumeSpecName: "kube-api-access-957zj") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "kube-api-access-957zj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:06:27.374270 kubelet[2472]: I0209 19:06:27.374186 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:27.374270 kubelet[2472]: I0209 19:06:27.374216 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:27.374270 kubelet[2472]: I0209 19:06:27.374244 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cni-path" (OuterVolumeSpecName: "cni-path") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:27.374455 kubelet[2472]: I0209 19:06:27.374267 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:27.374523 kubelet[2472]: W0209 19:06:27.374434 2472 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/f5119a16-45fd-41e9-abc9-5a69ccc9dcea/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:06:27.377033 kubelet[2472]: I0209 19:06:27.376998 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:06:27.377124 kubelet[2472]: I0209 19:06:27.377051 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-hostproc" (OuterVolumeSpecName: "hostproc") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:27.377318 kubelet[2472]: I0209 19:06:27.377286 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "hubble-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:06:27.380343 kubelet[2472]: I0209 19:06:27.380318 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:06:27.380428 kubelet[2472]: I0209 19:06:27.380318 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4b86996d-34d0-490b-a81c-cbce9843b45a-kube-api-access-nl9j4" (OuterVolumeSpecName: "kube-api-access-nl9j4") pod "4b86996d-34d0-490b-a81c-cbce9843b45a" (UID: "4b86996d-34d0-490b-a81c-cbce9843b45a"). InnerVolumeSpecName "kube-api-access-nl9j4". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:06:27.380428 kubelet[2472]: I0209 19:06:27.380343 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:27.380428 kubelet[2472]: I0209 19:06:27.380361 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "f5119a16-45fd-41e9-abc9-5a69ccc9dcea" (UID: "f5119a16-45fd-41e9-abc9-5a69ccc9dcea"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:27.471072 kubelet[2472]: I0209 19:06:27.471014 2472 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471072 kubelet[2472]: I0209 19:06:27.471068 2472 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/4b86996d-34d0-490b-a81c-cbce9843b45a-cilium-config-path\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471072 kubelet[2472]: I0209 19:06:27.471085 2472 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-957zj\" (UniqueName: \"kubernetes.io/projected/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-kube-api-access-957zj\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471387 kubelet[2472]: I0209 19:06:27.471102 2472 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-nl9j4\" (UniqueName: \"kubernetes.io/projected/4b86996d-34d0-490b-a81c-cbce9843b45a-kube-api-access-nl9j4\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471387 kubelet[2472]: I0209 19:06:27.471119 2472 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-host-proc-sys-net\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471387 kubelet[2472]: I0209 19:06:27.471138 2472 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-hostproc\") on node 
\"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471387 kubelet[2472]: I0209 19:06:27.471153 2472 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-clustermesh-secrets\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471387 kubelet[2472]: I0209 19:06:27.471169 2472 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-bpf-maps\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471387 kubelet[2472]: I0209 19:06:27.471186 2472 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-hubble-tls\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471387 kubelet[2472]: I0209 19:06:27.471201 2472 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-xtables-lock\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471387 kubelet[2472]: I0209 19:06:27.471216 2472 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-lib-modules\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471676 kubelet[2472]: I0209 19:06:27.471232 2472 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cni-path\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471676 kubelet[2472]: I0209 19:06:27.471249 2472 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-etc-cni-netd\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471676 kubelet[2472]: I0209 19:06:27.471265 2472 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cilium-config-path\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.471676 kubelet[2472]: I0209 19:06:27.471283 2472 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/f5119a16-45fd-41e9-abc9-5a69ccc9dcea-cilium-run\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:27.980616 kubelet[2472]: I0209 19:06:27.980587 2472 scope.go:115] "RemoveContainer" containerID="ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519" Feb 9 19:06:27.984217 env[1356]: time="2024-02-09T19:06:27.984171644Z" level=info msg="RemoveContainer for \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\"" Feb 9 19:06:27.991739 systemd[1]: Removed slice kubepods-besteffort-pod4b86996d_34d0_490b_a81c_cbce9843b45a.slice. Feb 9 19:06:27.994601 systemd[1]: Removed slice kubepods-burstable-podf5119a16_45fd_41e9_abc9_5a69ccc9dcea.slice. Feb 9 19:06:27.994740 systemd[1]: kubepods-burstable-podf5119a16_45fd_41e9_abc9_5a69ccc9dcea.slice: Consumed 7.321s CPU time. 
Feb 9 19:06:27.999876 env[1356]: time="2024-02-09T19:06:27.999815245Z" level=info msg="RemoveContainer for \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\" returns successfully" Feb 9 19:06:28.001075 kubelet[2472]: I0209 19:06:28.001049 2472 scope.go:115] "RemoveContainer" containerID="ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519" Feb 9 19:06:28.006177 env[1356]: time="2024-02-09T19:06:28.005961662Z" level=error msg="ContainerStatus for \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\": not found" Feb 9 19:06:28.006677 kubelet[2472]: E0209 19:06:28.006591 2472 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\": not found" containerID="ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519" Feb 9 19:06:28.006677 kubelet[2472]: I0209 19:06:28.006652 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519} err="failed to get container status \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\": rpc error: code = NotFound desc = an error occurred when try to find container \"ccd714f2a52b106b377492357ddd434805bbd19688d1c78de13b1f172401f519\": not found" Feb 9 19:06:28.006677 kubelet[2472]: I0209 19:06:28.006672 2472 scope.go:115] "RemoveContainer" containerID="08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92" Feb 9 19:06:28.010212 env[1356]: time="2024-02-09T19:06:28.010178043Z" level=info msg="RemoveContainer for \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\"" Feb 9 19:06:28.016467 env[1356]: time="2024-02-09T19:06:28.016430962Z" level=info msg="RemoveContainer for \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\" returns successfully" Feb 9 19:06:28.016644 kubelet[2472]: I0209 19:06:28.016618 2472 scope.go:115] "RemoveContainer" containerID="e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace" Feb 9 19:06:28.018124 env[1356]: time="2024-02-09T19:06:28.018089893Z" level=info msg="RemoveContainer for \"e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace\"" Feb 9 19:06:28.024985 env[1356]: time="2024-02-09T19:06:28.024950324Z" level=info msg="RemoveContainer for \"e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace\" returns successfully" Feb 9 19:06:28.025133 kubelet[2472]: I0209 19:06:28.025112 2472 scope.go:115] "RemoveContainer" containerID="813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491" Feb 9 19:06:28.026038 env[1356]: time="2024-02-09T19:06:28.026010144Z" level=info msg="RemoveContainer for \"813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491\"" Feb 9 19:06:28.033369 env[1356]: time="2024-02-09T19:06:28.033332084Z" level=info msg="RemoveContainer for \"813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491\" returns successfully" Feb 9 19:06:28.033503 kubelet[2472]: I0209 19:06:28.033480 2472 scope.go:115] "RemoveContainer" containerID="cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9" Feb 9 19:06:28.034515 env[1356]: time="2024-02-09T19:06:28.034487706Z" level=info msg="RemoveContainer for 
\"cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9\"" Feb 9 19:06:28.041383 env[1356]: time="2024-02-09T19:06:28.041354036Z" level=info msg="RemoveContainer for \"cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9\" returns successfully" Feb 9 19:06:28.041722 kubelet[2472]: I0209 19:06:28.041519 2472 scope.go:115] "RemoveContainer" containerID="bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41" Feb 9 19:06:28.042857 env[1356]: time="2024-02-09T19:06:28.042788264Z" level=info msg="RemoveContainer for \"bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41\"" Feb 9 19:06:28.049455 env[1356]: time="2024-02-09T19:06:28.049418990Z" level=info msg="RemoveContainer for \"bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41\" returns successfully" Feb 9 19:06:28.049608 kubelet[2472]: I0209 19:06:28.049590 2472 scope.go:115] "RemoveContainer" containerID="08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92" Feb 9 19:06:28.049833 env[1356]: time="2024-02-09T19:06:28.049780697Z" level=error msg="ContainerStatus for \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\": not found" Feb 9 19:06:28.049954 kubelet[2472]: E0209 19:06:28.049935 2472 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\": not found" containerID="08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92" Feb 9 19:06:28.050036 kubelet[2472]: I0209 19:06:28.049971 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92} err="failed to get container status \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\": rpc error: code = NotFound desc = an error occurred when try to find container \"08199ace97e01838772ac42287da3301726678c49e73a1ab127be071f40fab92\": not found" Feb 9 19:06:28.050036 kubelet[2472]: I0209 19:06:28.049988 2472 scope.go:115] "RemoveContainer" containerID="e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace" Feb 9 19:06:28.050208 env[1356]: time="2024-02-09T19:06:28.050157104Z" level=error msg="ContainerStatus for \"e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace\": not found" Feb 9 19:06:28.050319 kubelet[2472]: E0209 19:06:28.050301 2472 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace\": not found" containerID="e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace" Feb 9 19:06:28.050390 kubelet[2472]: I0209 19:06:28.050336 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace} err="failed to get container status \"e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace\": rpc error: code = NotFound desc = an error occurred when try to find container 
\"e04537725f85fdd8ee63fba36517d0436ae4d79d04ed226c24b9dde5d881cace\": not found" Feb 9 19:06:28.050390 kubelet[2472]: I0209 19:06:28.050349 2472 scope.go:115] "RemoveContainer" containerID="813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491" Feb 9 19:06:28.050591 env[1356]: time="2024-02-09T19:06:28.050524111Z" level=error msg="ContainerStatus for \"813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491\": not found" Feb 9 19:06:28.050705 kubelet[2472]: E0209 19:06:28.050688 2472 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491\": not found" containerID="813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491" Feb 9 19:06:28.050775 kubelet[2472]: I0209 19:06:28.050719 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491} err="failed to get container status \"813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491\": rpc error: code = NotFound desc = an error occurred when try to find container \"813e89859ddf58e65c31e249128caf76a62b184be07cd37010944e499079b491\": not found" Feb 9 19:06:28.050775 kubelet[2472]: I0209 19:06:28.050734 2472 scope.go:115] "RemoveContainer" containerID="cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9" Feb 9 19:06:28.050948 env[1356]: time="2024-02-09T19:06:28.050903518Z" level=error msg="ContainerStatus for \"cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9\": not found" Feb 9 19:06:28.051060 kubelet[2472]: E0209 19:06:28.051042 2472 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9\": not found" containerID="cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9" Feb 9 19:06:28.051130 kubelet[2472]: I0209 19:06:28.051075 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9} err="failed to get container status \"cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9\": rpc error: code = NotFound desc = an error occurred when try to find container \"cdd5d729d82853a84d6a51c4ba3d210e439a7d52a6331bff5f6bf370097103b9\": not found" Feb 9 19:06:28.051130 kubelet[2472]: I0209 19:06:28.051088 2472 scope.go:115] "RemoveContainer" containerID="bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41" Feb 9 19:06:28.051301 env[1356]: time="2024-02-09T19:06:28.051258125Z" level=error msg="ContainerStatus for \"bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41\": not found" Feb 9 19:06:28.051414 kubelet[2472]: E0209 19:06:28.051395 2472 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = an error occurred when try to find container \"bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41\": not found" containerID="bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41" Feb 9 19:06:28.051482 kubelet[2472]: I0209 19:06:28.051428 2472 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41} err="failed to get container status \"bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41\": rpc error: code = NotFound desc = an error occurred when try to find container \"bb4342274802e03e6e8f97b120b090de34086359157948e58334c87c5a40db41\": not found" Feb 9 19:06:28.057821 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e-rootfs.mount: Deactivated successfully. Feb 9 19:06:28.057952 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc-rootfs.mount: Deactivated successfully. Feb 9 19:06:28.058032 systemd[1]: var-lib-kubelet-pods-4b86996d\x2d34d0\x2d490b\x2da81c\x2dcbce9843b45a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnl9j4.mount: Deactivated successfully. Feb 9 19:06:28.058113 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-e6e51e5a07a4178b70546177c972e6a1ddd3775f22d5f15dff5db74622ff87fc-shm.mount: Deactivated successfully. Feb 9 19:06:28.058206 systemd[1]: var-lib-kubelet-pods-f5119a16\x2d45fd\x2d41e9\x2dabc9\x2d5a69ccc9dcea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d957zj.mount: Deactivated successfully. Feb 9 19:06:28.058295 systemd[1]: var-lib-kubelet-pods-f5119a16\x2d45fd\x2d41e9\x2dabc9\x2d5a69ccc9dcea-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:06:28.058383 systemd[1]: var-lib-kubelet-pods-f5119a16\x2d45fd\x2d41e9\x2dabc9\x2d5a69ccc9dcea-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:06:28.601918 kubelet[2472]: I0209 19:06:28.601877 2472 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=4b86996d-34d0-490b-a81c-cbce9843b45a path="/var/lib/kubelet/pods/4b86996d-34d0-490b-a81c-cbce9843b45a/volumes" Feb 9 19:06:28.602431 kubelet[2472]: I0209 19:06:28.602404 2472 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=f5119a16-45fd-41e9-abc9-5a69ccc9dcea path="/var/lib/kubelet/pods/f5119a16-45fd-41e9-abc9-5a69ccc9dcea/volumes" Feb 9 19:06:29.088355 sshd[3990]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:29.091285 systemd[1]: sshd@20-10.200.8.4:22-10.200.12.6:54164.service: Deactivated successfully. Feb 9 19:06:29.092485 systemd[1]: session-23.scope: Deactivated successfully. Feb 9 19:06:29.093009 systemd-logind[1334]: Session 23 logged out. Waiting for processes to exit. Feb 9 19:06:29.093891 systemd-logind[1334]: Removed session 23. Feb 9 19:06:29.193989 systemd[1]: Started sshd@21-10.200.8.4:22-10.200.12.6:46496.service. 
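
The mount units cleaned up above (var-lib-kubelet-pods-...-kube\x2dapi\x2daccess\x2d957zj.mount and friends) use systemd's unit-name escaping: '/' in the path becomes '-', a literal '-' or '~' is encoded as \x2d or \x7e, and the leading '/' is dropped. A hypothetical helper, handling only the escape sequences that actually appear in this log, recovers the kubelet volume path from such a unit name:

    import re

    def unescape_mount_unit(unit: str) -> str:
        """Turn a systemd mount unit name from this log back into a filesystem path."""
        name = unit.removesuffix(".mount")
        # systemd mapped '/' to '-'; undo that first, since the \xNN escape
        # sequences themselves contain no '-'.
        path = "/" + name.replace("-", "/")
        # Decode \xNN escapes such as \x2d ('-') and \x7e ('~').
        return re.sub(r"\\x([0-9a-fA-F]{2})", lambda m: chr(int(m.group(1), 16)), path)

    # Example taken from the hubble-tls unit above:
    print(unescape_mount_unit(
        r"var-lib-kubelet-pods-f5119a16\x2d45fd\x2d41e9\x2dabc9\x2d5a69ccc9dcea-"
        r"volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount"
    ))
    # -> /var/lib/kubelet/pods/f5119a16-45fd-41e9-abc9-5a69ccc9dcea/volumes/kubernetes.io~projected/hubble-tls
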
Feb 9 19:06:29.726925 kubelet[2472]: E0209 19:06:29.726872 2472 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:06:29.842554 sshd[4160]: Accepted publickey for core from 10.200.12.6 port 46496 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:29.844002 sshd[4160]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:29.848927 systemd[1]: Started session-24.scope. Feb 9 19:06:29.849370 systemd-logind[1334]: New session 24 of user core. Feb 9 19:06:30.821384 kubelet[2472]: I0209 19:06:30.821338 2472 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:06:30.821871 kubelet[2472]: E0209 19:06:30.821414 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5119a16-45fd-41e9-abc9-5a69ccc9dcea" containerName="mount-cgroup" Feb 9 19:06:30.821871 kubelet[2472]: E0209 19:06:30.821427 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5119a16-45fd-41e9-abc9-5a69ccc9dcea" containerName="mount-bpf-fs" Feb 9 19:06:30.821871 kubelet[2472]: E0209 19:06:30.821435 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5119a16-45fd-41e9-abc9-5a69ccc9dcea" containerName="clean-cilium-state" Feb 9 19:06:30.821871 kubelet[2472]: E0209 19:06:30.821442 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5119a16-45fd-41e9-abc9-5a69ccc9dcea" containerName="cilium-agent" Feb 9 19:06:30.821871 kubelet[2472]: E0209 19:06:30.821452 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="f5119a16-45fd-41e9-abc9-5a69ccc9dcea" containerName="apply-sysctl-overwrites" Feb 9 19:06:30.821871 kubelet[2472]: E0209 19:06:30.821461 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="4b86996d-34d0-490b-a81c-cbce9843b45a" containerName="cilium-operator" Feb 9 19:06:30.821871 kubelet[2472]: I0209 19:06:30.821488 2472 memory_manager.go:346] "RemoveStaleState removing state" podUID="f5119a16-45fd-41e9-abc9-5a69ccc9dcea" containerName="cilium-agent" Feb 9 19:06:30.821871 kubelet[2472]: I0209 19:06:30.821497 2472 memory_manager.go:346] "RemoveStaleState removing state" podUID="4b86996d-34d0-490b-a81c-cbce9843b45a" containerName="cilium-operator" Feb 9 19:06:30.827890 systemd[1]: Created slice kubepods-burstable-pod05fcedcc_c0eb_451b_a7e3_487f77d4a50b.slice. Feb 9 19:06:30.926965 sshd[4160]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:30.929892 systemd[1]: sshd@21-10.200.8.4:22-10.200.12.6:46496.service: Deactivated successfully. Feb 9 19:06:30.930888 systemd[1]: session-24.scope: Deactivated successfully. Feb 9 19:06:30.932370 systemd-logind[1334]: Session 24 logged out. Waiting for processes to exit. Feb 9 19:06:30.933732 systemd-logind[1334]: Removed session 24. 
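
The slice names systemd reports here map directly onto pod UIDs: "Created slice kubepods-burstable-pod05fcedcc_c0eb_451b_a7e3_487f77d4a50b.slice" is the new pod's UID 05fcedcc-c0eb-451b-a7e3-487f77d4a50b with dashes replaced by underscores under a kubepods-<qos> prefix, just as the removed pods earlier got kubepods-besteffort-pod4b86996d_... and kubepods-burstable-podf5119a16_... . A one-line sketch of that mapping (assuming the systemd cgroup driver, which is what these slice units imply) helps correlate kubelet pod UIDs with the systemd units in this log:

    def pod_slice_name(pod_uid: str, qos_class: str = "burstable") -> str:
        """Expected systemd slice for a pod under the systemd cgroup driver.
        Guaranteed pods omit the QoS segment; only burstable/besteffort pods appear in this log."""
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    # Matches the slice created for cilium-kg6wp above:
    expected = "kubepods-burstable-pod05fcedcc_c0eb_451b_a7e3_487f77d4a50b.slice"
    assert pod_slice_name("05fcedcc-c0eb-451b-a7e3-487f77d4a50b") == expected
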
Feb 9 19:06:30.990088 kubelet[2472]: I0209 19:06:30.990047 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-config-path\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990286 kubelet[2472]: I0209 19:06:30.990107 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-lib-modules\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990286 kubelet[2472]: I0209 19:06:30.990134 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-host-proc-sys-kernel\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990286 kubelet[2472]: I0209 19:06:30.990160 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-clustermesh-secrets\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990286 kubelet[2472]: I0209 19:06:30.990187 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cx8h8\" (UniqueName: \"kubernetes.io/projected/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-kube-api-access-cx8h8\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990286 kubelet[2472]: I0209 19:06:30.990212 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-bpf-maps\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990286 kubelet[2472]: I0209 19:06:30.990235 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-hostproc\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990565 kubelet[2472]: I0209 19:06:30.990260 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-xtables-lock\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990565 kubelet[2472]: I0209 19:06:30.990283 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-ipsec-secrets\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990565 kubelet[2472]: I0209 19:06:30.990309 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" 
(UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cni-path\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990565 kubelet[2472]: I0209 19:06:30.990337 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-host-proc-sys-net\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990565 kubelet[2472]: I0209 19:06:30.990364 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-hubble-tls\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990565 kubelet[2472]: I0209 19:06:30.990398 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-cgroup\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990781 kubelet[2472]: I0209 19:06:30.990428 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-run\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:30.990781 kubelet[2472]: I0209 19:06:30.990459 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-etc-cni-netd\") pod \"cilium-kg6wp\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " pod="kube-system/cilium-kg6wp" Feb 9 19:06:31.033738 systemd[1]: Started sshd@22-10.200.8.4:22-10.200.12.6:46498.service. Feb 9 19:06:31.132060 env[1356]: time="2024-02-09T19:06:31.131958539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kg6wp,Uid:05fcedcc-c0eb-451b-a7e3-487f77d4a50b,Namespace:kube-system,Attempt:0,}" Feb 9 19:06:31.167329 env[1356]: time="2024-02-09T19:06:31.167235490Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:06:31.167329 env[1356]: time="2024-02-09T19:06:31.167270791Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:06:31.167329 env[1356]: time="2024-02-09T19:06:31.167285491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:06:31.167623 env[1356]: time="2024-02-09T19:06:31.167435894Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b pid=4183 runtime=io.containerd.runc.v2 Feb 9 19:06:31.179998 systemd[1]: Started cri-containerd-8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b.scope. 
Feb 9 19:06:31.205542 env[1356]: time="2024-02-09T19:06:31.205497297Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-kg6wp,Uid:05fcedcc-c0eb-451b-a7e3-487f77d4a50b,Namespace:kube-system,Attempt:0,} returns sandbox id \"8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b\"" Feb 9 19:06:31.209939 env[1356]: time="2024-02-09T19:06:31.209901878Z" level=info msg="CreateContainer within sandbox \"8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:06:31.239741 env[1356]: time="2024-02-09T19:06:31.239602727Z" level=info msg="CreateContainer within sandbox \"8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83\"" Feb 9 19:06:31.242439 env[1356]: time="2024-02-09T19:06:31.242409479Z" level=info msg="StartContainer for \"19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83\"" Feb 9 19:06:31.263979 systemd[1]: Started cri-containerd-19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83.scope. Feb 9 19:06:31.275119 systemd[1]: cri-containerd-19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83.scope: Deactivated successfully. Feb 9 19:06:31.341216 env[1356]: time="2024-02-09T19:06:31.341159402Z" level=info msg="shim disconnected" id=19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83 Feb 9 19:06:31.341216 env[1356]: time="2024-02-09T19:06:31.341213003Z" level=warning msg="cleaning up after shim disconnected" id=19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83 namespace=k8s.io Feb 9 19:06:31.341216 env[1356]: time="2024-02-09T19:06:31.341223904Z" level=info msg="cleaning up dead shim" Feb 9 19:06:31.349613 env[1356]: time="2024-02-09T19:06:31.349526657Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4241 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:06:31Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:06:31.349928 env[1356]: time="2024-02-09T19:06:31.349824562Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Feb 9 19:06:31.354817 env[1356]: time="2024-02-09T19:06:31.354761054Z" level=error msg="Failed to pipe stdout of container \"19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83\"" error="reading from a closed fifo" Feb 9 19:06:31.357300 env[1356]: time="2024-02-09T19:06:31.357255900Z" level=error msg="Failed to pipe stderr of container \"19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83\"" error="reading from a closed fifo" Feb 9 19:06:31.361158 env[1356]: time="2024-02-09T19:06:31.361103071Z" level=error msg="StartContainer for \"19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:06:31.361392 kubelet[2472]: E0209 19:06:31.361370 2472 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown 
desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83" Feb 9 19:06:31.363038 kubelet[2472]: E0209 19:06:31.361744 2472 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:06:31.363038 kubelet[2472]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:06:31.363038 kubelet[2472]: rm /hostbin/cilium-mount Feb 9 19:06:31.363226 kubelet[2472]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-cx8h8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-kg6wp_kube-system(05fcedcc-c0eb-451b-a7e3-487f77d4a50b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:06:31.363226 kubelet[2472]: E0209 19:06:31.361824 2472 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kg6wp" podUID=05fcedcc-c0eb-451b-a7e3-487f77d4a50b Feb 9 19:06:31.652415 sshd[4170]: Accepted publickey for core from 10.200.12.6 port 46498 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:31.653963 sshd[4170]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:31.658609 systemd-logind[1334]: New session 25 of user core. Feb 9 19:06:31.659349 systemd[1]: Started session-25.scope. 
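
The StartContainer failure above ("write /proc/self/attr/keycreate: invalid argument") comes from the container init step trying to apply the SELinux key-creation label requested by the container spec (the dumped spec shows SELinuxOptions with Type:spc_t, Level:s0) on a host whose kernel refuses that write, which is typically what happens when no SELinux policy is loaded that accepts the label. Below is a hypothetical probe, not part of the log, that repeats just the failing write so the host can be checked directly; the full context string is an assumption built from the Type/Level in the spec, since User and Role are left empty there:

    import errno

    # Assumed full context; only Type=spc_t and Level=s0 come from the container spec above.
    LABEL = "system_u:system_r:spc_t:s0"

    def probe_keycreate(label: str = LABEL) -> str:
        """Attempt the same write the runtime performs and report how the kernel answers."""
        try:
            with open("/proc/self/attr/keycreate", "w") as f:
                f.write(label)
            return "label accepted (SELinux is active and permits it)"
        except OSError as e:
            if e.errno == errno.EINVAL:
                return "EINVAL - the same failure as the StartContainer errors in this log"
            return f"failed: {e}"

    if __name__ == "__main__":
        print(probe_keycreate())
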
Feb 9 19:06:32.013395 env[1356]: time="2024-02-09T19:06:32.012383797Z" level=info msg="CreateContainer within sandbox \"8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:1,}" Feb 9 19:06:32.046900 env[1356]: time="2024-02-09T19:06:32.046845627Z" level=info msg="CreateContainer within sandbox \"8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b\" for &ContainerMetadata{Name:mount-cgroup,Attempt:1,} returns container id \"4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b\"" Feb 9 19:06:32.048450 env[1356]: time="2024-02-09T19:06:32.047619141Z" level=info msg="StartContainer for \"4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b\"" Feb 9 19:06:32.076749 systemd[1]: Started cri-containerd-4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b.scope. Feb 9 19:06:32.093434 systemd[1]: cri-containerd-4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b.scope: Deactivated successfully. Feb 9 19:06:32.106023 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b-rootfs.mount: Deactivated successfully. Feb 9 19:06:32.123156 env[1356]: time="2024-02-09T19:06:32.123091121Z" level=info msg="shim disconnected" id=4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b Feb 9 19:06:32.123156 env[1356]: time="2024-02-09T19:06:32.123158222Z" level=warning msg="cleaning up after shim disconnected" id=4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b namespace=k8s.io Feb 9 19:06:32.123478 env[1356]: time="2024-02-09T19:06:32.123169523Z" level=info msg="cleaning up dead shim" Feb 9 19:06:32.139084 env[1356]: time="2024-02-09T19:06:32.139020812Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4287 runtime=io.containerd.runc.v2\ntime=\"2024-02-09T19:06:32Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" Feb 9 19:06:32.139495 env[1356]: time="2024-02-09T19:06:32.139335518Z" level=error msg="copy shim log" error="read /proc/self/fd/36: file already closed" Feb 9 19:06:32.141827 env[1356]: time="2024-02-09T19:06:32.141770563Z" level=error msg="Failed to pipe stdout of container \"4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b\"" error="reading from a closed fifo" Feb 9 19:06:32.141968 env[1356]: time="2024-02-09T19:06:32.141848564Z" level=error msg="Failed to pipe stderr of container \"4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b\"" error="reading from a closed fifo" Feb 9 19:06:32.146099 env[1356]: time="2024-02-09T19:06:32.146052841Z" level=error msg="StartContainer for \"4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b\" failed" error="failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" Feb 9 19:06:32.146531 kubelet[2472]: E0209 19:06:32.146349 2472 remote_runtime.go:326] "StartContainer from runtime service failed" err="rpc error: code = Unknown desc = failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: 
unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown" containerID="4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b" Feb 9 19:06:32.146531 kubelet[2472]: E0209 19:06:32.146462 2472 kuberuntime_manager.go:1212] init container &Container{Name:mount-cgroup,Image:quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Command:[sh -ec cp /usr/bin/cilium-mount /hostbin/cilium-mount; Feb 9 19:06:32.146531 kubelet[2472]: nsenter --cgroup=/hostproc/1/ns/cgroup --mount=/hostproc/1/ns/mnt "${BIN_PATH}/cilium-mount" $CGROUP_ROOT; Feb 9 19:06:32.146531 kubelet[2472]: rm /hostbin/cilium-mount Feb 9 19:06:32.146531 kubelet[2472]: ],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:CGROUP_ROOT,Value:/run/cilium/cgroupv2,ValueFrom:nil,},EnvVar{Name:BIN_PATH,Value:/opt/cni/bin,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:hostproc,ReadOnly:false,MountPath:/hostproc,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:cni-path,ReadOnly:false,MountPath:/hostbin,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-cx8h8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[SYS_ADMIN SYS_CHROOT SYS_PTRACE],Drop:[ALL],},Privileged:nil,SELinuxOptions:&SELinuxOptions{User:,Role:,Type:spc_t,Level:s0,},RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},} start failed in pod cilium-kg6wp_kube-system(05fcedcc-c0eb-451b-a7e3-487f77d4a50b): RunContainerError: failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown Feb 9 19:06:32.146531 kubelet[2472]: E0209 19:06:32.146506 2472 pod_workers.go:1294] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"mount-cgroup\" with RunContainerError: \"failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: write /proc/self/attr/keycreate: invalid argument: unknown\"" pod="kube-system/cilium-kg6wp" podUID=05fcedcc-c0eb-451b-a7e3-487f77d4a50b Feb 9 19:06:32.237468 sshd[4170]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:32.240800 systemd[1]: sshd@22-10.200.8.4:22-10.200.12.6:46498.service: Deactivated successfully. Feb 9 19:06:32.242303 systemd[1]: session-25.scope: Deactivated successfully. Feb 9 19:06:32.242883 systemd-logind[1334]: Session 25 logged out. Waiting for processes to exit. Feb 9 19:06:32.243958 systemd-logind[1334]: Removed session 25. Feb 9 19:06:32.342729 systemd[1]: Started sshd@23-10.200.8.4:22-10.200.12.6:46508.service. 
Feb 9 19:06:32.963328 sshd[4301]: Accepted publickey for core from 10.200.12.6 port 46508 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg Feb 9 19:06:32.964739 sshd[4301]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:06:32.969664 systemd-logind[1334]: New session 26 of user core. Feb 9 19:06:32.970135 systemd[1]: Started session-26.scope. Feb 9 19:06:33.003514 kubelet[2472]: I0209 19:06:33.003339 2472 scope.go:115] "RemoveContainer" containerID="19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83" Feb 9 19:06:33.004919 env[1356]: time="2024-02-09T19:06:33.003777522Z" level=info msg="StopPodSandbox for \"8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b\"" Feb 9 19:06:33.004919 env[1356]: time="2024-02-09T19:06:33.003837224Z" level=info msg="Container to stop \"19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:33.004919 env[1356]: time="2024-02-09T19:06:33.003855224Z" level=info msg="Container to stop \"4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:06:33.008307 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b-shm.mount: Deactivated successfully. Feb 9 19:06:33.011591 env[1356]: time="2024-02-09T19:06:33.011540963Z" level=info msg="RemoveContainer for \"19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83\"" Feb 9 19:06:33.018797 systemd[1]: cri-containerd-8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b.scope: Deactivated successfully. Feb 9 19:06:33.023934 env[1356]: time="2024-02-09T19:06:33.023883386Z" level=info msg="RemoveContainer for \"19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83\" returns successfully" Feb 9 19:06:33.055352 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b-rootfs.mount: Deactivated successfully. 
Feb 9 19:06:33.073361 env[1356]: time="2024-02-09T19:06:33.073304781Z" level=info msg="shim disconnected" id=8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b Feb 9 19:06:33.073690 env[1356]: time="2024-02-09T19:06:33.073653687Z" level=warning msg="cleaning up after shim disconnected" id=8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b namespace=k8s.io Feb 9 19:06:33.073776 env[1356]: time="2024-02-09T19:06:33.073690188Z" level=info msg="cleaning up dead shim" Feb 9 19:06:33.083472 env[1356]: time="2024-02-09T19:06:33.083413764Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4324 runtime=io.containerd.runc.v2\n" Feb 9 19:06:33.083830 env[1356]: time="2024-02-09T19:06:33.083792771Z" level=info msg="TearDown network for sandbox \"8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b\" successfully" Feb 9 19:06:33.083934 env[1356]: time="2024-02-09T19:06:33.083830271Z" level=info msg="StopPodSandbox for \"8223fd69e17f954b6ef0cd5e0a1bbac727132879b8ab6476f642aef80052fd3b\" returns successfully" Feb 9 19:06:33.202229 kubelet[2472]: I0209 19:06:33.202175 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-bpf-maps\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202246 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-xtables-lock\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202290 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-ipsec-secrets\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202335 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-hostproc\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202368 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-etc-cni-netd\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202424 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-clustermesh-secrets\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202475 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-run\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " 
Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202511 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-hubble-tls\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202579 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-cgroup\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202621 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-config-path\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202670 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-host-proc-sys-kernel\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202707 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cx8h8\" (UniqueName: \"kubernetes.io/projected/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-kube-api-access-cx8h8\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202751 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-host-proc-sys-net\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202784 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-lib-modules\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.202858 kubelet[2472]: I0209 19:06:33.202831 2472 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cni-path\") pod \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\" (UID: \"05fcedcc-c0eb-451b-a7e3-487f77d4a50b\") " Feb 9 19:06:33.203664 kubelet[2472]: I0209 19:06:33.202939 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cni-path" (OuterVolumeSpecName: "cni-path") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:33.203664 kubelet[2472]: I0209 19:06:33.203002 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:33.203664 kubelet[2472]: I0209 19:06:33.203030 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:33.206573 kubelet[2472]: I0209 19:06:33.203892 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:33.206573 kubelet[2472]: I0209 19:06:33.203946 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-hostproc" (OuterVolumeSpecName: "hostproc") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:33.206573 kubelet[2472]: I0209 19:06:33.203975 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:33.206573 kubelet[2472]: W0209 19:06:33.204129 2472 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/05fcedcc-c0eb-451b-a7e3-487f77d4a50b/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:06:33.206900 kubelet[2472]: I0209 19:06:33.206710 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:33.207125 kubelet[2472]: I0209 19:06:33.207086 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:33.207435 kubelet[2472]: I0209 19:06:33.207402 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:33.207541 kubelet[2472]: I0209 19:06:33.207456 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:06:33.208305 kubelet[2472]: I0209 19:06:33.208275 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:06:33.213056 systemd[1]: var-lib-kubelet-pods-05fcedcc\x2dc0eb\x2d451b\x2da7e3\x2d487f77d4a50b-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:06:33.217953 kubelet[2472]: I0209 19:06:33.216702 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:06:33.217070 systemd[1]: var-lib-kubelet-pods-05fcedcc\x2dc0eb\x2d451b\x2da7e3\x2d487f77d4a50b-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully. Feb 9 19:06:33.218539 kubelet[2472]: I0209 19:06:33.218511 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:06:33.219417 systemd[1]: var-lib-kubelet-pods-05fcedcc\x2dc0eb\x2d451b\x2da7e3\x2d487f77d4a50b-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. Feb 9 19:06:33.221947 kubelet[2472]: I0209 19:06:33.221919 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-kube-api-access-cx8h8" (OuterVolumeSpecName: "kube-api-access-cx8h8") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "kube-api-access-cx8h8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:06:33.222681 kubelet[2472]: I0209 19:06:33.222654 2472 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "05fcedcc-c0eb-451b-a7e3-487f77d4a50b" (UID: "05fcedcc-c0eb-451b-a7e3-487f77d4a50b"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:06:33.224934 systemd[1]: var-lib-kubelet-pods-05fcedcc\x2dc0eb\x2d451b\x2da7e3\x2d487f77d4a50b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcx8h8.mount: Deactivated successfully. Feb 9 19:06:33.303335 kubelet[2472]: I0209 19:06:33.303297 2472 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-config-path\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.303602 kubelet[2472]: I0209 19:06:33.303587 2472 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-hubble-tls\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.303714 kubelet[2472]: I0209 19:06:33.303704 2472 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-cgroup\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.303837 kubelet[2472]: I0209 19:06:33.303826 2472 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-host-proc-sys-kernel\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.303946 kubelet[2472]: I0209 19:06:33.303937 2472 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cx8h8\" (UniqueName: \"kubernetes.io/projected/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-kube-api-access-cx8h8\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.304031 kubelet[2472]: I0209 19:06:33.304022 2472 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-host-proc-sys-net\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.304096 kubelet[2472]: I0209 19:06:33.304089 2472 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-lib-modules\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.304157 kubelet[2472]: I0209 19:06:33.304150 2472 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cni-path\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.304220 kubelet[2472]: I0209 19:06:33.304212 2472 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-bpf-maps\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.304366 kubelet[2472]: I0209 19:06:33.304357 2472 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-xtables-lock\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" 
Feb 9 19:06:33.304462 kubelet[2472]: I0209 19:06:33.304451 2472 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-ipsec-secrets\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.304561 kubelet[2472]: I0209 19:06:33.304538 2472 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-hostproc\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.304654 kubelet[2472]: I0209 19:06:33.304644 2472 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-etc-cni-netd\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.304755 kubelet[2472]: I0209 19:06:33.304746 2472 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-clustermesh-secrets\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:33.304830 kubelet[2472]: I0209 19:06:33.304822 2472 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/05fcedcc-c0eb-451b-a7e3-487f77d4a50b-cilium-run\") on node \"ci-3510.3.2-a-54659eee1f\" DevicePath \"\"" Feb 9 19:06:34.006147 kubelet[2472]: I0209 19:06:34.006117 2472 scope.go:115] "RemoveContainer" containerID="4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b" Feb 9 19:06:34.011068 systemd[1]: Removed slice kubepods-burstable-pod05fcedcc_c0eb_451b_a7e3_487f77d4a50b.slice. Feb 9 19:06:34.013180 env[1356]: time="2024-02-09T19:06:34.012612081Z" level=info msg="RemoveContainer for \"4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b\"" Feb 9 19:06:34.020204 env[1356]: time="2024-02-09T19:06:34.020153316Z" level=info msg="RemoveContainer for \"4d8b1fabb4950b97b10929bedd0ec33f2213d81377b32427c333b3073792824b\" returns successfully" Feb 9 19:06:34.068507 kubelet[2472]: I0209 19:06:34.068466 2472 topology_manager.go:212] "Topology Admit Handler" Feb 9 19:06:34.068723 kubelet[2472]: E0209 19:06:34.068541 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="05fcedcc-c0eb-451b-a7e3-487f77d4a50b" containerName="mount-cgroup" Feb 9 19:06:34.068723 kubelet[2472]: E0209 19:06:34.068564 2472 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="05fcedcc-c0eb-451b-a7e3-487f77d4a50b" containerName="mount-cgroup" Feb 9 19:06:34.068723 kubelet[2472]: I0209 19:06:34.068590 2472 memory_manager.go:346] "RemoveStaleState removing state" podUID="05fcedcc-c0eb-451b-a7e3-487f77d4a50b" containerName="mount-cgroup" Feb 9 19:06:34.068723 kubelet[2472]: I0209 19:06:34.068601 2472 memory_manager.go:346] "RemoveStaleState removing state" podUID="05fcedcc-c0eb-451b-a7e3-487f77d4a50b" containerName="mount-cgroup" Feb 9 19:06:34.075253 systemd[1]: Created slice kubepods-burstable-podb72ad964_e936_44d3_9f2e_78705442a532.slice. 
Feb 9 19:06:34.114347 kubelet[2472]: I0209 19:06:34.114313 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b72ad964-e936-44d3-9f2e-78705442a532-lib-modules\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.114617 kubelet[2472]: I0209 19:06:34.114603 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/b72ad964-e936-44d3-9f2e-78705442a532-cilium-cgroup\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.114723 kubelet[2472]: I0209 19:06:34.114715 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/b72ad964-e936-44d3-9f2e-78705442a532-cni-path\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.114792 kubelet[2472]: I0209 19:06:34.114786 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/b72ad964-e936-44d3-9f2e-78705442a532-hubble-tls\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.114860 kubelet[2472]: I0209 19:06:34.114854 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnvzv\" (UniqueName: \"kubernetes.io/projected/b72ad964-e936-44d3-9f2e-78705442a532-kube-api-access-mnvzv\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.114928 kubelet[2472]: I0209 19:06:34.114911 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/b72ad964-e936-44d3-9f2e-78705442a532-hostproc\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.114982 kubelet[2472]: I0209 19:06:34.114977 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/b72ad964-e936-44d3-9f2e-78705442a532-etc-cni-netd\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.115098 kubelet[2472]: I0209 19:06:34.115092 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/b72ad964-e936-44d3-9f2e-78705442a532-clustermesh-secrets\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.115161 kubelet[2472]: I0209 19:06:34.115156 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/b72ad964-e936-44d3-9f2e-78705442a532-host-proc-sys-net\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.115216 kubelet[2472]: I0209 19:06:34.115210 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/b72ad964-e936-44d3-9f2e-78705442a532-xtables-lock\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.115279 kubelet[2472]: I0209 19:06:34.115274 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/b72ad964-e936-44d3-9f2e-78705442a532-host-proc-sys-kernel\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.115347 kubelet[2472]: I0209 19:06:34.115341 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/b72ad964-e936-44d3-9f2e-78705442a532-cilium-config-path\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.115413 kubelet[2472]: I0209 19:06:34.115400 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/b72ad964-e936-44d3-9f2e-78705442a532-bpf-maps\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.115469 kubelet[2472]: I0209 19:06:34.115463 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/b72ad964-e936-44d3-9f2e-78705442a532-cilium-ipsec-secrets\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.115532 kubelet[2472]: I0209 19:06:34.115526 2472 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/b72ad964-e936-44d3-9f2e-78705442a532-cilium-run\") pod \"cilium-cgbjf\" (UID: \"b72ad964-e936-44d3-9f2e-78705442a532\") " pod="kube-system/cilium-cgbjf" Feb 9 19:06:34.380484 env[1356]: time="2024-02-09T19:06:34.380342171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cgbjf,Uid:b72ad964-e936-44d3-9f2e-78705442a532,Namespace:kube-system,Attempt:0,}" Feb 9 19:06:34.413650 env[1356]: time="2024-02-09T19:06:34.413573766Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:06:34.413650 env[1356]: time="2024-02-09T19:06:34.413612167Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:06:34.413915 env[1356]: time="2024-02-09T19:06:34.413627367Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:06:34.413915 env[1356]: time="2024-02-09T19:06:34.413831471Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82 pid=4359 runtime=io.containerd.runc.v2 Feb 9 19:06:34.427303 systemd[1]: Started cri-containerd-9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82.scope. 
Feb 9 19:06:34.447616 kubelet[2472]: W0209 19:06:34.447556 2472 manager.go:1159] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05fcedcc_c0eb_451b_a7e3_487f77d4a50b.slice/cri-containerd-19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83.scope WatchSource:0}: container "19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83" in namespace "k8s.io": not found Feb 9 19:06:34.461046 env[1356]: time="2024-02-09T19:06:34.461004116Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-cgbjf,Uid:b72ad964-e936-44d3-9f2e-78705442a532,Namespace:kube-system,Attempt:0,} returns sandbox id \"9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82\"" Feb 9 19:06:34.464782 env[1356]: time="2024-02-09T19:06:34.464752883Z" level=info msg="CreateContainer within sandbox \"9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:06:34.491868 env[1356]: time="2024-02-09T19:06:34.491820668Z" level=info msg="CreateContainer within sandbox \"9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"3238b1f30ec2cdf702d32545f8ef79167d5345ad867633333823beedfa2331d3\"" Feb 9 19:06:34.492785 env[1356]: time="2024-02-09T19:06:34.492753785Z" level=info msg="StartContainer for \"3238b1f30ec2cdf702d32545f8ef79167d5345ad867633333823beedfa2331d3\"" Feb 9 19:06:34.511235 systemd[1]: Started cri-containerd-3238b1f30ec2cdf702d32545f8ef79167d5345ad867633333823beedfa2331d3.scope. Feb 9 19:06:34.555859 env[1356]: time="2024-02-09T19:06:34.555807215Z" level=info msg="StartContainer for \"3238b1f30ec2cdf702d32545f8ef79167d5345ad867633333823beedfa2331d3\" returns successfully" Feb 9 19:06:34.560917 systemd[1]: cri-containerd-3238b1f30ec2cdf702d32545f8ef79167d5345ad867633333823beedfa2331d3.scope: Deactivated successfully. 
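The failed watch event above names the cgroup path layout these entries use: a kubepods-burstable-pod<uid>.slice whose pod UID has its dashes replaced by underscores, with the container in a cri-containerd-<id>.scope beneath it. A minimal Go sketch, intended only to illustrate that naming, rebuilds the exact path from the watch event:

package main

import (
	"fmt"
	"strings"
)

// burstablePodCgroup rebuilds the cgroup path format visible in the watch
// event above. Illustrative sketch only.
func burstablePodCgroup(podUID, containerID string) string {
	podSlice := "kubepods-burstable-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
	return "/kubepods.slice/kubepods-burstable.slice/" + podSlice +
		"/cri-containerd-" + containerID + ".scope"
}

func main() {
	// Prints the same Name field reported in the failed watch event.
	fmt.Println(burstablePodCgroup(
		"05fcedcc-c0eb-451b-a7e3-487f77d4a50b",
		"19637b5bc82d751f7597c80fa498d273395e52e689fa87b32e14ab5b59c73d83"))
}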
Feb 9 19:06:34.603500 kubelet[2472]: I0209 19:06:34.603465 2472 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID=05fcedcc-c0eb-451b-a7e3-487f77d4a50b path="/var/lib/kubelet/pods/05fcedcc-c0eb-451b-a7e3-487f77d4a50b/volumes" Feb 9 19:06:34.629468 env[1356]: time="2024-02-09T19:06:34.629409834Z" level=info msg="shim disconnected" id=3238b1f30ec2cdf702d32545f8ef79167d5345ad867633333823beedfa2331d3 Feb 9 19:06:34.629468 env[1356]: time="2024-02-09T19:06:34.629471635Z" level=warning msg="cleaning up after shim disconnected" id=3238b1f30ec2cdf702d32545f8ef79167d5345ad867633333823beedfa2331d3 namespace=k8s.io Feb 9 19:06:34.629751 env[1356]: time="2024-02-09T19:06:34.629482435Z" level=info msg="cleaning up dead shim" Feb 9 19:06:34.639136 env[1356]: time="2024-02-09T19:06:34.639019906Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:34Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4441 runtime=io.containerd.runc.v2\n" Feb 9 19:06:34.727698 kubelet[2472]: E0209 19:06:34.727654 2472 kubelet.go:2760] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:06:35.013392 env[1356]: time="2024-02-09T19:06:35.013338313Z" level=info msg="CreateContainer within sandbox \"9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:06:35.046563 env[1356]: time="2024-02-09T19:06:35.046500401Z" level=info msg="CreateContainer within sandbox \"9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3907b006b57d3eaf61c22aae07beb1de2a770eefb4c208b00358f1a6f0c773a6\"" Feb 9 19:06:35.047552 env[1356]: time="2024-02-09T19:06:35.047510119Z" level=info msg="StartContainer for \"3907b006b57d3eaf61c22aae07beb1de2a770eefb4c208b00358f1a6f0c773a6\"" Feb 9 19:06:35.065964 systemd[1]: Started cri-containerd-3907b006b57d3eaf61c22aae07beb1de2a770eefb4c208b00358f1a6f0c773a6.scope. Feb 9 19:06:35.113428 env[1356]: time="2024-02-09T19:06:35.113368087Z" level=info msg="StartContainer for \"3907b006b57d3eaf61c22aae07beb1de2a770eefb4c208b00358f1a6f0c773a6\" returns successfully" Feb 9 19:06:35.113924 systemd[1]: cri-containerd-3907b006b57d3eaf61c22aae07beb1de2a770eefb4c208b00358f1a6f0c773a6.scope: Deactivated successfully. 
Feb 9 19:06:35.144612 env[1356]: time="2024-02-09T19:06:35.144516340Z" level=info msg="shim disconnected" id=3907b006b57d3eaf61c22aae07beb1de2a770eefb4c208b00358f1a6f0c773a6 Feb 9 19:06:35.144612 env[1356]: time="2024-02-09T19:06:35.144612342Z" level=warning msg="cleaning up after shim disconnected" id=3907b006b57d3eaf61c22aae07beb1de2a770eefb4c208b00358f1a6f0c773a6 namespace=k8s.io Feb 9 19:06:35.144946 env[1356]: time="2024-02-09T19:06:35.144626442Z" level=info msg="cleaning up dead shim" Feb 9 19:06:35.152134 env[1356]: time="2024-02-09T19:06:35.152089674Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:35Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4505 runtime=io.containerd.runc.v2\n" Feb 9 19:06:36.017747 env[1356]: time="2024-02-09T19:06:36.017690530Z" level=info msg="CreateContainer within sandbox \"9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:06:36.060013 env[1356]: time="2024-02-09T19:06:36.059955673Z" level=info msg="CreateContainer within sandbox \"9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"dbed2a73dd22ac083c8555e05dd04035b4805cee1e9323b5579a86678178675a\"" Feb 9 19:06:36.060704 env[1356]: time="2024-02-09T19:06:36.060653785Z" level=info msg="StartContainer for \"dbed2a73dd22ac083c8555e05dd04035b4805cee1e9323b5579a86678178675a\"" Feb 9 19:06:36.087798 systemd[1]: Started cri-containerd-dbed2a73dd22ac083c8555e05dd04035b4805cee1e9323b5579a86678178675a.scope. Feb 9 19:06:36.122426 systemd[1]: cri-containerd-dbed2a73dd22ac083c8555e05dd04035b4805cee1e9323b5579a86678178675a.scope: Deactivated successfully. Feb 9 19:06:36.124794 env[1356]: time="2024-02-09T19:06:36.124751111Z" level=info msg="StartContainer for \"dbed2a73dd22ac083c8555e05dd04035b4805cee1e9323b5579a86678178675a\" returns successfully" Feb 9 19:06:36.155714 env[1356]: time="2024-02-09T19:06:36.155659854Z" level=info msg="shim disconnected" id=dbed2a73dd22ac083c8555e05dd04035b4805cee1e9323b5579a86678178675a Feb 9 19:06:36.156001 env[1356]: time="2024-02-09T19:06:36.155969659Z" level=warning msg="cleaning up after shim disconnected" id=dbed2a73dd22ac083c8555e05dd04035b4805cee1e9323b5579a86678178675a namespace=k8s.io Feb 9 19:06:36.156001 env[1356]: time="2024-02-09T19:06:36.155994860Z" level=info msg="cleaning up dead shim" Feb 9 19:06:36.164768 env[1356]: time="2024-02-09T19:06:36.164727413Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:36Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4561 runtime=io.containerd.runc.v2\n" Feb 9 19:06:36.225220 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dbed2a73dd22ac083c8555e05dd04035b4805cee1e9323b5579a86678178675a-rootfs.mount: Deactivated successfully. Feb 9 19:06:37.025924 env[1356]: time="2024-02-09T19:06:37.025850938Z" level=info msg="CreateContainer within sandbox \"9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:06:37.065715 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4016021673.mount: Deactivated successfully. Feb 9 19:06:37.068628 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount258667885.mount: Deactivated successfully. 
Feb 9 19:06:37.076065 env[1356]: time="2024-02-09T19:06:37.076009511Z" level=info msg="CreateContainer within sandbox \"9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"8ebb42c9bf13a89baa0bb3544e133a8c40e6fe6a13692e2ee155cb90c6698912\"" Feb 9 19:06:37.076696 env[1356]: time="2024-02-09T19:06:37.076663622Z" level=info msg="StartContainer for \"8ebb42c9bf13a89baa0bb3544e133a8c40e6fe6a13692e2ee155cb90c6698912\"" Feb 9 19:06:37.097787 systemd[1]: Started cri-containerd-8ebb42c9bf13a89baa0bb3544e133a8c40e6fe6a13692e2ee155cb90c6698912.scope. Feb 9 19:06:37.128583 systemd[1]: cri-containerd-8ebb42c9bf13a89baa0bb3544e133a8c40e6fe6a13692e2ee155cb90c6698912.scope: Deactivated successfully. Feb 9 19:06:37.131150 env[1356]: time="2024-02-09T19:06:37.131068969Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb72ad964_e936_44d3_9f2e_78705442a532.slice/cri-containerd-8ebb42c9bf13a89baa0bb3544e133a8c40e6fe6a13692e2ee155cb90c6698912.scope/memory.events\": no such file or directory" Feb 9 19:06:37.135894 env[1356]: time="2024-02-09T19:06:37.135854552Z" level=info msg="StartContainer for \"8ebb42c9bf13a89baa0bb3544e133a8c40e6fe6a13692e2ee155cb90c6698912\" returns successfully" Feb 9 19:06:37.167168 env[1356]: time="2024-02-09T19:06:37.167114496Z" level=info msg="shim disconnected" id=8ebb42c9bf13a89baa0bb3544e133a8c40e6fe6a13692e2ee155cb90c6698912 Feb 9 19:06:37.167168 env[1356]: time="2024-02-09T19:06:37.167167997Z" level=warning msg="cleaning up after shim disconnected" id=8ebb42c9bf13a89baa0bb3544e133a8c40e6fe6a13692e2ee155cb90c6698912 namespace=k8s.io Feb 9 19:06:37.167481 env[1356]: time="2024-02-09T19:06:37.167178797Z" level=info msg="cleaning up dead shim" Feb 9 19:06:37.175158 env[1356]: time="2024-02-09T19:06:37.175106635Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:37Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4620 runtime=io.containerd.runc.v2\n" Feb 9 19:06:37.225604 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8ebb42c9bf13a89baa0bb3544e133a8c40e6fe6a13692e2ee155cb90c6698912-rootfs.mount: Deactivated successfully. Feb 9 19:06:38.026802 env[1356]: time="2024-02-09T19:06:38.026744346Z" level=info msg="CreateContainer within sandbox \"9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:06:38.066582 env[1356]: time="2024-02-09T19:06:38.066516632Z" level=info msg="CreateContainer within sandbox \"9db85d1a8c9b8f19b0997f1944d7727765513381671d961c4d508fb963560e82\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"4357c894f5acfb50e35fb979ac261c98f7a55f23c7ac6f102669278a966fc78c\"" Feb 9 19:06:38.068587 env[1356]: time="2024-02-09T19:06:38.067269844Z" level=info msg="StartContainer for \"4357c894f5acfb50e35fb979ac261c98f7a55f23c7ac6f102669278a966fc78c\"" Feb 9 19:06:38.097174 systemd[1]: Started cri-containerd-4357c894f5acfb50e35fb979ac261c98f7a55f23c7ac6f102669278a966fc78c.scope. 
Feb 9 19:06:38.146787 env[1356]: time="2024-02-09T19:06:38.146732113Z" level=info msg="StartContainer for \"4357c894f5acfb50e35fb979ac261c98f7a55f23c7ac6f102669278a966fc78c\" returns successfully" Feb 9 19:06:38.178149 kubelet[2472]: I0209 19:06:38.178108 2472 setters.go:548] "Node became not ready" node="ci-3510.3.2-a-54659eee1f" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:06:38.178029953 +0000 UTC m=+163.772686501 LastTransitionTime:2024-02-09 19:06:38.178029953 +0000 UTC m=+163.772686501 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:06:38.638583 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 19:06:39.487531 systemd[1]: run-containerd-runc-k8s.io-4357c894f5acfb50e35fb979ac261c98f7a55f23c7ac6f102669278a966fc78c-runc.zL6JHa.mount: Deactivated successfully. Feb 9 19:06:41.218988 systemd-networkd[1492]: lxc_health: Link UP Feb 9 19:06:41.270025 systemd-networkd[1492]: lxc_health: Gained carrier Feb 9 19:06:41.270633 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:06:41.664117 systemd[1]: run-containerd-runc-k8s.io-4357c894f5acfb50e35fb979ac261c98f7a55f23c7ac6f102669278a966fc78c-runc.N6fMda.mount: Deactivated successfully. Feb 9 19:06:42.406904 kubelet[2472]: I0209 19:06:42.406865 2472 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-cgbjf" podStartSLOduration=8.406822155 podCreationTimestamp="2024-02-09 19:06:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:39.042927346 +0000 UTC m=+164.637583894" watchObservedRunningTime="2024-02-09 19:06:42.406822155 +0000 UTC m=+168.001478803" Feb 9 19:06:43.072797 systemd-networkd[1492]: lxc_health: Gained IPv6LL Feb 9 19:06:43.954247 systemd[1]: run-containerd-runc-k8s.io-4357c894f5acfb50e35fb979ac261c98f7a55f23c7ac6f102669278a966fc78c-runc.ctp42e.mount: Deactivated successfully. Feb 9 19:06:46.102390 systemd[1]: run-containerd-runc-k8s.io-4357c894f5acfb50e35fb979ac261c98f7a55f23c7ac6f102669278a966fc78c-runc.qBE3NM.mount: Deactivated successfully. Feb 9 19:06:48.258707 systemd[1]: run-containerd-runc-k8s.io-4357c894f5acfb50e35fb979ac261c98f7a55f23c7ac6f102669278a966fc78c-runc.cw3soi.mount: Deactivated successfully. Feb 9 19:06:48.426161 sshd[4301]: pam_unix(sshd:session): session closed for user core Feb 9 19:06:48.429417 systemd[1]: sshd@23-10.200.8.4:22-10.200.12.6:46508.service: Deactivated successfully. Feb 9 19:06:48.430335 systemd[1]: session-26.scope: Deactivated successfully. Feb 9 19:06:48.431133 systemd-logind[1334]: Session 26 logged out. Waiting for processes to exit. Feb 9 19:06:48.432002 systemd-logind[1334]: Removed session 26. 
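In the pod_startup_latency_tracker entry above, the podStartSLOduration of about 8.4 s appears to be the gap between podCreationTimestamp and watchObservedRunningTime (no image pull happened, so the pull interval contributes nothing). A small Go check of that arithmetic, under that assumption:

package main

import (
	"fmt"
	"time"
)

func main() {
	const layout = "2006-01-02 15:04:05.999999999 -0700 MST"
	created, err := time.Parse(layout, "2024-02-09 19:06:34 +0000 UTC")
	if err != nil {
		panic(err)
	}
	watched, err := time.Parse(layout, "2024-02-09 19:06:42.406822155 +0000 UTC")
	if err != nil {
		panic(err)
	}
	// Prints 8.406822155, matching podStartSLOduration in the kubelet entry.
	fmt.Println(watched.Sub(created).Seconds())
}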
Feb 9 19:06:52.551399 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.563870 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.577715 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.590368 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.603577 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.620436 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.625475 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.630618 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.630764 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.630898 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.640151 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.649942 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.654909 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.664364 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.673942 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.679292 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.686363 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.686531 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.686689 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.686825 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.686956 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.696186 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.721619 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.721831 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.721966 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.722101 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.722232 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.722360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.731667 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.731970 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.742107 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.747793 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.748040 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.763683 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.791611 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.791833 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.791973 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.792110 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.792244 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.792376 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.792515 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.801425 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.812173 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.812425 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.817669 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.827870 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.847697 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.847953 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.848093 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.848235 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.848360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.848472 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.863634 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.863900 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.864047 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.873662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.873911 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.883449 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.883737 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.893435 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.905584 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.905737 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.905875 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.915303 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.930715 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.930882 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.931018 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.931151 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.940835 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.957320 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.957486 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.957635 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.957768 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.967332 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.982821 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:52.992619 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.011743 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.020834 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.020970 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:06:53.021108 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.021237 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.021367 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.021501 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.021667 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.027203 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.034143 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.034302 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.044858 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.045117 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.053955 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.054215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.064563 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.079292 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.079525 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.079682 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.079817 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.094740 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.127798 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.133074 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.133235 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.133374 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.133517 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.133662 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.133783 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.133916 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.134048 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.143535 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.143872 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.153265 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.153500 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.163042 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.186988 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.187247 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.187433 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.187635 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.187826 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.188000 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.197432 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.225320 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.235249 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.245357 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.245747 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.245910 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.246047 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.246178 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.246306 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.246435 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.246576 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.255826 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.256051 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.266585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:53.271828 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.527982 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.612739 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.613308 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.613606 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.613747 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.613869 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.613979 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.614086 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.614192 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.614295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.614400 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.614506 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.614630 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#242 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.614739 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#243 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.614923 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#244 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.615046 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#245 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.615153 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#246 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.615260 env[1356]: time="2024-02-09T19:06:54.573640383Z" level=info msg="StopPodSandbox for \"3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e\"" Feb 9 19:06:54.615260 env[1356]: time="2024-02-09T19:06:54.573861386Z" level=info msg="TearDown network for sandbox \"3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e\" successfully" Feb 9 19:06:54.615260 env[1356]: time="2024-02-09T19:06:54.573936188Z" level=info msg="StopPodSandbox for \"3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e\" returns successfully" Feb 9 19:06:54.615260 env[1356]: time="2024-02-09T19:06:54.574590097Z" level=info msg="RemovePodSandbox for \"3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e\"" Feb 9 19:06:54.615260 env[1356]: time="2024-02-09T19:06:54.574616898Z" level=info msg="Forcibly stopping sandbox \"3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e\"" Feb 9 19:06:54.615260 env[1356]: time="2024-02-09T19:06:54.574762500Z" level=info msg="TearDown network for sandbox \"3ab9c409083e79143de3fd25cbde3861c21e8abd5bcd9e7d067c9298b9cd506e\" successfully" Feb 9 19:06:54.626043 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#246 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.626264 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#245 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.636038 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#244 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.636280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#243 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.640759 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#242 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.652392 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.652710 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.664358 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#242 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.664632 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#243 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.674111 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#244 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.674362 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#245 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.684015 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#246 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.684274 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.689334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.708223 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#246 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.708474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#245 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.708625 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#244 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.717647 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#243 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.717894 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#242 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.728458 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#241 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.728715 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#240 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:06:54.733817 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#242 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001