Feb 9 19:24:48.018132 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024
Feb 9 19:24:48.018173 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:24:48.018182 kernel: BIOS-provided physical RAM map:
Feb 9 19:24:48.018191 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 9 19:24:48.018196 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 9 19:24:48.018205 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 9 19:24:48.018216 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 9 19:24:48.018223 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 9 19:24:48.018231 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 9 19:24:48.018237 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 9 19:24:48.018244 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 9 19:24:48.018252 kernel: printk: bootconsole [earlyser0] enabled
Feb 9 19:24:48.018257 kernel: NX (Execute Disable) protection: active
Feb 9 19:24:48.018263 kernel: efi: EFI v2.70 by Microsoft
Feb 9 19:24:48.018276 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018
Feb 9 19:24:48.018285 kernel: random: crng init done
Feb 9 19:24:48.018292 kernel: SMBIOS 3.1.0 present.
Feb 9 19:24:48.018301 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 9 19:24:48.018308 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 9 19:24:48.018321 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 9 19:24:48.018328 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 9 19:24:48.018334 kernel: Hyper-V: Nested features: 0x1e0101
Feb 9 19:24:48.018342 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 9 19:24:48.018348 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 9 19:24:48.018355 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 9 19:24:48.018361 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 9 19:24:48.018368 kernel: tsc: Detected 2593.906 MHz processor
Feb 9 19:24:48.018374 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 9 19:24:48.018381 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 9 19:24:48.018387 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 9 19:24:48.018393 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 9 19:24:48.018399 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 9 19:24:48.018407 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 9 19:24:48.018416 kernel: Using GB pages for direct mapping
Feb 9 19:24:48.018424 kernel: Secure boot disabled
Feb 9 19:24:48.018432 kernel: ACPI: Early table checksum verification disabled
Feb 9 19:24:48.018439 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 9 19:24:48.018445 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:24:48.018453 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:24:48.018461 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 9 19:24:48.018473 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 9 19:24:48.018482 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:24:48.018489 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:24:48.018498 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:24:48.018506 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:24:48.018513 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:24:48.018525 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:24:48.018532 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 9 19:24:48.018539 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 9 19:24:48.018549 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 9 19:24:48.018556 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 9 19:24:48.018564 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 9 19:24:48.018573 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 9 19:24:48.018580 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 9 19:24:48.018591 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 9 19:24:48.018598 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 9 19:24:48.018607 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 9 19:24:48.018615 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 9 19:24:48.018622 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 9 19:24:48.018632 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 9 19:24:48.018639 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 9 19:24:48.018647 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 9 19:24:48.018656 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 9 19:24:48.018664 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 9 19:24:48.018674 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 9 19:24:48.018681 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 9 19:24:48.018689 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 9 19:24:48.018698 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 9 19:24:48.018705 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 9 19:24:48.018713 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 9 19:24:48.018722 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 9 19:24:48.018728 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 9 19:24:48.018740 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 9 19:24:48.018747 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 9 19:24:48.018755 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 9 19:24:48.018764 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 9 19:24:48.018773 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 9 19:24:48.018781 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 9 19:24:48.018791 kernel: Zone ranges:
Feb 9 19:24:48.018799 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Feb 9 19:24:48.018807 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Feb 9 19:24:48.018818 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:24:48.018828 kernel: Movable zone start for each node
Feb 9 19:24:48.018835 kernel: Early memory node ranges
Feb 9 19:24:48.018844 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 9 19:24:48.018852 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 9 19:24:48.018859 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 9 19:24:48.018869 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 9 19:24:48.018875 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 9 19:24:48.018884 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 9 19:24:48.018895 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 9 19:24:48.018903 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 9 19:24:48.018911 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 9 19:24:48.018918 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 9 19:24:48.018928 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 9 19:24:48.018935 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 9 19:24:48.018942 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 9 19:24:48.018952 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 9 19:24:48.018958 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 9 19:24:48.018967 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 9 19:24:48.018976 kernel: Booting paravirtualized kernel on Hyper-V
Feb 9 19:24:48.018984 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 9 19:24:48.018991 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 9 19:24:48.018999 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 9 19:24:48.019008 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 9 19:24:48.019014 kernel: pcpu-alloc: [0] 0 1
Feb 9 19:24:48.019025 kernel: Hyper-V: PV spinlocks enabled
Feb 9 19:24:48.019031 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 9 19:24:48.019044 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 9 19:24:48.019050 kernel: Policy zone: Normal
Feb 9 19:24:48.019059 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:24:48.019068 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 9 19:24:48.019075 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 9 19:24:48.019085 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 9 19:24:48.019092 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 9 19:24:48.019101 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 9 19:24:48.019111 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 9 19:24:48.019118 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 9 19:24:48.019141 kernel: ftrace: allocated 135 pages with 4 groups
Feb 9 19:24:48.019153 kernel: rcu: Hierarchical RCU implementation.
Feb 9 19:24:48.019164 kernel: rcu: RCU event tracing is enabled.
Feb 9 19:24:48.019174 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 9 19:24:48.019182 kernel: Rude variant of Tasks RCU enabled.
Feb 9 19:24:48.019192 kernel: Tracing variant of Tasks RCU enabled.
Feb 9 19:24:48.019201 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 9 19:24:48.019211 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 9 19:24:48.019219 kernel: Using NULL legacy PIC
Feb 9 19:24:48.019231 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 9 19:24:48.019239 kernel: Console: colour dummy device 80x25
Feb 9 19:24:48.019248 kernel: printk: console [tty1] enabled
Feb 9 19:24:48.019257 kernel: printk: console [ttyS0] enabled
Feb 9 19:24:48.019264 kernel: printk: bootconsole [earlyser0] disabled
Feb 9 19:24:48.019276 kernel: ACPI: Core revision 20210730
Feb 9 19:24:48.019283 kernel: Failed to register legacy timer interrupt
Feb 9 19:24:48.019294 kernel: APIC: Switch to symmetric I/O mode setup
Feb 9 19:24:48.019301 kernel: Hyper-V: Using IPI hypercalls
Feb 9 19:24:48.019311 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Feb 9 19:24:48.019319 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 9 19:24:48.019326 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 9 19:24:48.019336 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 9 19:24:48.019344 kernel: Spectre V2 : Mitigation: Retpolines
Feb 9 19:24:48.019354 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 9 19:24:48.019363 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 9 19:24:48.019371 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 9 19:24:48.019381 kernel: RETBleed: Vulnerable
Feb 9 19:24:48.019388 kernel: Speculative Store Bypass: Vulnerable
Feb 9 19:24:48.019396 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:24:48.019406 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 9 19:24:48.019413 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 9 19:24:48.019421 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 9 19:24:48.019430 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 9 19:24:48.019437 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 9 19:24:48.019449 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 9 19:24:48.019457 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 9 19:24:48.019464 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 9 19:24:48.019471 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 9 19:24:48.019478 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 9 19:24:48.019485 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 9 19:24:48.019495 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 9 19:24:48.019504 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 9 19:24:48.019511 kernel: Freeing SMP alternatives memory: 32K
Feb 9 19:24:48.019518 kernel: pid_max: default: 32768 minimum: 301
Feb 9 19:24:48.019525 kernel: LSM: Security Framework initializing
Feb 9 19:24:48.019532 kernel: SELinux: Initializing.
Feb 9 19:24:48.019544 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:24:48.019552 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 9 19:24:48.019560 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 9 19:24:48.019568 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 9 19:24:48.019577 kernel: signal: max sigframe size: 3632
Feb 9 19:24:48.019584 kernel: rcu: Hierarchical SRCU implementation.
Feb 9 19:24:48.019595 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 9 19:24:48.019603 kernel: smp: Bringing up secondary CPUs ...
Feb 9 19:24:48.019613 kernel: x86: Booting SMP configuration:
Feb 9 19:24:48.019623 kernel: .... node #0, CPUs: #1
Feb 9 19:24:48.019635 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 9 19:24:48.019645 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 9 19:24:48.019653 kernel: smp: Brought up 1 node, 2 CPUs
Feb 9 19:24:48.019664 kernel: smpboot: Max logical packages: 1
Feb 9 19:24:48.019671 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 9 19:24:48.019679 kernel: devtmpfs: initialized
Feb 9 19:24:48.019688 kernel: x86/mm: Memory block size: 128MB
Feb 9 19:24:48.019695 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 9 19:24:48.019708 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 9 19:24:48.019716 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 9 19:24:48.019726 kernel: pinctrl core: initialized pinctrl subsystem
Feb 9 19:24:48.019737 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 9 19:24:48.019746 kernel: audit: initializing netlink subsys (disabled)
Feb 9 19:24:48.019756 kernel: audit: type=2000 audit(1707506687.023:1): state=initialized audit_enabled=0 res=1
Feb 9 19:24:48.019768 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 9 19:24:48.019779 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 9 19:24:48.019788 kernel: cpuidle: using governor menu
Feb 9 19:24:48.019801 kernel: ACPI: bus type PCI registered
Feb 9 19:24:48.019810 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 9 19:24:48.019819 kernel: dca service started, version 1.12.1
Feb 9 19:24:48.019829 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 9 19:24:48.019838 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 9 19:24:48.019847 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 9 19:24:48.019856 kernel: ACPI: Added _OSI(Module Device)
Feb 9 19:24:48.019866 kernel: ACPI: Added _OSI(Processor Device)
Feb 9 19:24:48.019873 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 9 19:24:48.019882 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 9 19:24:48.019891 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 9 19:24:48.019900 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 9 19:24:48.019910 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 9 19:24:48.019918 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 9 19:24:48.019926 kernel: ACPI: Interpreter enabled
Feb 9 19:24:48.019936 kernel: ACPI: PM: (supports S0 S5)
Feb 9 19:24:48.019945 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 9 19:24:48.019953 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 9 19:24:48.019967 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 9 19:24:48.019976 kernel: iommu: Default domain type: Translated
Feb 9 19:24:48.019984 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 9 19:24:48.019995 kernel: vgaarb: loaded
Feb 9 19:24:48.020003 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 9 19:24:48.020012 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Feb 9 19:24:48.020022 kernel: PTP clock support registered
Feb 9 19:24:48.020031 kernel: Registered efivars operations
Feb 9 19:24:48.020042 kernel: PCI: Using ACPI for IRQ routing
Feb 9 19:24:48.020051 kernel: PCI: System does not support PCI
Feb 9 19:24:48.020065 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 9 19:24:48.020076 kernel: VFS: Disk quotas dquot_6.6.0
Feb 9 19:24:48.020088 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 9 19:24:48.020099 kernel: pnp: PnP ACPI init
Feb 9 19:24:48.020108 kernel: pnp: PnP ACPI: found 3 devices
Feb 9 19:24:48.020118 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 9 19:24:48.020127 kernel: NET: Registered PF_INET protocol family
Feb 9 19:24:48.020147 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 9 19:24:48.020158 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 9 19:24:48.020169 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 9 19:24:48.020178 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 9 19:24:48.020186 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 9 19:24:48.020193 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 9 19:24:48.020204 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:24:48.020212 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 9 19:24:48.020221 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 9 19:24:48.020229 kernel: NET: Registered PF_XDP protocol family
Feb 9 19:24:48.020241 kernel: PCI: CLS 0 bytes, default 64
Feb 9 19:24:48.020249 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 9 19:24:48.020259 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 9 19:24:48.020266 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 9 19:24:48.020275 kernel: Initialise system trusted keyrings
Feb 9 19:24:48.020284 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 9 19:24:48.020294 kernel: Key type asymmetric registered
Feb 9 19:24:48.020301 kernel: Asymmetric key parser 'x509' registered
Feb 9 19:24:48.020311 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 9 19:24:48.020322 kernel: io scheduler mq-deadline registered
Feb 9 19:24:48.020331 kernel: io scheduler kyber registered
Feb 9 19:24:48.020338 kernel: io scheduler bfq registered
Feb 9 19:24:48.020345 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 9 19:24:48.020356 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 9 19:24:48.020364 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 9 19:24:48.020374 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 9 19:24:48.020381 kernel: i8042: PNP: No PS/2 controller found.
Feb 9 19:24:48.020512 kernel: rtc_cmos 00:02: registered as rtc0
Feb 9 19:24:48.020601 kernel: rtc_cmos 00:02: setting system clock to 2024-02-09T19:24:47 UTC (1707506687)
Feb 9 19:24:48.020684 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 9 19:24:48.020696 kernel: fail to initialize ptp_kvm
Feb 9 19:24:48.020704 kernel: intel_pstate: CPU model not supported
Feb 9 19:24:48.020715 kernel: efifb: probing for efifb
Feb 9 19:24:48.020722 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 9 19:24:48.020733 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 9 19:24:48.020742 kernel: efifb: scrolling: redraw
Feb 9 19:24:48.020753 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 9 19:24:48.020760 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:24:48.020771 kernel: fb0: EFI VGA frame buffer device
Feb 9 19:24:48.020778 kernel: pstore: Registered efi as persistent store backend
Feb 9 19:24:48.020789 kernel: NET: Registered PF_INET6 protocol family
Feb 9 19:24:48.020796 kernel: Segment Routing with IPv6
Feb 9 19:24:48.020805 kernel: In-situ OAM (IOAM) with IPv6
Feb 9 19:24:48.020813 kernel: NET: Registered PF_PACKET protocol family
Feb 9 19:24:48.020824 kernel: Key type dns_resolver registered
Feb 9 19:24:48.020833 kernel: IPI shorthand broadcast: enabled
Feb 9 19:24:48.020841 kernel: sched_clock: Marking stable (733127900, 21062200)->(930853700, -176663600)
Feb 9 19:24:48.020851 kernel: registered taskstats version 1
Feb 9 19:24:48.020862 kernel: Loading compiled-in X.509 certificates
Feb 9 19:24:48.020869 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a'
Feb 9 19:24:48.020877 kernel: Key type .fscrypt registered
Feb 9 19:24:48.020887 kernel: Key type fscrypt-provisioning registered
Feb 9 19:24:48.020897 kernel: pstore: Using crash dump compression: deflate
Feb 9 19:24:48.020907 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 9 19:24:48.020914 kernel: ima: Allocated hash algorithm: sha1
Feb 9 19:24:48.020924 kernel: ima: No architecture policies found
Feb 9 19:24:48.020934 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 9 19:24:48.020942 kernel: Write protecting the kernel read-only data: 28672k
Feb 9 19:24:48.020951 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 9 19:24:48.020959 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 9 19:24:48.020970 kernel: Run /init as init process
Feb 9 19:24:48.020977 kernel: with arguments:
Feb 9 19:24:48.020985 kernel: /init
Feb 9 19:24:48.020996 kernel: with environment:
Feb 9 19:24:48.021006 kernel: HOME=/
Feb 9 19:24:48.021013 kernel: TERM=linux
Feb 9 19:24:48.021021 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 9 19:24:48.021032 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:24:48.021045 systemd[1]: Detected virtualization microsoft.
Feb 9 19:24:48.021053 systemd[1]: Detected architecture x86-64.
Feb 9 19:24:48.021065 systemd[1]: Running in initrd.
Feb 9 19:24:48.021074 systemd[1]: No hostname configured, using default hostname.
Feb 9 19:24:48.021083 systemd[1]: Hostname set to <localhost>.
Feb 9 19:24:48.021091 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:24:48.021101 systemd[1]: Queued start job for default target initrd.target.
Feb 9 19:24:48.021109 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:24:48.021120 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:24:48.021127 systemd[1]: Reached target paths.target.
Feb 9 19:24:48.021143 systemd[1]: Reached target slices.target.
Feb 9 19:24:48.021156 systemd[1]: Reached target swap.target.
Feb 9 19:24:48.021164 systemd[1]: Reached target timers.target.
Feb 9 19:24:48.021172 systemd[1]: Listening on iscsid.socket.
Feb 9 19:24:48.021183 systemd[1]: Listening on iscsiuio.socket.
Feb 9 19:24:48.021193 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:24:48.021202 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:24:48.021210 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:24:48.021223 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:24:48.021231 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:24:48.021241 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:24:48.021249 systemd[1]: Reached target sockets.target.
Feb 9 19:24:48.021259 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:24:48.021267 systemd[1]: Finished network-cleanup.service.
Feb 9 19:24:48.021278 systemd[1]: Starting systemd-fsck-usr.service...
Feb 9 19:24:48.021286 systemd[1]: Starting systemd-journald.service...
Feb 9 19:24:48.021297 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:24:48.021309 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:24:48.021317 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 9 19:24:48.021325 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:24:48.021335 systemd[1]: Finished systemd-fsck-usr.service.
Feb 9 19:24:48.021346 kernel: audit: type=1130 audit(1707506688.019:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.021357 systemd-journald[183]: Journal started
Feb 9 19:24:48.021403 systemd-journald[183]: Runtime Journal (/run/log/journal/ea873804b7e042b9a5984477003611f9) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:24:48.019000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.033795 systemd[1]: Started systemd-journald.service.
Feb 9 19:24:48.036363 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 9 19:24:48.038505 systemd-modules-load[184]: Inserted module 'overlay'
Feb 9 19:24:48.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.059161 kernel: audit: type=1130 audit(1707506688.035:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.059339 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 9 19:24:48.064206 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:24:48.058000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.084200 kernel: audit: type=1130 audit(1707506688.058:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.071783 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:24:48.098942 systemd-resolved[185]: Positive Trust Anchors:
Feb 9 19:24:48.098957 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:24:48.103495 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 9 19:24:48.098992 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:24:48.102423 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 9 19:24:48.073000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.132038 kernel: audit: type=1130 audit(1707506688.073:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.133382 systemd[1]: Started systemd-resolved.service.
Feb 9 19:24:48.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.137236 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:24:48.175232 kernel: audit: type=1130 audit(1707506688.137:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.175258 kernel: Bridge firewalling registered
Feb 9 19:24:48.175275 kernel: audit: type=1130 audit(1707506688.157:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.151786 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 9 19:24:48.155600 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 9 19:24:48.158506 systemd[1]: Starting dracut-cmdline.service...
Feb 9 19:24:48.190160 kernel: SCSI subsystem initialized
Feb 9 19:24:48.190210 dracut-cmdline[200]: dracut-dracut-053
Feb 9 19:24:48.192701 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:24:48.217955 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 9 19:24:48.217988 kernel: device-mapper: uevent: version 1.0.3
Feb 9 19:24:48.222992 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 9 19:24:48.227291 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 9 19:24:48.230701 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:24:48.254683 kernel: audit: type=1130 audit(1707506688.234:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.235646 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:24:48.259311 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:24:48.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.275163 kernel: audit: type=1130 audit(1707506688.262:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.296157 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:24:48.309156 kernel: iscsi: registered transport (tcp)
Feb 9 19:24:48.334368 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:24:48.334418 kernel: QLogic iSCSI HBA Driver
Feb 9 19:24:48.363339 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:24:48.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.381189 kernel: audit: type=1130 audit(1707506688.365:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.379010 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:24:48.429155 kernel: raid6: avx512x4 gen() 18369 MB/s
Feb 9 19:24:48.449151 kernel: raid6: avx512x4 xor() 8732 MB/s
Feb 9 19:24:48.469149 kernel: raid6: avx512x2 gen() 18370 MB/s
Feb 9 19:24:48.490154 kernel: raid6: avx512x2 xor() 30135 MB/s
Feb 9 19:24:48.510149 kernel: raid6: avx512x1 gen() 18495 MB/s
Feb 9 19:24:48.530149 kernel: raid6: avx512x1 xor() 27489 MB/s
Feb 9 19:24:48.551151 kernel: raid6: avx2x4 gen() 18354 MB/s
Feb 9 19:24:48.571148 kernel: raid6: avx2x4 xor() 7997 MB/s
Feb 9 19:24:48.591147 kernel: raid6: avx2x2 gen() 18417 MB/s
Feb 9 19:24:48.612150 kernel: raid6: avx2x2 xor() 22296 MB/s
Feb 9 19:24:48.632147 kernel: raid6: avx2x1 gen() 13977 MB/s
Feb 9 19:24:48.652147 kernel: raid6: avx2x1 xor() 19403 MB/s
Feb 9 19:24:48.672148 kernel: raid6: sse2x4 gen() 11730 MB/s
Feb 9 19:24:48.692147 kernel: raid6: sse2x4 xor() 7350 MB/s
Feb 9 19:24:48.712146 kernel: raid6: sse2x2 gen() 12936 MB/s
Feb 9 19:24:48.733148 kernel: raid6: sse2x2 xor() 7475 MB/s
Feb 9 19:24:48.753146 kernel: raid6: sse2x1 gen() 11629 MB/s
Feb 9 19:24:48.776564 kernel: raid6: sse2x1 xor() 5925 MB/s
Feb 9 19:24:48.776594 kernel: raid6: using algorithm avx512x1 gen() 18495 MB/s
Feb 9 19:24:48.776605 kernel: raid6: .... xor() 27489 MB/s, rmw enabled
Feb 9 19:24:48.780468 kernel: raid6: using avx512x2 recovery algorithm
Feb 9 19:24:48.799158 kernel: xor: automatically using best checksumming function avx
Feb 9 19:24:48.895163 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 19:24:48.902808 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 19:24:48.904000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.906000 audit: BPF prog-id=7 op=LOAD
Feb 9 19:24:48.906000 audit: BPF prog-id=8 op=LOAD
Feb 9 19:24:48.907201 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:24:48.921872 systemd-udevd[384]: Using default interface naming scheme 'v252'.
Feb 9 19:24:48.928591 systemd[1]: Started systemd-udevd.service.
Feb 9 19:24:48.931667 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 19:24:48.930000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.951268 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation
Feb 9 19:24:48.979980 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 19:24:48.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:48.985476 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:24:49.026539 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:24:49.031000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:49.070156 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:24:49.089811 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 9 19:24:49.089858 kernel: AES CTR mode by8 optimization enabled
Feb 9 19:24:49.100152 kernel: hv_vmbus: Vmbus version:5.2
Feb 9 19:24:49.128159 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 19:24:49.144379 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 19:24:49.144426 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 9 19:24:49.147354 kernel: scsi host0: storvsc_host_t
Feb 9 19:24:49.147448 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 19:24:49.152456 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 19:24:49.158749 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 19:24:49.158799 kernel: scsi host1: storvsc_host_t
Feb 9 19:24:49.168157 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 19:24:49.183173 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 19:24:49.197438 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 9 19:24:49.197478 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 19:24:49.204097 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 19:24:49.204328 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 19:24:49.213152 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 19:24:49.225581 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 19:24:49.225751 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 19:24:49.226159 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 19:24:49.229157 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 19:24:49.234155 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 19:24:49.245148 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:24:49.250155 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 19:24:49.304036 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 19:24:49.312441 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by (udev-worker) (442)
Feb 9 19:24:49.321550 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 19:24:49.326600 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 19:24:49.334086 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 19:24:49.341110 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:24:49.359078 kernel: hv_netvsc 000d3ad7-e8f1-000d-3ad7-e8f1000d3ad7 eth0: VF slot 1 added
Feb 9 19:24:49.341936 systemd[1]: Starting disk-uuid.service...
Feb 9 19:24:49.355371 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 19:24:49.363000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:49.363000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:49.355471 systemd[1]: Finished disk-uuid.service.
Feb 9 19:24:49.398567 kernel: hv_vmbus: registering driver hv_pci
Feb 9 19:24:49.398586 kernel: hv_pci 4ebb952d-b92e-4d60-ae55-0a6cd50cc00b: PCI VMBus probing: Using version 0x10004
Feb 9 19:24:49.398723 kernel: hv_pci 4ebb952d-b92e-4d60-ae55-0a6cd50cc00b: PCI host bridge to bus b92e:00
Feb 9 19:24:49.399323 kernel: pci_bus b92e:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 9 19:24:49.400319 kernel: pci_bus b92e:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 19:24:49.401657 kernel: pci b92e:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 9 19:24:49.374641 systemd[1]: Starting verity-setup.service...
Feb 9 19:24:49.416040 kernel: pci b92e:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 9 19:24:49.421657 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 9 19:24:49.430160 kernel: pci b92e:00:02.0: enabling Extended Tags
Feb 9 19:24:49.442285 kernel: pci b92e:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at b92e:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 9 19:24:49.451336 kernel: pci_bus b92e:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 19:24:49.451489 kernel: pci b92e:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 9 19:24:49.459662 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 19:24:49.461347 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 19:24:49.467000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:49.464565 systemd[1]: Finished verity-setup.service.
Feb 9 19:24:49.577911 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 19:24:49.571580 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 19:24:49.574042 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 19:24:49.574918 systemd[1]: Starting ignition-setup.service...
Feb 9 19:24:49.579105 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 19:24:49.599154 kernel: mlx5_core b92e:00:02.0: firmware version: 14.30.1224
Feb 9 19:24:49.619579 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:24:49.619622 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:24:49.619635 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:24:49.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:49.670000 audit: BPF prog-id=9 op=LOAD
Feb 9 19:24:49.667055 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 19:24:49.673836 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:24:49.707031 systemd-networkd[712]: lo: Link UP
Feb 9 19:24:49.707044 systemd-networkd[712]: lo: Gained carrier
Feb 9 19:24:49.719344 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:24:49.707533 systemd-networkd[712]: Enumeration completed
Feb 9 19:24:49.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:49.707781 systemd-networkd[712]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:24:49.709760 systemd[1]: Started systemd-networkd.service.
Feb 9 19:24:49.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:49.717625 systemd-networkd[712]: eth0: Link UP
Feb 9 19:24:49.718757 systemd-networkd[712]: eth0: Gained carrier
Feb 9 19:24:49.719485 systemd[1]: Reached target network.target.
Feb 9 19:24:49.724382 systemd[1]: Starting iscsiuio.service...
Feb 9 19:24:49.750979 iscsid[721]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:24:49.750979 iscsid[721]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log
Feb 9 19:24:49.750979 iscsid[721]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a string with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 19:24:49.750979 iscsid[721]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 19:24:49.750979 iscsid[721]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 19:24:49.750979 iscsid[721]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:24:49.750979 iscsid[721]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 19:24:49.756000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:49.734242 systemd[1]: Started iscsiuio.service.
Feb 9 19:24:49.739560 systemd[1]: Starting iscsid.service...
Feb 9 19:24:49.751359 systemd[1]: Started iscsid.service.
Feb 9 19:24:49.770063 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:24:49.783169 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:24:49.786084 systemd-networkd[712]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:24:49.793000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:49.793618 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 19:24:49.795649 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:24:49.797843 systemd[1]: Reached target remote-fs.target.
Feb 9 19:24:49.800765 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 19:24:49.813483 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 19:24:49.815000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:49.821778 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 19:24:49.840156 kernel: mlx5_core b92e:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 9 19:24:49.842484 systemd[1]: Finished ignition-setup.service.
Feb 9 19:24:49.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:49.843399 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 19:24:49.986399 kernel: mlx5_core b92e:00:02.0: Supported tc offload range - chains: 1, prios: 1
Feb 9 19:24:49.986640 kernel: mlx5_core b92e:00:02.0: mlx5e_tc_post_act_init:40:(pid 188): firmware level support is missing
Feb 9 19:24:49.998230 kernel: hv_netvsc 000d3ad7-e8f1-000d-3ad7-e8f1000d3ad7 eth0: VF registering: eth1
Feb 9 19:24:49.998447 kernel: mlx5_core b92e:00:02.0 eth1: joined to eth0
Feb 9 19:24:49.999321 ignition[747]: Ignition 2.14.0
Feb 9 19:24:49.999329 ignition[747]: Stage: fetch-offline
Feb 9 19:24:49.999398 ignition[747]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:24:49.999440 ignition[747]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:24:50.009282 ignition[747]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:24:50.021157 kernel: mlx5_core b92e:00:02.0 enP47406s1: renamed from eth1
Feb 9 19:24:50.026431 systemd-networkd[712]: eth1: Interface name change detected, renamed to enP47406s1.
Feb 9 19:24:50.027250 ignition[747]: parsed url from cmdline: ""
Feb 9 19:24:50.027254 ignition[747]: no config URL provided
Feb 9 19:24:50.027261 ignition[747]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:24:50.027270 ignition[747]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:24:50.027275 ignition[747]: failed to fetch config: resource requires networking
Feb 9 19:24:50.039000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:50.036909 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 19:24:50.027458 ignition[747]: Ignition finished successfully
Feb 9 19:24:50.039984 systemd[1]: Starting ignition-fetch.service...
Feb 9 19:24:50.055941 ignition[755]: Ignition 2.14.0
Feb 9 19:24:50.055952 ignition[755]: Stage: fetch
Feb 9 19:24:50.056073 ignition[755]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:24:50.056109 ignition[755]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:24:50.064870 ignition[755]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:24:50.068501 ignition[755]: parsed url from cmdline: ""
Feb 9 19:24:50.068510 ignition[755]: no config URL provided
Feb 9 19:24:50.068518 ignition[755]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:24:50.068529 ignition[755]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:24:50.068569 ignition[755]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 9 19:24:50.158944 systemd-networkd[712]: enP47406s1: Link UP
Feb 9 19:24:50.161090 kernel: mlx5_core b92e:00:02.0 enP47406s1: Link up
Feb 9 19:24:50.167812 ignition[755]: GET result: OK
Feb 9 19:24:50.167842 ignition[755]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty)
Feb 9 19:24:50.231160 kernel: hv_netvsc 000d3ad7-e8f1-000d-3ad7-e8f1000d3ad7 eth0: Data path switched to VF: enP47406s1
Feb 9 19:24:50.405208 ignition[755]: opening config device: "/dev/sr0"
Feb 9 19:24:50.405683 ignition[755]: getting drive status for "/dev/sr0"
Feb 9 19:24:50.405843 ignition[755]: drive status: OK
Feb 9 19:24:50.405894 ignition[755]: mounting config device
Feb 9 19:24:50.405907 ignition[755]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure649715122"
Feb 9 19:24:50.424157 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/02/10 00:00 (1000)
Feb 9 19:24:50.424247 ignition[755]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure649715122"
Feb 9 19:24:50.425177 ignition[755]: checking for config drive
Feb 9 19:24:50.427352 ignition[755]: reading config
Feb 9 19:24:50.427730 ignition[755]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure649715122"
Feb 9 19:24:50.427850 ignition[755]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure649715122"
Feb 9 19:24:50.427865 ignition[755]: config has been read from custom data
Feb 9 19:24:50.427892 ignition[755]: parsing config with SHA512: 1a80b71c8179d8479350746def8251105645bb3803f009a35f0a6eff504e530ad059c0c2b330e4ca0798560057c605ddc01b195fd38cdaa56019c09b6395b278
Feb 9 19:24:50.447195 unknown[755]: fetched base config from "system"
Feb 9 19:24:50.447206 unknown[755]: fetched base config from "system"
Feb 9 19:24:50.447790 ignition[755]: fetch: fetch complete
Feb 9 19:24:50.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:50.447214 unknown[755]: fetched user config from "azure"
Feb 9 19:24:50.447796 ignition[755]: fetch: fetch passed
Feb 9 19:24:50.451461 systemd[1]: Finished ignition-fetch.service.
Feb 9 19:24:50.447832 ignition[755]: Ignition finished successfully
Feb 9 19:24:50.454733 systemd[1]: Starting ignition-kargs.service...
Feb 9 19:24:50.466269 ignition[762]: Ignition 2.14.0
Feb 9 19:24:50.466276 ignition[762]: Stage: kargs
Feb 9 19:24:50.466379 ignition[762]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:24:50.466403 ignition[762]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:24:50.476996 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:24:50.478063 ignition[762]: kargs: kargs passed
Feb 9 19:24:50.480482 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:24:50.483000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:50.478102 ignition[762]: Ignition finished successfully
Feb 9 19:24:50.484421 systemd[1]: tmp-ignition\x2dazure649715122.mount: Deactivated successfully.
Feb 9 19:24:50.485408 systemd[1]: Starting ignition-disks.service...
Feb 9 19:24:50.494470 ignition[768]: Ignition 2.14.0
Feb 9 19:24:50.494477 ignition[768]: Stage: disks
Feb 9 19:24:50.494566 ignition[768]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:24:50.494583 ignition[768]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:24:50.501606 systemd[1]: Finished ignition-disks.service.
Feb 9 19:24:50.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:50.497031 ignition[768]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:24:50.504086 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:24:50.499061 ignition[768]: disks: disks passed
Feb 9 19:24:50.508045 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:24:50.499099 ignition[768]: Ignition finished successfully
Feb 9 19:24:50.510051 systemd[1]: Reached target local-fs.target.
Feb 9 19:24:50.511901 systemd[1]: Reached target sysinit.target.
Feb 9 19:24:50.513830 systemd[1]: Reached target basic.target.
Feb 9 19:24:50.516416 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:24:50.627292 systemd-fsck[776]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks
Feb 9 19:24:50.634194 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:24:50.638000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:50.639671 systemd[1]: Mounting sysroot.mount...
Feb 9 19:24:50.658154 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:24:50.658954 systemd[1]: Mounted sysroot.mount.
Feb 9 19:24:50.661168 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:24:50.700784 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:24:50.702515 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 19:24:50.702743 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:24:50.702788 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:24:50.709187 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:24:50.748346 systemd-networkd[712]: enP47406s1: Gained carrier
Feb 9 19:24:50.876558 systemd-networkd[712]: eth0: Gained IPv6LL
Feb 9 19:24:50.940429 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:24:50.946148 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:24:50.963870 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (786)
Feb 9 19:24:50.963914 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:24:50.963927 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:24:50.967318 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:24:50.970764 initrd-setup-root[791]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 19:24:50.979043 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:24:50.990591 initrd-setup-root[817]: cut: /sysroot/etc/group: No such file or directory
Feb 9 19:24:50.994882 initrd-setup-root[825]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 19:24:51.000769 initrd-setup-root[833]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 19:24:51.641319 systemd[1]: Finished initrd-setup-root.service.
Feb 9 19:24:51.645000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:51.646759 systemd[1]: Starting ignition-mount.service...
Feb 9 19:24:51.649645 systemd[1]: Starting sysroot-boot.service...
Feb 9 19:24:51.657346 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:24:51.657468 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:24:51.707456 ignition[852]: INFO : Ignition 2.14.0
Feb 9 19:24:51.710253 ignition[852]: INFO : Stage: mount
Feb 9 19:24:51.712435 ignition[852]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:24:51.712435 ignition[852]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:24:51.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:51.717249 systemd[1]: Finished sysroot-boot.service.
Feb 9 19:24:51.727347 ignition[852]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:24:51.727347 ignition[852]: INFO : mount: mount passed
Feb 9 19:24:51.727347 ignition[852]: INFO : Ignition finished successfully
Feb 9 19:24:51.730422 systemd[1]: Finished ignition-mount.service.
Feb 9 19:24:51.737000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:52.950324 coreos-metadata[785]: Feb 09 19:24:52.950 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 9 19:24:52.968297 coreos-metadata[785]: Feb 09 19:24:52.968 INFO Fetch successful
Feb 9 19:24:53.001993 coreos-metadata[785]: Feb 09 19:24:53.001 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 9 19:24:53.017542 coreos-metadata[785]: Feb 09 19:24:53.017 INFO Fetch successful
Feb 9 19:24:53.039193 coreos-metadata[785]: Feb 09 19:24:53.039 INFO wrote hostname ci-3510.3.2-a-75193cbbcb to /sysroot/etc/hostname
Feb 9 19:24:53.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:53.041150 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 19:24:53.065569 kernel: kauditd_printk_skb: 25 callbacks suppressed
Feb 9 19:24:53.065599 kernel: audit: type=1130 audit(1707506693.045:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:53.046724 systemd[1]: Starting ignition-files.service...
Feb 9 19:24:53.068875 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:24:53.089675 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (864)
Feb 9 19:24:53.089717 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:24:53.089728 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:24:53.097456 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:24:53.102659 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:24:53.116641 ignition[883]: INFO : Ignition 2.14.0
Feb 9 19:24:53.116641 ignition[883]: INFO : Stage: files
Feb 9 19:24:53.120759 ignition[883]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:24:53.120759 ignition[883]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:24:53.130331 ignition[883]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:24:53.148969 ignition[883]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:24:53.152166 ignition[883]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:24:53.152166 ignition[883]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:24:53.199291 ignition[883]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:24:53.203748 ignition[883]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:24:53.207266 unknown[883]: wrote ssh authorized keys file for user: core
Feb 9 19:24:53.209731 ignition[883]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:24:53.213269 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:24:53.218703 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 9 19:24:53.860276 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 9 19:24:54.071747 ignition[883]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 9 19:24:54.080205 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 9 19:24:54.080205 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:24:54.080205 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:24:54.576278 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:24:54.685885 ignition[883]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 9 19:24:54.694263 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 9 19:24:54.694263 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:24:54.702943 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:24:54.917184 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:24:55.186657 ignition[883]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 9 19:24:55.194451 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:24:55.194451 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:24:55.194451 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:24:55.313538 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 19:24:55.728610 ignition[883]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 9 19:24:55.737037 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:24:55.737037 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:24:55.737037 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:24:55.737037 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:24:55.737037 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:24:55.757905 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:24:55.757905 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:24:55.766406 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 19:24:55.770971 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:24:55.782679 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2116439962"
Feb 9 19:24:55.792699 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (886)
Feb 9 19:24:55.792726 ignition[883]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2116439962": device or resource busy
Feb 9 19:24:55.792726 ignition[883]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2116439962", trying btrfs: device or resource busy
Feb 9 19:24:55.792726 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2116439962"
Feb 9 19:24:55.809282 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2116439962"
Feb 9 19:24:55.809282 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem2116439962"
Feb 9 19:24:55.809282 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem2116439962"
Feb 9 19:24:55.809282 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 19:24:55.809282 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:24:55.809282 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:24:55.805284 systemd[1]: mnt-oem2116439962.mount: Deactivated successfully.
Feb 9 19:24:55.828075 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4199782141"
Feb 9 19:24:55.828075 ignition[883]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4199782141": device or resource busy
Feb 9 19:24:55.828075 ignition[883]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem4199782141", trying btrfs: device or resource busy
Feb 9 19:24:55.828075 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4199782141"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem4199782141"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem4199782141"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem4199782141"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: op(12): [started] processing unit "waagent.service"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: op(12): [finished] processing unit "waagent.service"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: op(13): [started] processing unit "nvidia.service"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: op(14): [started] processing unit "prepare-cni-plugins.service"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: op(14): op(15): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: op(14): op(15): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: op(14): [finished] processing unit "prepare-cni-plugins.service"
Feb 9 19:24:55.828075 ignition[883]: INFO : files: op(16): [started] processing unit "prepare-critools.service"
Feb 9 19:24:55.822091 systemd[1]: mnt-oem4199782141.mount: Deactivated successfully.
Feb 9 19:24:55.835861 ignition[883]: INFO : files: op(16): op(17): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: op(16): op(17): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: op(16): [finished] processing unit "prepare-critools.service"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: op(18): [started] setting preset to enabled for "waagent.service"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: op(18): [finished] setting preset to enabled for "waagent.service"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: op(19): [started] setting preset to enabled for "nvidia.service"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: op(19): [finished] setting preset to enabled for "nvidia.service"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: op(1a): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: op(1a): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: op(1b): [started] setting preset to enabled for "prepare-critools.service"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: op(1b): [finished] setting preset to enabled for "prepare-critools.service"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 9 19:24:55.835861 ignition[883]: INFO : files: files passed
Feb 9 19:24:55.835861 ignition[883]: INFO : Ignition finished successfully
Feb 9 19:24:55.969511 kernel: audit: type=1130 audit(1707506695.833:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:55.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:55.824280 systemd[1]: Finished ignition-files.service.
Feb 9 19:24:55.928916 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 9 19:24:55.975935 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 9 19:24:55.976998 systemd[1]: Starting ignition-quench.service...
Feb 9 19:24:57.043642 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 9 19:24:57.048000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.043783 systemd[1]: Finished ignition-quench.service.
Feb 9 19:24:57.073653 kernel: audit: type=1130 audit(1707506697.048:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.073684 kernel: audit: type=1131 audit(1707506697.048:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.073751 initrd-setup-root-after-ignition[908]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 9 19:24:57.077541 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 9 19:24:57.100394 kernel: audit: type=1130 audit(1707506697.080:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.080000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.080456 systemd[1]: Reached target ignition-complete.target.
Feb 9 19:24:57.095839 systemd[1]: Starting initrd-parse-etc.service...
Feb 9 19:24:57.117380 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 9 19:24:57.117489 systemd[1]: Finished initrd-parse-etc.service.
Feb 9 19:24:57.123000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.124030 systemd[1]: Reached target initrd-fs.target.
Feb 9 19:24:57.151191 kernel: audit: type=1130 audit(1707506697.123:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.151223 kernel: audit: type=1131 audit(1707506697.123:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.123000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.148204 systemd[1]: Reached target initrd.target.
Feb 9 19:24:57.151215 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 9 19:24:57.151997 systemd[1]: Starting dracut-pre-pivot.service...
Feb 9 19:24:57.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.166360 systemd[1]: Finished dracut-pre-pivot.service.
Feb 9 19:24:57.184546 kernel: audit: type=1130 audit(1707506697.168:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.169628 systemd[1]: Starting initrd-cleanup.service...
Feb 9 19:24:57.194827 systemd[1]: Stopped target nss-lookup.target.
Feb 9 19:24:57.199062 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 9 19:24:57.203578 systemd[1]: Stopped target timers.target.
Feb 9 19:24:57.207357 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 9 19:24:57.209673 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 9 19:24:57.213000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.213728 systemd[1]: Stopped target initrd.target.
Feb 9 19:24:57.228784 kernel: audit: type=1131 audit(1707506697.213:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.228963 systemd[1]: Stopped target basic.target.
Feb 9 19:24:57.232910 systemd[1]: Stopped target ignition-complete.target.
Feb 9 19:24:57.237348 systemd[1]: Stopped target ignition-diskful.target.
Feb 9 19:24:57.242281 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:24:57.246713 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:24:57.250822 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:24:57.255146 systemd[1]: Stopped target sysinit.target.
Feb 9 19:24:57.259693 systemd[1]: Stopped target local-fs.target.
Feb 9 19:24:57.263684 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:24:57.267800 systemd[1]: Stopped target swap.target.
Feb 9 19:24:57.271685 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:24:57.274166 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:24:57.278000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.278319 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:24:57.293783 kernel: audit: type=1131 audit(1707506697.278:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.293843 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:24:57.296318 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:24:57.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.300402 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:24:57.303229 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:24:57.305000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.305960 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:24:57.310000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.308348 systemd[1]: Stopped ignition-files.service.
Feb 9 19:24:57.310511 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 9 19:24:57.310639 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 9 19:24:57.319000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.329185 iscsid[721]: iscsid shutting down.
Feb 9 19:24:57.322416 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:24:57.337351 ignition[921]: INFO : Ignition 2.14.0
Feb 9 19:24:57.337351 ignition[921]: INFO : Stage: umount
Feb 9 19:24:57.337351 ignition[921]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:24:57.337351 ignition[921]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:24:57.337351 ignition[921]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:24:57.344000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.349000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.357000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.327191 systemd[1]: Stopping iscsid.service...
Feb 9 19:24:57.366012 ignition[921]: INFO : umount: umount passed
Feb 9 19:24:57.366012 ignition[921]: INFO : Ignition finished successfully
Feb 9 19:24:57.365000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.365000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.370000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.374000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.331362 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:24:57.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.340846 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:24:57.340985 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:24:57.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.344955 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:24:57.345048 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:24:57.351654 systemd[1]: iscsid.service: Deactivated successfully.
Feb 9 19:24:57.351758 systemd[1]: Stopped iscsid.service.
Feb 9 19:24:57.357553 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:24:57.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.400000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.357640 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:24:57.363064 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:24:57.363159 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:24:57.426000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.366918 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:24:57.366965 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:24:57.370232 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:24:57.431000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.370271 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:24:57.435000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:24:57.374433 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:24:57.374478 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:24:57.378248 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:24:57.378296 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:24:57.384754 systemd[1]: Stopped target paths.target.
Feb 9 19:24:57.389277 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:24:57.396189 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:24:57.398553 systemd[1]: Stopped target slices.target.
Feb 9 19:24:57.400329 systemd[1]: Stopped target sockets.target.
Feb 9 19:24:57.400418 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:24:57.400452 systemd[1]: Closed iscsid.socket.
Feb 9 19:24:57.400823 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:24:57.400858 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:24:57.403447 systemd[1]: Stopping iscsiuio.service...
Feb 9 19:24:57.403890 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:24:57.403971 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:24:57.404191 systemd[1]: Stopped target network.target.
Feb 9 19:24:57.404536 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:24:57.404565 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:24:57.457000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.405067 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:24:57.405461 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:24:57.422276 systemd-networkd[712]: eth0: DHCPv6 lease lost
Feb 9 19:24:57.424264 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:24:57.424351 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:24:57.431040 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:24:57.431157 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:24:57.435758 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:24:57.435790 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:24:57.441943 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:24:57.461471 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:24:57.467000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.467000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:24:57.461529 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:24:57.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.462317 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:24:57.462360 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:24:57.501840 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:24:57.501881 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:24:57.516000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.505778 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:24:57.511080 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:24:57.527000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.511163 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 9 19:24:57.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.514579 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:24:57.514707 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:24:57.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.519039 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:24:57.519077 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:24:57.522904 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:24:57.522941 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:24:57.555000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.525056 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:24:57.557000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.561000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.525101 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:24:57.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.527243 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:24:57.568000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.568000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.527282 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:24:57.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.534945 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:24:57.534994 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:24:57.540078 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:24:57.551206 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 19:24:57.590166 kernel: hv_netvsc 000d3ad7-e8f1-000d-3ad7-e8f1000d3ad7 eth0: Data path switched from VF: enP47406s1
Feb 9 19:24:57.551264 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 19:24:57.555515 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:24:57.555566 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:24:57.559824 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:24:57.559875 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:24:57.562994 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 9 19:24:57.563489 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:24:57.563575 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:24:57.610000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:24:57.566116 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:24:57.566229 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:24:57.570655 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:24:57.570706 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:24:57.609299 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:24:57.609379 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:24:57.613053 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:24:57.615982 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:24:57.629493 systemd[1]: Switching root.
Feb 9 19:24:57.653417 systemd-journald[183]: Journal stopped
Feb 9 19:25:13.221636 systemd-journald[183]: Received SIGTERM from PID 1 (systemd).
Feb 9 19:25:13.221680 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:25:13.221699 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:25:13.221715 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 19:25:13.221729 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 19:25:13.221745 kernel: SELinux: policy capability open_perms=1
Feb 9 19:25:13.221770 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 19:25:13.221785 kernel: SELinux: policy capability always_check_network=0
Feb 9 19:25:13.221800 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 19:25:13.221817 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 19:25:13.221832 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 19:25:13.221851 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 19:25:13.221866 kernel: kauditd_printk_skb: 35 callbacks suppressed
Feb 9 19:25:13.221885 kernel: audit: type=1403 audit(1707506700.458:81): auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 19:25:13.221910 systemd[1]: Successfully loaded SELinux policy in 300.216ms.
Feb 9 19:25:13.221928 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 27.654ms.
Feb 9 19:25:13.221948 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:25:13.221965 systemd[1]: Detected virtualization microsoft.
Feb 9 19:25:13.221982 systemd[1]: Detected architecture x86-64.
Feb 9 19:25:13.221995 systemd[1]: Detected first boot.
Feb 9 19:25:13.222009 systemd[1]: Hostname set to .
Feb 9 19:25:13.222024 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:25:13.222043 kernel: audit: type=1400 audit(1707506701.451:82): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:25:13.222058 kernel: audit: type=1400 audit(1707506701.467:83): avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:25:13.222073 kernel: audit: type=1400 audit(1707506701.467:84): avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:25:13.222092 kernel: audit: type=1334 audit(1707506701.491:85): prog-id=10 op=LOAD
Feb 9 19:25:13.222107 kernel: audit: type=1334 audit(1707506701.491:86): prog-id=10 op=UNLOAD
Feb 9 19:25:13.222121 kernel: audit: type=1334 audit(1707506701.496:87): prog-id=11 op=LOAD
Feb 9 19:25:13.222133 kernel: audit: type=1334 audit(1707506701.496:88): prog-id=11 op=UNLOAD
Feb 9 19:25:13.222157 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:25:13.222186 kernel: audit: type=1400 audit(1707506703.012:89): avc: denied { associate } for pid=954 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:25:13.222202 kernel: audit: type=1300 audit(1707506703.012:89): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=937 pid=954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:25:13.222217 systemd[1]: Populated /etc with preset unit settings.
Feb 9 19:25:13.222229 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:25:13.222240 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:25:13.222254 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:25:13.222265 kernel: kauditd_printk_skb: 7 callbacks suppressed
Feb 9 19:25:13.222274 kernel: audit: type=1334 audit(1707506712.689:91): prog-id=12 op=LOAD
Feb 9 19:25:13.222283 kernel: audit: type=1334 audit(1707506712.689:92): prog-id=3 op=UNLOAD
Feb 9 19:25:13.222294 kernel: audit: type=1334 audit(1707506712.694:93): prog-id=13 op=LOAD
Feb 9 19:25:13.222308 kernel: audit: type=1334 audit(1707506712.698:94): prog-id=14 op=LOAD
Feb 9 19:25:13.222318 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Feb 9 19:25:13.222327 kernel: audit: type=1334 audit(1707506712.698:95): prog-id=4 op=UNLOAD
Feb 9 19:25:13.222340 kernel: audit: type=1334 audit(1707506712.698:96): prog-id=5 op=UNLOAD
Feb 9 19:25:13.222350 kernel: audit: type=1131 audit(1707506712.699:97): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.222359 systemd[1]: Stopped initrd-switch-root.service.
Feb 9 19:25:13.222369 kernel: audit: type=1334 audit(1707506712.742:98): prog-id=12 op=UNLOAD
Feb 9 19:25:13.222382 kernel: audit: type=1130 audit(1707506712.749:99): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.222392 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Feb 9 19:25:13.222401 kernel: audit: type=1131 audit(1707506712.749:100): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.222414 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 19:25:13.222424 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 19:25:13.222440 systemd[1]: Created slice system-getty.slice.
Feb 9 19:25:13.222452 systemd[1]: Created slice system-modprobe.slice.
Feb 9 19:25:13.222467 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 19:25:13.222480 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 19:25:13.222493 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 19:25:13.222507 systemd[1]: Created slice user.slice.
Feb 9 19:25:13.222519 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:25:13.222529 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 19:25:13.222539 systemd[1]: Set up automount boot.automount.
Feb 9 19:25:13.222551 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 19:25:13.222562 systemd[1]: Stopped target initrd-switch-root.target.
Feb 9 19:25:13.222576 systemd[1]: Stopped target initrd-fs.target.
Feb 9 19:25:13.222588 systemd[1]: Stopped target initrd-root-fs.target.
Feb 9 19:25:13.222599 systemd[1]: Reached target integritysetup.target.
Feb 9 19:25:13.222612 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:25:13.222621 systemd[1]: Reached target remote-fs.target.
Feb 9 19:25:13.222634 systemd[1]: Reached target slices.target.
Feb 9 19:25:13.222644 systemd[1]: Reached target swap.target.
Feb 9 19:25:13.222656 systemd[1]: Reached target torcx.target.
Feb 9 19:25:13.222668 systemd[1]: Reached target veritysetup.target.
Feb 9 19:25:13.222680 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:25:13.222692 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:25:13.222702 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:25:13.222715 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:25:13.222730 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:25:13.222740 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:25:13.222752 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:25:13.222764 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:25:13.222775 systemd[1]: Mounting media.mount...
Feb 9 19:25:13.222788 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:25:13.222799 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 19:25:13.222810 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 19:25:13.222820 systemd[1]: Mounting tmp.mount...
Feb 9 19:25:13.222835 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 19:25:13.222848 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 19:25:13.222858 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:25:13.222870 systemd[1]: Starting modprobe@configfs.service...
Feb 9 19:25:13.222883 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 19:25:13.222893 systemd[1]: Starting modprobe@drm.service...
Feb 9 19:25:13.222906 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 19:25:13.222915 systemd[1]: Starting modprobe@fuse.service...
Feb 9 19:25:13.222928 systemd[1]: Starting modprobe@loop.service...
Feb 9 19:25:13.222942 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 19:25:13.223314 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Feb 9 19:25:13.223334 systemd[1]: Stopped systemd-fsck-root.service.
Feb 9 19:25:13.223346 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Feb 9 19:25:13.223358 systemd[1]: Stopped systemd-fsck-usr.service.
Feb 9 19:25:13.223369 systemd[1]: Stopped systemd-journald.service.
Feb 9 19:25:13.223381 systemd[1]: Starting systemd-journald.service...
Feb 9 19:25:13.223393 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:25:13.223406 systemd[1]: Starting systemd-network-generator.service...
Feb 9 19:25:13.223416 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 19:25:13.223429 kernel: loop: module loaded
Feb 9 19:25:13.223440 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:25:13.223453 systemd[1]: verity-setup.service: Deactivated successfully.
Feb 9 19:25:13.223462 systemd[1]: Stopped verity-setup.service.
Feb 9 19:25:13.223473 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:25:13.223485 kernel: fuse: init (API version 7.34)
Feb 9 19:25:13.223497 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:25:13.223509 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:25:13.223522 systemd[1]: Mounted media.mount.
Feb 9 19:25:13.223535 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:25:13.223545 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:25:13.223558 systemd[1]: Mounted tmp.mount.
Feb 9 19:25:13.223570 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:25:13.223581 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 19:25:13.223600 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:25:13.223614 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:25:13.223628 systemd-journald[1035]: Journal started
Feb 9 19:25:13.223680 systemd-journald[1035]: Runtime Journal (/run/log/journal/47a517b827a444fa9e32501193178ee4) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:25:00.458000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1
Feb 9 19:25:01.451000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:25:01.467000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:25:01.467000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1
Feb 9 19:25:01.491000 audit: BPF prog-id=10 op=LOAD
Feb 9 19:25:01.491000 audit: BPF prog-id=10 op=UNLOAD
Feb 9 19:25:01.496000 audit: BPF prog-id=11 op=LOAD
Feb 9 19:25:01.496000 audit: BPF prog-id=11 op=UNLOAD
Feb 9 19:25:03.012000 audit[954]: AVC avc: denied { associate } for pid=954 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:25:03.012000 audit[954]: SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=937 pid=954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:25:03.012000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:25:03.019000 audit[954]: AVC avc: denied { associate } for pid=954 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 19:25:03.019000 audit[954]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=937 pid=954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:25:03.019000 audit: CWD cwd="/"
Feb 9 19:25:03.019000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:03.019000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:03.019000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:25:12.689000 audit: BPF prog-id=12 op=LOAD
Feb 9 19:25:12.689000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 19:25:12.694000 audit: BPF prog-id=13 op=LOAD
Feb 9 19:25:12.698000 audit: BPF prog-id=14 op=LOAD
Feb 9 19:25:12.698000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 19:25:12.698000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:25:12.699000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:12.742000 audit: BPF prog-id=12 op=UNLOAD
Feb 9 19:25:12.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:12.749000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.079000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.095000 audit: BPF prog-id=15 op=LOAD
Feb 9 19:25:13.095000 audit: BPF prog-id=16 op=LOAD
Feb 9 19:25:13.095000 audit: BPF prog-id=17 op=LOAD
Feb 9 19:25:13.095000 audit: BPF prog-id=13 op=UNLOAD
Feb 9 19:25:13.095000 audit: BPF prog-id=14 op=UNLOAD
Feb 9 19:25:13.160000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.215000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.218000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:25:13.218000 audit[1035]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=6 a1=7ffe331274d0 a2=4000 a3=7ffe3312756c items=0 ppid=1 pid=1035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:25:13.218000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:25:02.992490 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:25:12.687365 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 19:25:02.993453 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:25:12.699705 systemd[1]: systemd-journald.service: Deactivated successfully.
Feb 9 19:25:02.993475 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:25:02.993512 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12"
Feb 9 19:25:02.993524 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=debug msg="skipped missing lower profile" missing profile=oem
Feb 9 19:25:02.993575 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory"
Feb 9 19:25:02.993590 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)=
Feb 9 19:25:02.993797 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack
Feb 9 19:25:02.993846 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json
Feb 9 19:25:02.993865 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json
Feb 9 19:25:02.994287 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10
Feb 9 19:25:02.994326 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl
Feb 9 19:25:02.994346 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2
Feb 9 19:25:02.994363 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store
Feb 9 19:25:13.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:02.994382 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2
Feb 9 19:25:02.994397 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:02Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store
Feb 9 19:25:11.535006 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:11Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:25:11.535245 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:11Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:25:11.535364 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:11Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:25:11.535526 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:11Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl
Feb 9 19:25:11.535572 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:11Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile=
Feb 9 19:25:11.535629 /usr/lib/systemd/system-generators/torcx-generator[954]: time="2024-02-09T19:25:11Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx
Feb 9 19:25:13.232155 systemd[1]: Started systemd-journald.service.
Feb 9 19:25:13.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.234794 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:25:13.234938 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:25:13.236000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.237392 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:25:13.237531 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:25:13.239000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.239985 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:25:13.240155 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:25:13.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.242000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.242888 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:25:13.243019 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:25:13.244000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.244000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.245405 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:25:13.245553 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:25:13.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.247000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.247929 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:25:13.250000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.250748 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:25:13.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.253493 systemd[1]: Reached target network-pre.target.
Feb 9 19:25:13.256779 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:25:13.260319 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:25:13.263245 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:25:13.323156 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:25:13.327130 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:25:13.329598 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 19:25:13.330635 systemd[1]: Starting systemd-random-seed.service...
Feb 9 19:25:13.332756 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 19:25:13.333925 systemd[1]: Starting systemd-sysusers.service...
Feb 9 19:25:13.338628 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:25:13.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.341332 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 19:25:13.343826 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 19:25:13.347400 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:25:13.365737 systemd-journald[1035]: Time spent on flushing to /var/log/journal/47a517b827a444fa9e32501193178ee4 is 16.982ms for 1173 entries.
Feb 9 19:25:13.365737 systemd-journald[1035]: System Journal (/var/log/journal/47a517b827a444fa9e32501193178ee4) is 8.0M, max 2.6G, 2.6G free.
Feb 9 19:25:13.476106 systemd-journald[1035]: Received client request to flush runtime journal.
Feb 9 19:25:13.373000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.408000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:13.371841 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:25:13.478238 udevadm[1078]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Feb 9 19:25:13.374191 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:25:13.406038 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:25:13.409246 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:25:13.457690 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:25:13.477045 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:25:13.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:14.090074 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:25:14.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:14.093829 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:25:14.504717 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:25:14.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:15.700060 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:25:15.702000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:15.702000 audit: BPF prog-id=18 op=LOAD
Feb 9 19:25:15.702000 audit: BPF prog-id=19 op=LOAD
Feb 9 19:25:15.702000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:25:15.703000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:25:15.703843 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:25:15.722751 systemd-udevd[1082]: Using default interface naming scheme 'v252'.
Feb 9 19:25:16.035383 systemd[1]: Started systemd-udevd.service.
Feb 9 19:25:16.037000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:16.039000 audit: BPF prog-id=20 op=LOAD
Feb 9 19:25:16.040889 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:25:16.076233 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 9 19:25:16.139976 kernel: hv_utils: Registering HyperV Utility Driver
Feb 9 19:25:16.140078 kernel: hv_vmbus: registering driver hv_utils
Feb 9 19:25:16.157000 audit: BPF prog-id=21 op=LOAD
Feb 9 19:25:16.157000 audit: BPF prog-id=22 op=LOAD
Feb 9 19:25:16.157000 audit: BPF prog-id=23 op=LOAD
Feb 9 19:25:16.140000 audit[1095]: AVC avc: denied { confidentiality } for pid=1095 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:25:16.158463 systemd[1]: Starting systemd-userdbd.service...
Feb 9 19:25:16.170181 kernel: hv_vmbus: registering driver hv_balloon
Feb 9 19:25:16.178125 kernel: hv_utils: Heartbeat IC version 3.0
Feb 9 19:25:16.178202 kernel: hv_utils: Shutdown IC version 3.2
Feb 9 19:25:16.178233 kernel: hv_utils: TimeSync IC version 4.0
Feb 9 19:25:16.588244 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 9 19:25:16.607283 kernel: hv_vmbus: registering driver hyperv_fb
Feb 9 19:25:16.617712 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 9 19:25:16.617774 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 9 19:25:16.140000 audit[1095]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=563cfc5bbaa0 a1=f884 a2=7f0cb2fa3bc5 a3=5 items=12 ppid=1082 pid=1095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:25:16.140000 audit: CWD cwd="/"
Feb 9 19:25:16.140000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:16.622443 kernel: Console: switching to colour dummy device 80x25
Feb 9 19:25:16.140000 audit: PATH item=1 name=(null) inode=15573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:16.140000 audit: PATH item=2 name=(null) inode=15573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:16.140000 audit: PATH item=3 name=(null) inode=15574 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:16.140000 audit: PATH item=4 name=(null) inode=15573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:16.140000 audit: PATH item=5 name=(null) inode=15575 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:16.140000 audit: PATH item=6 name=(null) inode=15573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:16.140000 audit: PATH item=7 name=(null) inode=15576 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:16.140000 audit: PATH item=8 name=(null) inode=15573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:16.140000 audit: PATH item=9 name=(null) inode=15577 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:16.140000 audit: PATH item=10 name=(null) inode=15573 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:16.140000 audit: PATH item=11 name=(null) inode=15578 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:25:16.140000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 19:25:16.623647 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:25:16.625000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:16.632321 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:25:16.632384 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 19:25:16.856261 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Feb 9 19:25:16.905298 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1096)
Feb 9 19:25:16.941900 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:25:16.945697 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 19:25:16.947000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:16.949812 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 19:25:17.090009 systemd-networkd[1088]: lo: Link UP
Feb 9 19:25:17.090024 systemd-networkd[1088]: lo: Gained carrier
Feb 9 19:25:17.090631 systemd-networkd[1088]: Enumeration completed
Feb 9 19:25:17.090760 systemd[1]: Started systemd-networkd.service.
Feb 9 19:25:17.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:17.095021 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:25:17.126755 systemd-networkd[1088]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:25:17.183247 kernel: mlx5_core b92e:00:02.0 enP47406s1: Link up
Feb 9 19:25:17.223268 kernel: hv_netvsc 000d3ad7-e8f1-000d-3ad7-e8f1000d3ad7 eth0: Data path switched to VF: enP47406s1
Feb 9 19:25:17.224738 systemd-networkd[1088]: enP47406s1: Link UP
Feb 9 19:25:17.224997 systemd-networkd[1088]: eth0: Link UP
Feb 9 19:25:17.225080 systemd-networkd[1088]: eth0: Gained carrier
Feb 9 19:25:17.228474 systemd-networkd[1088]: enP47406s1: Gained carrier
Feb 9 19:25:17.262440 systemd-networkd[1088]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:25:17.401682 lvm[1159]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:25:17.430461 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:25:17.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:17.433180 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:25:17.436691 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:25:17.442803 lvm[1161]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:25:17.461002 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:25:17.462000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:17.463518 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:25:17.465704 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:25:17.465733 systemd[1]: Reached target local-fs.target.
Feb 9 19:25:17.467805 systemd[1]: Reached target machines.target.
Feb 9 19:25:17.470882 systemd[1]: Starting ldconfig.service...
Feb 9 19:25:17.473141 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:25:17.473256 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:25:17.474382 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:25:17.477509 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:25:17.481373 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:25:17.483672 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:25:17.483776 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:25:17.484803 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 19:25:17.512165 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 19:25:17.541829 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1163 (bootctl)
Feb 9 19:25:17.543555 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 19:25:17.557570 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 19:25:17.651654 systemd-tmpfiles[1166]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 19:25:17.869416 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 19:25:17.871000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.127685 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 19:25:18.128353 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 19:25:18.129000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.133609 kernel: kauditd_printk_skb: 69 callbacks suppressed
Feb 9 19:25:18.133672 kernel: audit: type=1130 audit(1707506718.129:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.478476 systemd-networkd[1088]: eth0: Gained IPv6LL
Feb 9 19:25:18.485192 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 19:25:18.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:18.499290 kernel: audit: type=1130 audit(1707506718.487:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.244695 systemd-fsck[1171]: fsck.fat 4.2 (2021-01-31)
Feb 9 19:25:19.244695 systemd-fsck[1171]: /dev/sda1: 789 files, 115339/258078 clusters
Feb 9 19:25:19.244530 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 19:25:19.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.249555 systemd[1]: Mounting boot.mount...
Feb 9 19:25:19.262939 kernel: audit: type=1130 audit(1707506719.246:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.273817 systemd[1]: Mounted boot.mount.
Feb 9 19:25:19.289250 systemd[1]: Finished systemd-boot-update.service.
Feb 9 19:25:19.290000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.303311 kernel: audit: type=1130 audit(1707506719.290:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.515895 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 19:25:19.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.519508 systemd[1]: Starting audit-rules.service...
Feb 9 19:25:19.528272 kernel: audit: type=1130 audit(1707506719.516:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.531363 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 19:25:19.535165 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 19:25:19.537000 audit: BPF prog-id=24 op=LOAD
Feb 9 19:25:19.539739 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:25:19.545337 kernel: audit: type=1334 audit(1707506719.537:158): prog-id=24 op=LOAD
Feb 9 19:25:19.544000 audit: BPF prog-id=25 op=LOAD
Feb 9 19:25:19.546434 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 19:25:19.552472 kernel: audit: type=1334 audit(1707506719.544:159): prog-id=25 op=LOAD
Feb 9 19:25:19.552976 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 19:25:19.581000 audit[1183]: SYSTEM_BOOT pid=1183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.583682 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 19:25:19.597240 kernel: audit: type=1127 audit(1707506719.581:160): pid=1183 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.605169 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 19:25:19.615869 kernel: audit: type=1130 audit(1707506719.598:161): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.617554 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 19:25:19.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.635309 kernel: audit: type=1130 audit(1707506719.616:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.656841 systemd[1]: Started systemd-timesyncd.service.
Feb 9 19:25:19.658000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.659457 systemd[1]: Reached target time-set.target.
Feb 9 19:25:19.687440 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 19:25:19.689000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:25:19.811294 systemd-resolved[1181]: Positive Trust Anchors:
Feb 9 19:25:19.811311 systemd-resolved[1181]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:25:19.811364 systemd-resolved[1181]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:25:19.921000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:25:19.921000 audit[1198]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd2fe32830 a2=420 a3=0 items=0 ppid=1177 pid=1198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:25:19.921000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:25:19.922594 augenrules[1198]: No rules
Feb 9 19:25:19.923013 systemd[1]: Finished audit-rules.service.
Feb 9 19:25:19.929542 systemd-timesyncd[1182]: Contacted time server 85.91.1.164:123 (0.flatcar.pool.ntp.org).
Feb 9 19:25:19.929610 systemd-timesyncd[1182]: Initial clock synchronization to Fri 2024-02-09 19:25:19.932464 UTC.
Feb 9 19:25:20.033997 systemd-resolved[1181]: Using system hostname 'ci-3510.3.2-a-75193cbbcb'.
Feb 9 19:25:20.035864 systemd[1]: Started systemd-resolved.service.
Feb 9 19:25:20.038450 systemd[1]: Reached target network.target.
Feb 9 19:25:20.040627 systemd[1]: Reached target network-online.target.
Feb 9 19:25:20.042994 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:25:26.952255 ldconfig[1162]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 19:25:26.963426 systemd[1]: Finished ldconfig.service.
Feb 9 19:25:26.967032 systemd[1]: Starting systemd-update-done.service...
Feb 9 19:25:26.975463 systemd[1]: Finished systemd-update-done.service.
Feb 9 19:25:26.978035 systemd[1]: Reached target sysinit.target.
Feb 9 19:25:26.980300 systemd[1]: Started motdgen.path.
Feb 9 19:25:26.982279 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 19:25:26.985437 systemd[1]: Started logrotate.timer.
Feb 9 19:25:26.987558 systemd[1]: Started mdadm.timer.
Feb 9 19:25:26.989184 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 19:25:26.991287 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 19:25:26.991320 systemd[1]: Reached target paths.target.
Feb 9 19:25:26.993194 systemd[1]: Reached target timers.target.
Feb 9 19:25:26.995549 systemd[1]: Listening on dbus.socket.
Feb 9 19:25:26.998158 systemd[1]: Starting docker.socket...
Feb 9 19:25:27.018816 systemd[1]: Listening on sshd.socket.
Feb 9 19:25:27.021473 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:25:27.021996 systemd[1]: Listening on docker.socket.
Feb 9 19:25:27.024300 systemd[1]: Reached target sockets.target.
Feb 9 19:25:27.026367 systemd[1]: Reached target basic.target.
Feb 9 19:25:27.028418 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:25:27.028452 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:25:27.029373 systemd[1]: Starting containerd.service...
Feb 9 19:25:27.032182 systemd[1]: Starting dbus.service...
Feb 9 19:25:27.034733 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 19:25:27.037886 systemd[1]: Starting extend-filesystems.service...
Feb 9 19:25:27.040248 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 19:25:27.041507 systemd[1]: Starting motdgen.service...
Feb 9 19:25:27.045016 systemd[1]: Started nvidia.service.
Feb 9 19:25:27.048017 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 19:25:27.051039 systemd[1]: Starting prepare-critools.service...
Feb 9 19:25:27.053945 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 19:25:27.057272 systemd[1]: Starting sshd-keygen.service...
Feb 9 19:25:27.062311 systemd[1]: Starting systemd-logind.service...
Feb 9 19:25:27.064572 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:25:27.064656 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 19:25:27.065157 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 9 19:25:27.066020 systemd[1]: Starting update-engine.service...
Feb 9 19:25:27.069113 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 19:25:27.075404 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 19:25:27.075708 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 19:25:27.154256 jq[1221]: true
Feb 9 19:25:27.154558 jq[1208]: false
Feb 9 19:25:27.155568 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 19:25:27.155797 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 19:25:27.178487 jq[1234]: true
Feb 9 19:25:27.198438 systemd-logind[1218]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 19:25:27.198885 systemd-logind[1218]: New seat seat0.
Feb 9 19:25:27.214785 extend-filesystems[1209]: Found sda
Feb 9 19:25:27.217626 extend-filesystems[1209]: Found sda1
Feb 9 19:25:27.217626 extend-filesystems[1209]: Found sda2
Feb 9 19:25:27.217626 extend-filesystems[1209]: Found sda3
Feb 9 19:25:27.217626 extend-filesystems[1209]: Found usr
Feb 9 19:25:27.217626 extend-filesystems[1209]: Found sda4
Feb 9 19:25:27.217626 extend-filesystems[1209]: Found sda6
Feb 9 19:25:27.217626 extend-filesystems[1209]: Found sda7
Feb 9 19:25:27.217626 extend-filesystems[1209]: Found sda9
Feb 9 19:25:27.217626 extend-filesystems[1209]: Checking size of /dev/sda9
Feb 9 19:25:27.243818 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 19:25:27.243996 systemd[1]: Finished motdgen.service.
Feb 9 19:25:27.249256 tar[1224]: ./
Feb 9 19:25:27.249256 tar[1224]: ./loopback
Feb 9 19:25:27.255291 tar[1226]: crictl
Feb 9 19:25:27.308627 tar[1224]: ./bandwidth
Feb 9 19:25:27.328586 extend-filesystems[1209]: Old size kept for /dev/sda9
Feb 9 19:25:27.337504 extend-filesystems[1209]: Found sr0
Feb 9 19:25:27.329374 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 19:25:27.329533 systemd[1]: Finished extend-filesystems.service.
Feb 9 19:25:27.380134 env[1263]: time="2024-02-09T19:25:27.380086206Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 19:25:27.407493 systemd[1]: nvidia.service: Deactivated successfully.
Feb 9 19:25:27.420767 tar[1224]: ./ptp
Feb 9 19:25:27.424649 dbus-daemon[1207]: [system] SELinux support is enabled
Feb 9 19:25:27.424793 systemd[1]: Started dbus.service.
Feb 9 19:25:27.429884 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 19:25:27.429913 systemd[1]: Reached target system-config.target.
Feb 9 19:25:27.430647 dbus-daemon[1207]: [system] Successfully activated service 'org.freedesktop.systemd1'
Feb 9 19:25:27.432757 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 19:25:27.432784 systemd[1]: Reached target user-config.target.
Feb 9 19:25:27.435337 bash[1249]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 19:25:27.435474 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 19:25:27.438428 systemd[1]: Started systemd-logind.service.
Feb 9 19:25:27.481005 env[1263]: time="2024-02-09T19:25:27.480923849Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 19:25:27.481446 env[1263]: time="2024-02-09T19:25:27.481421009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:25:27.484032 env[1263]: time="2024-02-09T19:25:27.483993516Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:25:27.484104 env[1263]: time="2024-02-09T19:25:27.484033921Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:25:27.484341 env[1263]: time="2024-02-09T19:25:27.484311954Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:25:27.484412 env[1263]: time="2024-02-09T19:25:27.484342658Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 19:25:27.484412 env[1263]: time="2024-02-09T19:25:27.484361860Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 19:25:27.484412 env[1263]: time="2024-02-09T19:25:27.484376162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 19:25:27.484524 env[1263]: time="2024-02-09T19:25:27.484474673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:25:27.484758 env[1263]: time="2024-02-09T19:25:27.484732504Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:25:27.484954 env[1263]: time="2024-02-09T19:25:27.484928127Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:25:27.485013 env[1263]: time="2024-02-09T19:25:27.484956431Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 19:25:27.485054 env[1263]: time="2024-02-09T19:25:27.485023939Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 19:25:27.485054 env[1263]: time="2024-02-09T19:25:27.485040441Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551279452Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551323357Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551342760Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551384565Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551404667Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551424369Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551441972Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551461474Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551480176Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551497678Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551524681Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551540483Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551663098Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 19:25:27.553246 env[1263]: time="2024-02-09T19:25:27.551749808Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552041543Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552076247Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552095850Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552169558Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552187661Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552205463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552240567Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552258669Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552276471Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552295173Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552309175Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552326277Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552472195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552491297Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.553793 env[1263]: time="2024-02-09T19:25:27.552508399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.554321 env[1263]: time="2024-02-09T19:25:27.552537302Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 19:25:27.554321 env[1263]: time="2024-02-09T19:25:27.552560205Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 19:25:27.554321 env[1263]: time="2024-02-09T19:25:27.552574807Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 19:25:27.554321 env[1263]: time="2024-02-09T19:25:27.552599210Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 19:25:27.554321 env[1263]: time="2024-02-09T19:25:27.552640115Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 19:25:27.555084 env[1263]: time="2024-02-09T19:25:27.552882144Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 19:25:27.555084 env[1263]: time="2024-02-09T19:25:27.552957453Z" level=info msg="Connect containerd service"
Feb 9 19:25:27.555084 env[1263]: time="2024-02-09T19:25:27.553001958Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 19:25:27.609022 env[1263]: time="2024-02-09T19:25:27.555869400Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:25:27.609022 env[1263]: time="2024-02-09T19:25:27.556000516Z" level=info msg="Start subscribing containerd event"
Feb 9 19:25:27.609022 env[1263]: time="2024-02-09T19:25:27.556047222Z" level=info msg="Start recovering state"
Feb 9 19:25:27.609022 env[1263]: time="2024-02-09T19:25:27.556118430Z" level=info msg="Start event monitor"
Feb 9 19:25:27.609022 env[1263]: time="2024-02-09T19:25:27.556131332Z" level=info msg="Start snapshots syncer"
Feb 9 19:25:27.609022 env[1263]: time="2024-02-09T19:25:27.556143233Z" level=info msg="Start cni network conf syncer for default"
Feb 9 19:25:27.609022 env[1263]: time="2024-02-09T19:25:27.556152834Z" level=info msg="Start streaming server"
Feb 9 19:25:27.609022 env[1263]: time="2024-02-09T19:25:27.556567684Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 19:25:27.609022 env[1263]: time="2024-02-09T19:25:27.556649193Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 19:25:27.609022 env[1263]: time="2024-02-09T19:25:27.556740004Z" level=info msg="containerd successfully booted in 0.209983s"
Feb 9 19:25:27.609386 tar[1224]: ./vlan
Feb 9 19:25:27.556799 systemd[1]: Started containerd.service.
Feb 9 19:25:27.697364 tar[1224]: ./host-device
Feb 9 19:25:27.783207 tar[1224]: ./tuning
Feb 9 19:25:27.834684 tar[1224]: ./vrf
Feb 9 19:25:27.901588 tar[1224]: ./sbr
Feb 9 19:25:27.977340 tar[1224]: ./tap
Feb 9 19:25:28.069203 tar[1224]: ./dhcp
Feb 9 19:25:28.083654 systemd[1]: Finished prepare-critools.service.
Feb 9 19:25:28.192932 tar[1224]: ./static
Feb 9 19:25:28.225743 tar[1224]: ./firewall
Feb 9 19:25:28.257800 update_engine[1220]: I0209 19:25:28.257382 1220 main.cc:92] Flatcar Update Engine starting
Feb 9 19:25:28.278458 tar[1224]: ./macvlan
Feb 9 19:25:28.323067 tar[1224]: ./dummy
Feb 9 19:25:28.331400 systemd[1]: Started update-engine.service.
Feb 9 19:25:28.336687 systemd[1]: Started locksmithd.service.
Feb 9 19:25:28.339816 update_engine[1220]: I0209 19:25:28.339783 1220 update_check_scheduler.cc:74] Next update check in 7m53s
Feb 9 19:25:28.377334 tar[1224]: ./bridge
Feb 9 19:25:28.425063 tar[1224]: ./ipvlan
Feb 9 19:25:28.469576 tar[1224]: ./portmap
Feb 9 19:25:28.511200 tar[1224]: ./host-local
Feb 9 19:25:28.619005 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 19:25:29.844231 sshd_keygen[1230]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 19:25:29.863842 systemd[1]: Finished sshd-keygen.service.
Feb 9 19:25:29.867898 systemd[1]: Starting issuegen.service...
Feb 9 19:25:29.871211 systemd[1]: Started waagent.service.
Feb 9 19:25:29.876752 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 19:25:29.876884 systemd[1]: Finished issuegen.service.
Feb 9 19:25:29.880502 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 19:25:29.887682 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 19:25:29.891466 systemd[1]: Started getty@tty1.service.
Feb 9 19:25:29.894958 systemd[1]: Started serial-getty@ttyS0.service.
Feb 9 19:25:29.897478 systemd[1]: Reached target getty.target.
Feb 9 19:25:29.899435 systemd[1]: Reached target multi-user.target.
Feb 9 19:25:29.902882 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 19:25:29.910742 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 19:25:29.910915 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 19:25:29.914303 systemd[1]: Startup finished in 383ms (firmware) + 1.875s (loader) + 893ms (kernel) + 12.168s (initrd) + 29.632s (userspace) = 44.953s.
Feb 9 19:25:30.141037 locksmithd[1310]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 19:25:30.364383 login[1330]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:25:30.365755 login[1331]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:25:30.401734 systemd[1]: Created slice user-500.slice.
Feb 9 19:25:30.403109 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 19:25:30.407061 systemd-logind[1218]: New session 1 of user core.
Feb 9 19:25:30.412641 systemd-logind[1218]: New session 2 of user core.
Feb 9 19:25:30.415952 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 19:25:30.417657 systemd[1]: Starting user@500.service...
Feb 9 19:25:30.436411 (systemd)[1334]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:25:30.662502 systemd[1334]: Queued start job for default target default.target.
Feb 9 19:25:30.663127 systemd[1334]: Reached target paths.target.
Feb 9 19:25:30.663158 systemd[1334]: Reached target sockets.target.
Feb 9 19:25:30.663174 systemd[1334]: Reached target timers.target.
Feb 9 19:25:30.663189 systemd[1334]: Reached target basic.target.
Feb 9 19:25:30.663317 systemd[1]: Started user@500.service.
Feb 9 19:25:30.664477 systemd[1]: Started session-1.scope.
Feb 9 19:25:30.665325 systemd[1]: Started session-2.scope.
Feb 9 19:25:30.666312 systemd[1334]: Reached target default.target.
Feb 9 19:25:30.666496 systemd[1334]: Startup finished in 224ms.
Feb 9 19:25:38.029342 waagent[1325]: 2024-02-09T19:25:38.029210Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb 9 19:25:38.033780 waagent[1325]: 2024-02-09T19:25:38.033703Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb 9 19:25:38.036503 waagent[1325]: 2024-02-09T19:25:38.036442Z INFO Daemon Daemon Python: 3.9.16
Feb 9 19:25:38.039279 waagent[1325]: 2024-02-09T19:25:38.039197Z INFO Daemon Daemon Run daemon
Feb 9 19:25:38.042112 waagent[1325]: 2024-02-09T19:25:38.042048Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb 9 19:25:38.054497 waagent[1325]: 2024-02-09T19:25:38.054380Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 19:25:38.062909 waagent[1325]: 2024-02-09T19:25:38.062803Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:25:38.067732 waagent[1325]: 2024-02-09T19:25:38.067669Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:25:38.070353 waagent[1325]: 2024-02-09T19:25:38.070291Z INFO Daemon Daemon Using waagent for provisioning Feb 9 19:25:38.073727 waagent[1325]: 2024-02-09T19:25:38.073668Z INFO Daemon Daemon Activate resource disk Feb 9 19:25:38.076182 waagent[1325]: 2024-02-09T19:25:38.076121Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 9 19:25:38.086138 waagent[1325]: 2024-02-09T19:25:38.086072Z INFO Daemon Daemon Found device: None Feb 9 19:25:38.088825 waagent[1325]: 2024-02-09T19:25:38.088764Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 9 19:25:38.092823 waagent[1325]: 2024-02-09T19:25:38.092763Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 9 19:25:38.098854 waagent[1325]: 2024-02-09T19:25:38.098792Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:25:38.101910 waagent[1325]: 2024-02-09T19:25:38.101847Z INFO Daemon Daemon Running default provisioning handler Feb 9 19:25:38.112417 waagent[1325]: 2024-02-09T19:25:38.112293Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 9 19:25:38.119422 waagent[1325]: 2024-02-09T19:25:38.119318Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 9 19:25:38.127451 waagent[1325]: 2024-02-09T19:25:38.119694Z INFO Daemon Daemon cloud-init is enabled: False Feb 9 19:25:38.127451 waagent[1325]: 2024-02-09T19:25:38.120521Z INFO Daemon Daemon Copying ovf-env.xml Feb 9 19:25:38.160517 waagent[1325]: 2024-02-09T19:25:38.160403Z INFO Daemon Daemon Successfully mounted dvd Feb 9 19:25:38.279495 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 9 19:25:38.301695 waagent[1325]: 2024-02-09T19:25:38.301568Z INFO Daemon Daemon Detect protocol endpoint Feb 9 19:25:38.316281 waagent[1325]: 2024-02-09T19:25:38.302118Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 9 19:25:38.316281 waagent[1325]: 2024-02-09T19:25:38.303269Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 9 19:25:38.316281 waagent[1325]: 2024-02-09T19:25:38.304150Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 9 19:25:38.316281 waagent[1325]: 2024-02-09T19:25:38.305300Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 9 19:25:38.316281 waagent[1325]: 2024-02-09T19:25:38.306086Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 9 19:25:38.475043 waagent[1325]: 2024-02-09T19:25:38.474966Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 9 19:25:38.479290 waagent[1325]: 2024-02-09T19:25:38.479241Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 9 19:25:38.482210 waagent[1325]: 2024-02-09T19:25:38.482151Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 9 19:25:39.701815 waagent[1325]: 2024-02-09T19:25:39.701660Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 9 19:25:39.712414 waagent[1325]: 2024-02-09T19:25:39.712338Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 9 19:25:39.717518 waagent[1325]: 2024-02-09T19:25:39.712723Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 9 19:25:39.796693 waagent[1325]: 2024-02-09T19:25:39.796572Z INFO Daemon Daemon Found private key matching thumbprint AEAFE117DDFFFA04998AC68321E89A1E38363BC8 Feb 9 19:25:39.801583 waagent[1325]: 2024-02-09T19:25:39.801510Z INFO Daemon Daemon Certificate with thumbprint 1DC021472EB998677816A0832CFFFE569B5D22D2 has no matching private key. Feb 9 19:25:39.806326 waagent[1325]: 2024-02-09T19:25:39.806264Z INFO Daemon Daemon Fetch goal state completed Feb 9 19:25:39.822889 waagent[1325]: 2024-02-09T19:25:39.822830Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 90016fd2-931c-4fb5-b294-4aa79d3aba7f New eTag: 17519120369273516621] Feb 9 19:25:39.828262 waagent[1325]: 2024-02-09T19:25:39.828182Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:25:39.840883 waagent[1325]: 2024-02-09T19:25:39.840819Z INFO Daemon Daemon Starting provisioning Feb 9 19:25:39.843390 waagent[1325]: 2024-02-09T19:25:39.843332Z INFO Daemon Daemon Handle ovf-env.xml. Feb 9 19:25:39.845730 waagent[1325]: 2024-02-09T19:25:39.845673Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-75193cbbcb] Feb 9 19:25:39.902079 waagent[1325]: 2024-02-09T19:25:39.901927Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-75193cbbcb] Feb 9 19:25:39.906162 waagent[1325]: 2024-02-09T19:25:39.906085Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 9 19:25:39.909402 waagent[1325]: 2024-02-09T19:25:39.909343Z INFO Daemon Daemon Primary interface is [eth0] Feb 9 19:25:39.923547 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 9 19:25:39.923792 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 9 19:25:39.923871 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 9 19:25:39.924218 systemd[1]: Stopping systemd-networkd.service... Feb 9 19:25:39.928274 systemd-networkd[1088]: eth0: DHCPv6 lease lost Feb 9 19:25:39.929575 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 9 19:25:39.929776 systemd[1]: Stopped systemd-networkd.service. Feb 9 19:25:39.931990 systemd[1]: Starting systemd-networkd.service... 
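The version negotiation above (the fabric prefers 2015-04-05, the agent speaks 2012-11-30) begins with an unauthenticated HTTP query against the fixed wireserver address. A sketch of that first request, using the `?comp=versions` endpoint that also appears later in this log; it only succeeds from inside an Azure VM, since 168.63.129.16 is unreachable elsewhere:

    import urllib.request

    WIRESERVER = "168.63.129.16"  # fixed Azure wireserver address, per the log

    # Ask the wireserver which protocol versions it supports (returns XML).
    with urllib.request.urlopen(f"http://{WIRESERVER}/?comp=versions", timeout=5) as r:
        print(r.read().decode())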
Feb 9 19:25:39.962251 systemd-networkd[1377]: enP47406s1: Link UP Feb 9 19:25:39.962259 systemd-networkd[1377]: enP47406s1: Gained carrier Feb 9 19:25:39.963680 systemd-networkd[1377]: eth0: Link UP Feb 9 19:25:39.963690 systemd-networkd[1377]: eth0: Gained carrier Feb 9 19:25:39.964112 systemd-networkd[1377]: lo: Link UP Feb 9 19:25:39.964121 systemd-networkd[1377]: lo: Gained carrier Feb 9 19:25:39.964461 systemd-networkd[1377]: eth0: Gained IPv6LL Feb 9 19:25:39.965525 systemd-networkd[1377]: Enumeration completed Feb 9 19:25:39.965620 systemd[1]: Started systemd-networkd.service. Feb 9 19:25:39.967582 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 9 19:25:39.971189 waagent[1325]: 2024-02-09T19:25:39.971028Z INFO Daemon Daemon Create user account if not exists Feb 9 19:25:39.978036 waagent[1325]: 2024-02-09T19:25:39.971691Z INFO Daemon Daemon User core already exists, skip useradd Feb 9 19:25:39.978036 waagent[1325]: 2024-02-09T19:25:39.972941Z INFO Daemon Daemon Configure sudoer Feb 9 19:25:39.976869 systemd-networkd[1377]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 9 19:25:39.978654 waagent[1325]: 2024-02-09T19:25:39.978256Z INFO Daemon Daemon Configure sshd Feb 9 19:25:39.980518 waagent[1325]: 2024-02-09T19:25:39.980444Z INFO Daemon Daemon Deploy ssh public key. Feb 9 19:25:40.002894 waagent[1325]: 2024-02-09T19:25:40.002792Z INFO Daemon Daemon Decode custom data Feb 9 19:25:40.007412 waagent[1325]: 2024-02-09T19:25:40.003259Z INFO Daemon Daemon Save custom data Feb 9 19:25:40.018312 systemd-networkd[1377]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 9 19:25:40.020668 systemd[1]: Finished systemd-networkd-wait-online.service. Feb 9 19:26:04.725427 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Feb 9 19:26:10.263828 waagent[1325]: 2024-02-09T19:26:10.263721Z INFO Daemon Daemon Provisioning complete Feb 9 19:26:10.279116 waagent[1325]: 2024-02-09T19:26:10.279033Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 9 19:26:10.282665 waagent[1325]: 2024-02-09T19:26:10.282592Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 9 19:26:10.288433 waagent[1325]: 2024-02-09T19:26:10.288365Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 9 19:26:10.550792 waagent[1386]: 2024-02-09T19:26:10.550629Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 9 19:26:10.551530 waagent[1386]: 2024-02-09T19:26:10.551461Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:26:10.551684 waagent[1386]: 2024-02-09T19:26:10.551619Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:26:10.563045 waagent[1386]: 2024-02-09T19:26:10.562970Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 9 19:26:10.563202 waagent[1386]: 2024-02-09T19:26:10.563147Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 9 19:26:10.623336 waagent[1386]: 2024-02-09T19:26:10.623194Z INFO ExtHandler ExtHandler Found private key matching thumbprint AEAFE117DDFFFA04998AC68321E89A1E38363BC8 Feb 9 19:26:10.623547 waagent[1386]: 2024-02-09T19:26:10.623490Z INFO ExtHandler ExtHandler Certificate with thumbprint 1DC021472EB998677816A0832CFFFE569B5D22D2 has no matching private key. 
Feb 9 19:26:10.623776 waagent[1386]: 2024-02-09T19:26:10.623727Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 9 19:26:10.636993 waagent[1386]: 2024-02-09T19:26:10.636927Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: f6fdb226-cb29-40b3-9a0f-30c2db49fb2a New eTag: 17519120369273516621] Feb 9 19:26:10.637556 waagent[1386]: 2024-02-09T19:26:10.637499Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 9 19:26:10.847146 waagent[1386]: 2024-02-09T19:26:10.846991Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:26:10.876758 waagent[1386]: 2024-02-09T19:26:10.876671Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1386 Feb 9 19:26:10.880259 waagent[1386]: 2024-02-09T19:26:10.880181Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:26:10.881514 waagent[1386]: 2024-02-09T19:26:10.881456Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:26:11.047095 waagent[1386]: 2024-02-09T19:26:11.046999Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:26:11.047584 waagent[1386]: 2024-02-09T19:26:11.047518Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:26:11.055176 waagent[1386]: 2024-02-09T19:26:11.055120Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 19:26:11.055650 waagent[1386]: 2024-02-09T19:26:11.055593Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:26:11.056696 waagent[1386]: 2024-02-09T19:26:11.056630Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 9 19:26:11.057950 waagent[1386]: 2024-02-09T19:26:11.057890Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:26:11.058529 waagent[1386]: 2024-02-09T19:26:11.058473Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:26:11.058686 waagent[1386]: 2024-02-09T19:26:11.058638Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:26:11.059196 waagent[1386]: 2024-02-09T19:26:11.059139Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. 
Feb 9 19:26:11.059505 waagent[1386]: 2024-02-09T19:26:11.059445Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:26:11.059505 waagent[1386]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:26:11.059505 waagent[1386]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:26:11.059505 waagent[1386]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:26:11.059505 waagent[1386]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:26:11.059505 waagent[1386]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:26:11.059505 waagent[1386]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:26:11.062283 waagent[1386]: 2024-02-09T19:26:11.062175Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 9 19:26:11.062527 waagent[1386]: 2024-02-09T19:26:11.062471Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:26:11.063265 waagent[1386]: 2024-02-09T19:26:11.063189Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:26:11.063574 waagent[1386]: 2024-02-09T19:26:11.063522Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:26:11.064022 waagent[1386]: 2024-02-09T19:26:11.063968Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:26:11.064188 waagent[1386]: 2024-02-09T19:26:11.064141Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:26:11.064368 waagent[1386]: 2024-02-09T19:26:11.064320Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:26:11.064650 waagent[1386]: 2024-02-09T19:26:11.064600Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:26:11.065046 waagent[1386]: 2024-02-09T19:26:11.064988Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:26:11.065616 waagent[1386]: 2024-02-09T19:26:11.065563Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:26:11.066741 waagent[1386]: 2024-02-09T19:26:11.066679Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 19:26:11.076012 waagent[1386]: 2024-02-09T19:26:11.075971Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 9 19:26:11.076658 waagent[1386]: 2024-02-09T19:26:11.076619Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:26:11.077495 waagent[1386]: 2024-02-09T19:26:11.077448Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders' Feb 9 19:26:11.092436 waagent[1386]: 2024-02-09T19:26:11.092381Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1377' Feb 9 19:26:11.143627 waagent[1386]: 2024-02-09T19:26:11.143500Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. 
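The Destination/Gateway columns in the routing dump above are little-endian hex IPv4 values, exactly as /proc/net/route stores them. A small decoder for cross-checking the entries:

    import socket
    import struct

    def decode(hex_ip: str) -> str:
        # /proc/net/route stores addresses as host-order (little-endian) u32 hex.
        return socket.inet_ntoa(struct.pack("<I", int(hex_ip, 16)))

    for h in ("0108C80A", "0008C80A", "10813FA8", "FEA9FEA9"):
        print(h, "->", decode(h))
    # 0108C80A -> 10.200.8.1      (default gateway)
    # 0008C80A -> 10.200.8.0      (local /24)
    # 10813FA8 -> 168.63.129.16   (wireserver host route)
    # FEA9FEA9 -> 169.254.169.254 (IMDS host route)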
Feb 9 19:26:11.243245 waagent[1386]: 2024-02-09T19:26:11.243123Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:26:11.243245 waagent[1386]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:26:11.243245 waagent[1386]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:26:11.243245 waagent[1386]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d7:e8:f1 brd ff:ff:ff:ff:ff:ff Feb 9 19:26:11.243245 waagent[1386]: 3: enP47406s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d7:e8:f1 brd ff:ff:ff:ff:ff:ff\ altname enP47406p0s2 Feb 9 19:26:11.243245 waagent[1386]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:26:11.243245 waagent[1386]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:26:11.243245 waagent[1386]: 2: eth0 inet 10.200.8.10/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:26:11.243245 waagent[1386]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:26:11.243245 waagent[1386]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:26:11.243245 waagent[1386]: 2: eth0 inet6 fe80::20d:3aff:fed7:e8f1/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:26:11.429665 waagent[1386]: 2024-02-09T19:26:11.429536Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 9 19:26:12.292785 waagent[1325]: 2024-02-09T19:26:12.292601Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 9 19:26:12.299035 waagent[1325]: 2024-02-09T19:26:12.298959Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 9 19:26:13.252949 update_engine[1220]: I0209 19:26:13.252287 1220 update_attempter.cc:509] Updating boot flags... 
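The interface listing above was produced with `ip -a -o link`; in -o mode each record is one line with wrapped fields joined by a backslash, which is why "qlen 1000\ link/ether ..." appears mid-line. A sketch (Linux only) that pulls name/MAC pairs out of that format:

    import re
    import subprocess

    out = subprocess.run(["ip", "-o", "link"], capture_output=True, text=True).stdout
    for line in out.splitlines():
        # e.g. "2: eth0: <...> mtu 1500 ... link/ether 00:0d:3a:d7:e8:f1 brd ..."
        m = re.match(r"\d+: (\S+?):.*link/\w+ ([0-9a-f:]{17})", line)
        if m:
            print(m.group(1), m.group(2))
    # On this host eth0 and enP47406s1 report the same MAC (00:0d:3a:d7:e8:f1):
    # the synthetic NIC and its accelerated-networking VF share one address.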
Feb 9 19:26:13.342878 waagent[1416]: 2024-02-09T19:26:13.340212Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 9 19:26:13.342878 waagent[1416]: 2024-02-09T19:26:13.341097Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 9 19:26:13.342878 waagent[1416]: 2024-02-09T19:26:13.341306Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 9 19:26:13.354858 waagent[1416]: 2024-02-09T19:26:13.354738Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 9 19:26:13.355371 waagent[1416]: 2024-02-09T19:26:13.355300Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:26:13.355560 waagent[1416]: 2024-02-09T19:26:13.355497Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:26:13.377445 waagent[1416]: 2024-02-09T19:26:13.377374Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 9 19:26:13.396739 waagent[1416]: 2024-02-09T19:26:13.396671Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 9 19:26:13.397675 waagent[1416]: 2024-02-09T19:26:13.397614Z INFO ExtHandler Feb 9 19:26:13.397824 waagent[1416]: 2024-02-09T19:26:13.397771Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 04e7fdfe-d8df-436a-83f5-2d3ca30698e9 eTag: 17519120369273516621 source: Fabric] Feb 9 19:26:13.398761 waagent[1416]: 2024-02-09T19:26:13.398701Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Feb 9 19:26:13.399835 waagent[1416]: 2024-02-09T19:26:13.399775Z INFO ExtHandler Feb 9 19:26:13.399965 waagent[1416]: 2024-02-09T19:26:13.399914Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Feb 9 19:26:13.407070 waagent[1416]: 2024-02-09T19:26:13.407023Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Feb 9 19:26:13.407513 waagent[1416]: 2024-02-09T19:26:13.407465Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 9 19:26:13.430752 waagent[1416]: 2024-02-09T19:26:13.430697Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel. 
Feb 9 19:26:13.495119 waagent[1416]: 2024-02-09T19:26:13.494992Z INFO ExtHandler Downloaded certificate {'thumbprint': 'AEAFE117DDFFFA04998AC68321E89A1E38363BC8', 'hasPrivateKey': True} Feb 9 19:26:13.496056 waagent[1416]: 2024-02-09T19:26:13.495983Z INFO ExtHandler Downloaded certificate {'thumbprint': '1DC021472EB998677816A0832CFFFE569B5D22D2', 'hasPrivateKey': False} Feb 9 19:26:13.497006 waagent[1416]: 2024-02-09T19:26:13.496945Z INFO ExtHandler Fetch goal state completed Feb 9 19:26:13.518211 waagent[1416]: 2024-02-09T19:26:13.518102Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1416 Feb 9 19:26:13.521371 waagent[1416]: 2024-02-09T19:26:13.521307Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 9 19:26:13.522809 waagent[1416]: 2024-02-09T19:26:13.522752Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 9 19:26:13.527504 waagent[1416]: 2024-02-09T19:26:13.527453Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 9 19:26:13.527851 waagent[1416]: 2024-02-09T19:26:13.527796Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 9 19:26:13.535513 waagent[1416]: 2024-02-09T19:26:13.535460Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Feb 9 19:26:13.535946 waagent[1416]: 2024-02-09T19:26:13.535891Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 9 19:26:13.541588 waagent[1416]: 2024-02-09T19:26:13.541495Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Feb 9 19:26:13.546165 waagent[1416]: 2024-02-09T19:26:13.546106Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Feb 9 19:26:13.547490 waagent[1416]: 2024-02-09T19:26:13.547432Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 9 19:26:13.548079 waagent[1416]: 2024-02-09T19:26:13.548025Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:26:13.548255 waagent[1416]: 2024-02-09T19:26:13.548189Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:26:13.548777 waagent[1416]: 2024-02-09T19:26:13.548718Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 9 19:26:13.549050 waagent[1416]: 2024-02-09T19:26:13.548996Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 9 19:26:13.549050 waagent[1416]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 9 19:26:13.549050 waagent[1416]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 9 19:26:13.549050 waagent[1416]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 9 19:26:13.549050 waagent[1416]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:26:13.549050 waagent[1416]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:26:13.549050 waagent[1416]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 9 19:26:13.551190 waagent[1416]: 2024-02-09T19:26:13.551100Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
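The "thumbprint" values in the certificate downloads above are SHA-1 digests of the DER-encoded certificate. Given a PEM file they can be recomputed as below; the path is a guess at where the agent keeps its goal-state certificates, not something this log confirms:

    import hashlib
    import ssl

    PEM_PATH = "/var/lib/waagent/Certificates.pem"  # assumed location

    pem = open(PEM_PATH).read()                    # must hold a single certificate
    der = ssl.PEM_cert_to_DER_cert(pem)            # strip PEM armor -> DER bytes
    print(hashlib.sha1(der).hexdigest().upper())   # e.g. AEAFE117DD...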
Feb 9 19:26:13.552096 waagent[1416]: 2024-02-09T19:26:13.552028Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 9 19:26:13.552490 waagent[1416]: 2024-02-09T19:26:13.552430Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 9 19:26:13.552782 waagent[1416]: 2024-02-09T19:26:13.552724Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 9 19:26:13.555167 waagent[1416]: 2024-02-09T19:26:13.554953Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 9 19:26:13.556595 waagent[1416]: 2024-02-09T19:26:13.556532Z INFO EnvHandler ExtHandler Configure routes Feb 9 19:26:13.557046 waagent[1416]: 2024-02-09T19:26:13.556972Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 9 19:26:13.557287 waagent[1416]: 2024-02-09T19:26:13.557201Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Feb 9 19:26:13.557587 waagent[1416]: 2024-02-09T19:26:13.557536Z INFO EnvHandler ExtHandler Gateway:None Feb 9 19:26:13.560215 waagent[1416]: 2024-02-09T19:26:13.559989Z INFO EnvHandler ExtHandler Routes:None Feb 9 19:26:13.561951 waagent[1416]: 2024-02-09T19:26:13.561895Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 9 19:26:13.562120 waagent[1416]: 2024-02-09T19:26:13.562068Z INFO MonitorHandler ExtHandler Network interfaces: Feb 9 19:26:13.562120 waagent[1416]: Executing ['ip', '-a', '-o', 'link']: Feb 9 19:26:13.562120 waagent[1416]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 9 19:26:13.562120 waagent[1416]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d7:e8:f1 brd ff:ff:ff:ff:ff:ff Feb 9 19:26:13.562120 waagent[1416]: 3: enP47406s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:d7:e8:f1 brd ff:ff:ff:ff:ff:ff\ altname enP47406p0s2 Feb 9 19:26:13.562120 waagent[1416]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 9 19:26:13.562120 waagent[1416]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 9 19:26:13.562120 waagent[1416]: 2: eth0 inet 10.200.8.10/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 9 19:26:13.562120 waagent[1416]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 9 19:26:13.562120 waagent[1416]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 9 19:26:13.562120 waagent[1416]: 2: eth0 inet6 fe80::20d:3aff:fed7:e8f1/64 scope link \ valid_lft forever preferred_lft forever Feb 9 19:26:13.577343 waagent[1416]: 2024-02-09T19:26:13.577261Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod) Feb 9 19:26:13.579871 waagent[1416]: 2024-02-09T19:26:13.579819Z INFO ExtHandler ExtHandler Downloading manifest Feb 9 19:26:13.636950 waagent[1416]: 2024-02-09T19:26:13.636890Z INFO ExtHandler ExtHandler Feb 9 19:26:13.637816 waagent[1416]: 2024-02-09T19:26:13.637757Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: ac433512-92a2-4e44-91cc-9a8a2eb670d1 correlation c3f562af-1643-4b7a-8e4c-f42fcfdc8219 created: 2024-02-09T19:18:51.514076Z] Feb 9 19:26:13.638808 waagent[1416]: 2024-02-09T19:26:13.638745Z INFO 
ExtHandler ExtHandler No extension handlers found, not processing anything. Feb 9 19:26:13.640690 waagent[1416]: 2024-02-09T19:26:13.640635Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] Feb 9 19:26:13.666209 waagent[1416]: 2024-02-09T19:26:13.666158Z INFO ExtHandler ExtHandler Looking for existing remote access users. Feb 9 19:26:13.678552 waagent[1416]: 2024-02-09T19:26:13.678473Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: AB49F781-177C-47D1-B4CA-7B9EB7CD1E56;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1] Feb 9 19:26:13.757327 waagent[1416]: 2024-02-09T19:26:13.757185Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Feb 9 19:26:13.757327 waagent[1416]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:26:13.757327 waagent[1416]: pkts bytes target prot opt in out source destination Feb 9 19:26:13.757327 waagent[1416]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:26:13.757327 waagent[1416]: pkts bytes target prot opt in out source destination Feb 9 19:26:13.757327 waagent[1416]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:26:13.757327 waagent[1416]: pkts bytes target prot opt in out source destination Feb 9 19:26:13.757327 waagent[1416]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:26:13.757327 waagent[1416]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:26:13.757327 waagent[1416]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:26:13.764290 waagent[1416]: 2024-02-09T19:26:13.764177Z INFO EnvHandler ExtHandler Current Firewall rules: Feb 9 19:26:13.764290 waagent[1416]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:26:13.764290 waagent[1416]: pkts bytes target prot opt in out source destination Feb 9 19:26:13.764290 waagent[1416]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:26:13.764290 waagent[1416]: pkts bytes target prot opt in out source destination Feb 9 19:26:13.764290 waagent[1416]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Feb 9 19:26:13.764290 waagent[1416]: pkts bytes target prot opt in out source destination Feb 9 19:26:13.764290 waagent[1416]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Feb 9 19:26:13.764290 waagent[1416]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Feb 9 19:26:13.764290 waagent[1416]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Feb 9 19:26:13.764831 waagent[1416]: 2024-02-09T19:26:13.764779Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Feb 9 19:26:59.838325 systemd[1]: Created slice system-sshd.slice. Feb 9 19:26:59.840183 systemd[1]: Started sshd@0-10.200.8.10:22-10.200.12.6:44918.service. Feb 9 19:27:00.834576 sshd[1501]: Accepted publickey for core from 10.200.12.6 port 44918 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4 Feb 9 19:27:00.836208 sshd[1501]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:00.841347 systemd-logind[1218]: New session 3 of user core. Feb 9 19:27:00.842317 systemd[1]: Started session-3.scope. Feb 9 19:27:01.368571 systemd[1]: Started sshd@1-10.200.8.10:22-10.200.12.6:44934.service. 
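The firewall dump above reduces to three OUTPUT rules: allow TCP/53 to the wireserver, allow root-owned (UID 0) traffic to it, and drop any other new connection to 168.63.129.16, so only the agent can reach that privileged endpoint. A sketch of equivalent rules (requires root; ordering matters, the ACCEPTs must precede the DROP):

    import subprocess

    WS = "168.63.129.16"
    RULES = [
        ["-A", "OUTPUT", "-d", WS, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WS, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WS, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in RULES:
        subprocess.run(["iptables", "-w"] + rule, check=True)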
Feb 9 19:27:01.980358 sshd[1506]: Accepted publickey for core from 10.200.12.6 port 44934 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4 Feb 9 19:27:01.981960 sshd[1506]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:01.986682 systemd[1]: Started session-4.scope. Feb 9 19:27:01.987380 systemd-logind[1218]: New session 4 of user core. Feb 9 19:27:02.417701 sshd[1506]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:02.420707 systemd[1]: sshd@1-10.200.8.10:22-10.200.12.6:44934.service: Deactivated successfully. Feb 9 19:27:02.421545 systemd[1]: session-4.scope: Deactivated successfully. Feb 9 19:27:02.422158 systemd-logind[1218]: Session 4 logged out. Waiting for processes to exit. Feb 9 19:27:02.422893 systemd-logind[1218]: Removed session 4. Feb 9 19:27:02.522989 systemd[1]: Started sshd@2-10.200.8.10:22-10.200.12.6:44950.service. Feb 9 19:27:03.149314 sshd[1512]: Accepted publickey for core from 10.200.12.6 port 44950 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4 Feb 9 19:27:03.150885 sshd[1512]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:03.156308 systemd-logind[1218]: New session 5 of user core. Feb 9 19:27:03.156476 systemd[1]: Started session-5.scope. Feb 9 19:27:03.584649 sshd[1512]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:03.587508 systemd[1]: sshd@2-10.200.8.10:22-10.200.12.6:44950.service: Deactivated successfully. Feb 9 19:27:03.588306 systemd[1]: session-5.scope: Deactivated successfully. Feb 9 19:27:03.588943 systemd-logind[1218]: Session 5 logged out. Waiting for processes to exit. Feb 9 19:27:03.589686 systemd-logind[1218]: Removed session 5. Feb 9 19:27:03.688538 systemd[1]: Started sshd@3-10.200.8.10:22-10.200.12.6:44964.service. Feb 9 19:27:04.307148 sshd[1518]: Accepted publickey for core from 10.200.12.6 port 44964 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4 Feb 9 19:27:04.308697 sshd[1518]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:04.313275 systemd[1]: Started session-6.scope. Feb 9 19:27:04.313944 systemd-logind[1218]: New session 6 of user core. Feb 9 19:27:04.747892 sshd[1518]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:04.751001 systemd[1]: sshd@3-10.200.8.10:22-10.200.12.6:44964.service: Deactivated successfully. Feb 9 19:27:04.751960 systemd[1]: session-6.scope: Deactivated successfully. Feb 9 19:27:04.752753 systemd-logind[1218]: Session 6 logged out. Waiting for processes to exit. Feb 9 19:27:04.753666 systemd-logind[1218]: Removed session 6. Feb 9 19:27:04.853851 systemd[1]: Started sshd@4-10.200.8.10:22-10.200.12.6:44980.service. Feb 9 19:27:05.472498 sshd[1524]: Accepted publickey for core from 10.200.12.6 port 44980 ssh2: RSA SHA256:UCg2Ip0M7lmxHNP/TAuTHg4CQiclKI5wMYrrQY/d4l4 Feb 9 19:27:05.474078 sshd[1524]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Feb 9 19:27:05.479733 systemd[1]: Started session-7.scope. Feb 9 19:27:05.480310 systemd-logind[1218]: New session 7 of user core. Feb 9 19:27:06.087922 sudo[1527]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Feb 9 19:27:06.088271 sudo[1527]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Feb 9 19:27:07.233857 systemd[1]: Reloading. 
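Each SSH connection above follows the same four-step lifecycle: accepted publickey, pam session opened, session closed, scope deactivated. A sketch that tallies logins from a saved journal excerpt (journal.txt is a hypothetical dump of lines like the ones above):

    import re

    text = open("journal.txt").read()  # hypothetical excerpt of this journal
    pattern = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+)")
    for user, addr, port in pattern.findall(text):
        print(f"{user} logged in from {addr}:{port}")
    # -> core logged in from 10.200.12.6:44918 (and likewise 44934, 44950, ...)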
Feb 9 19:27:07.318835 /usr/lib/systemd/system-generators/torcx-generator[1556]: time="2024-02-09T19:27:07Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:27:07.321764 /usr/lib/systemd/system-generators/torcx-generator[1556]: time="2024-02-09T19:27:07Z" level=info msg="torcx already run" Feb 9 19:27:07.407436 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:27:07.407460 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:27:07.425146 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:27:07.507521 systemd[1]: Started kubelet.service. Feb 9 19:27:07.538154 systemd[1]: Starting coreos-metadata.service... Feb 9 19:27:07.578596 kubelet[1618]: E0209 19:27:07.578532 1618 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Feb 9 19:27:07.583526 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Feb 9 19:27:07.583696 systemd[1]: kubelet.service: Failed with result 'exit-code'. Feb 9 19:27:07.593533 coreos-metadata[1626]: Feb 09 19:27:07.593 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 9 19:27:07.596166 coreos-metadata[1626]: Feb 09 19:27:07.596 INFO Fetch successful Feb 9 19:27:07.596268 coreos-metadata[1626]: Feb 09 19:27:07.596 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Feb 9 19:27:07.597461 coreos-metadata[1626]: Feb 09 19:27:07.597 INFO Fetch successful Feb 9 19:27:07.597868 coreos-metadata[1626]: Feb 09 19:27:07.597 INFO Fetching http://168.63.129.16/machine/d1174835-6149-488b-acc1-e0f2478ac783/721275ca%2D379d%2D4a54%2Dac5b%2D89762ea6712b.%5Fci%2D3510.3.2%2Da%2D75193cbbcb?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Feb 9 19:27:07.599459 coreos-metadata[1626]: Feb 09 19:27:07.599 INFO Fetch successful Feb 9 19:27:07.631417 coreos-metadata[1626]: Feb 09 19:27:07.631 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Feb 9 19:27:07.640794 coreos-metadata[1626]: Feb 09 19:27:07.640 INFO Fetch successful Feb 9 19:27:07.649394 systemd[1]: Finished coreos-metadata.service. Feb 9 19:27:12.448055 systemd[1]: Stopped kubelet.service. Feb 9 19:27:12.461929 systemd[1]: Reloading. 
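The metadata fetches above hit two services: the wireserver at 168.63.129.16 over plain HTTP, and the instance metadata service (IMDS) at 169.254.169.254, which additionally requires a "Metadata: true" request header. Reproducing the vmSize fetch from the log (works only on an Azure VM):

    import urllib.request

    URL = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")

    req = urllib.request.Request(URL, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as r:
        print(r.read().decode())  # the VM size as plain text, e.g. "Standard_DS2_v2"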
Feb 9 19:27:12.546625 /usr/lib/systemd/system-generators/torcx-generator[1683]: time="2024-02-09T19:27:12Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 9 19:27:12.547045 /usr/lib/systemd/system-generators/torcx-generator[1683]: time="2024-02-09T19:27:12Z" level=info msg="torcx already run" Feb 9 19:27:12.634253 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 9 19:27:12.634273 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 9 19:27:12.652056 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Feb 9 19:27:12.740478 systemd[1]: Started kubelet.service. Feb 9 19:27:12.784481 kubelet[1745]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:27:12.784481 kubelet[1745]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Feb 9 19:27:12.784481 kubelet[1745]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Feb 9 19:27:12.784918 kubelet[1745]: I0209 19:27:12.784528 1745 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Feb 9 19:27:13.364290 kubelet[1745]: I0209 19:27:13.364248 1745 server.go:467] "Kubelet version" kubeletVersion="v1.28.1" Feb 9 19:27:13.364290 kubelet[1745]: I0209 19:27:13.364275 1745 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Feb 9 19:27:13.364555 kubelet[1745]: I0209 19:27:13.364536 1745 server.go:895] "Client rotation is on, will bootstrap in background" Feb 9 19:27:13.366867 kubelet[1745]: I0209 19:27:13.366838 1745 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Feb 9 19:27:13.372432 kubelet[1745]: I0209 19:27:13.372414 1745 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Feb 9 19:27:13.372767 kubelet[1745]: I0209 19:27:13.372752 1745 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Feb 9 19:27:13.373053 kubelet[1745]: I0209 19:27:13.373027 1745 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Feb 9 19:27:13.373247 kubelet[1745]: I0209 19:27:13.373235 1745 topology_manager.go:138] "Creating topology manager with none policy" Feb 9 19:27:13.373328 kubelet[1745]: I0209 19:27:13.373319 1745 container_manager_linux.go:301] "Creating device plugin manager" Feb 9 19:27:13.373494 kubelet[1745]: I0209 19:27:13.373481 1745 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:27:13.373652 kubelet[1745]: I0209 19:27:13.373643 1745 kubelet.go:393] "Attempting to sync node with API server" Feb 9 19:27:13.373726 kubelet[1745]: I0209 19:27:13.373717 1745 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests" Feb 9 19:27:13.373804 kubelet[1745]: I0209 19:27:13.373794 1745 kubelet.go:309] "Adding apiserver pod source" Feb 9 19:27:13.373882 kubelet[1745]: I0209 19:27:13.373872 1745 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Feb 9 19:27:13.374338 kubelet[1745]: E0209 19:27:13.374318 1745 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:13.374423 kubelet[1745]: E0209 19:27:13.374389 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:13.374740 kubelet[1745]: I0209 19:27:13.374726 1745 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1" Feb 9 19:27:13.375069 kubelet[1745]: W0209 19:27:13.375042 1745 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
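The nodeConfig dump above is printed as JSON, so pieces like the hard-eviction thresholds can be lifted out mechanically. A sketch with the blob shortened to the one key this demonstrates:

    import json

    cfg = json.loads("""{"HardEvictionThresholds":[
     {"Signal":"memory.available","Operator":"LessThan",
      "Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},
     {"Signal":"nodefs.available","Operator":"LessThan",
      "Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},
     {"Signal":"nodefs.inodesFree","Operator":"LessThan",
      "Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},
     {"Signal":"imagefs.available","Operator":"LessThan",
      "Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}]}""")

    for t in cfg["HardEvictionThresholds"]:
        # A threshold carries either an absolute quantity or a percentage, not both.
        value = t["Value"]["Quantity"] or f"{t['Value']['Percentage']:.0%}"
        print(t["Signal"], t["Operator"], value)
    # memory.available LessThan 100Mi / nodefs.available LessThan 10% / ...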
Feb 9 19:27:13.375569 kubelet[1745]: I0209 19:27:13.375548 1745 server.go:1232] "Started kubelet" Feb 9 19:27:13.376873 kubelet[1745]: I0209 19:27:13.376859 1745 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10 Feb 9 19:27:13.377179 kubelet[1745]: I0209 19:27:13.377167 1745 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Feb 9 19:27:13.377291 kubelet[1745]: I0209 19:27:13.377282 1745 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Feb 9 19:27:13.377964 kubelet[1745]: I0209 19:27:13.377949 1745 server.go:462] "Adding debug handlers to kubelet server" Feb 9 19:27:13.379779 kubelet[1745]: E0209 19:27:13.379765 1745 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs" Feb 9 19:27:13.379879 kubelet[1745]: E0209 19:27:13.379871 1745 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Feb 9 19:27:13.381738 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped). Feb 9 19:27:13.381928 kubelet[1745]: I0209 19:27:13.381910 1745 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Feb 9 19:27:13.383871 kubelet[1745]: E0209 19:27:13.383858 1745 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.10\" not found" Feb 9 19:27:13.384538 kubelet[1745]: I0209 19:27:13.384152 1745 volume_manager.go:291] "Starting Kubelet Volume Manager" Feb 9 19:27:13.384605 kubelet[1745]: I0209 19:27:13.384176 1745 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Feb 9 19:27:13.384657 kubelet[1745]: I0209 19:27:13.384639 1745 reconciler_new.go:29] "Reconciler: start to sync state" Feb 9 19:27:13.392094 kubelet[1745]: E0209 19:27:13.392065 1745 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.10\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms" Feb 9 19:27:13.395451 kubelet[1745]: W0209 19:27:13.395417 1745 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:27:13.395576 kubelet[1745]: E0209 19:27:13.395564 1745 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope Feb 9 19:27:13.416355 kubelet[1745]: W0209 19:27:13.416334 1745 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:27:13.416452 kubelet[1745]: E0209 19:27:13.416362 1745 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:27:13.416452 
kubelet[1745]: W0209 19:27:13.416410 1745 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:27:13.416452 kubelet[1745]: E0209 19:27:13.416424 1745 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.10" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope Feb 9 19:27:13.416591 kubelet[1745]: E0209 19:27:13.416459 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b0ee67c59", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 375525977, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 375525977, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:13.417866 kubelet[1745]: E0209 19:27:13.417780 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b0f28a46b", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 379861611, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 379861611, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
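Every rejection in this stretch is one underlying condition: the kubelet is still talking to the API server as system:anonymous because its bootstrap credentials are not yet in place, and anonymous users may not create events or nodes, list services, and so on. The same permissions can be probed from a workstation with cluster access; "no" is expected for each:

    import subprocess

    CHECKS = [("create", "events"), ("create", "nodes"), ("list", "services")]

    for verb, resource in CHECKS:
        r = subprocess.run(
            ["kubectl", "auth", "can-i", verb, resource, "--as=system:anonymous"],
            capture_output=True, text=True)
        print(verb, resource, "->", r.stdout.strip() or r.stderr.strip())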
Feb 9 19:27:13.421081 kubelet[1745]: E0209 19:27:13.421020 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1191a733", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420298035, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420298035, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:13.421331 kubelet[1745]: I0209 19:27:13.421312 1745 cpu_manager.go:214] "Starting CPU manager" policy="none" Feb 9 19:27:13.421331 kubelet[1745]: I0209 19:27:13.421330 1745 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Feb 9 19:27:13.421451 kubelet[1745]: I0209 19:27:13.421346 1745 state_mem.go:36] "Initialized new in-memory state store" Feb 9 19:27:13.421891 kubelet[1745]: E0209 19:27:13.421838 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1191b5a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420301735, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420301735, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:27:13.422598 kubelet[1745]: E0209 19:27:13.422543 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1191c033", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420304435, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420304435, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:13.426302 kubelet[1745]: I0209 19:27:13.426288 1745 policy_none.go:49] "None policy: Start" Feb 9 19:27:13.426903 kubelet[1745]: I0209 19:27:13.426889 1745 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 9 19:27:13.427059 kubelet[1745]: I0209 19:27:13.427043 1745 state_mem.go:35] "Initializing new in-memory state store" Feb 9 19:27:13.433894 systemd[1]: Created slice kubepods.slice. Feb 9 19:27:13.437831 systemd[1]: Created slice kubepods-burstable.slice. Feb 9 19:27:13.440286 systemd[1]: Created slice kubepods-besteffort.slice. 
Feb 9 19:27:13.445824 kubelet[1745]: I0209 19:27:13.445795 1745 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 9 19:27:13.446039 kubelet[1745]: I0209 19:27:13.446030 1745 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 9 19:27:13.448105 kubelet[1745]: E0209 19:27:13.447926 1745 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.10\" not found" Feb 9 19:27:13.448742 kubelet[1745]: E0209 19:27:13.448672 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1332eacd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 447643853, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 447643853, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:27:13.485968 kubelet[1745]: I0209 19:27:13.485941 1745 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.10" Feb 9 19:27:13.487338 kubelet[1745]: E0209 19:27:13.487317 1745 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.10" Feb 9 19:27:13.487566 kubelet[1745]: E0209 19:27:13.487486 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1191a733", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420298035, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 485885259, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events "10.200.8.10.17b2486b1191a733" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:13.488418 kubelet[1745]: E0209 19:27:13.488354 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1191b5a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420301735, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 485911659, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events "10.200.8.10.17b2486b1191b5a7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:27:13.489258 kubelet[1745]: E0209 19:27:13.489176 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1191c033", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420304435, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 485914759, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events "10.200.8.10.17b2486b1191c033" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:13.516903 kubelet[1745]: I0209 19:27:13.516877 1745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Feb 9 19:27:13.518310 kubelet[1745]: I0209 19:27:13.518217 1745 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Feb 9 19:27:13.518434 kubelet[1745]: I0209 19:27:13.518422 1745 status_manager.go:217] "Starting to sync pod status with apiserver" Feb 9 19:27:13.518516 kubelet[1745]: I0209 19:27:13.518504 1745 kubelet.go:2303] "Starting kubelet main sync loop" Feb 9 19:27:13.518624 kubelet[1745]: E0209 19:27:13.518614 1745 kubelet.go:2327] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful" Feb 9 19:27:13.520019 kubelet[1745]: W0209 19:27:13.520001 1745 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:27:13.520151 kubelet[1745]: E0209 19:27:13.520138 1745 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope Feb 9 19:27:13.593461 kubelet[1745]: E0209 19:27:13.593432 1745 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.10\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 9 19:27:13.688314 kubelet[1745]: I0209 19:27:13.688164 1745 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.10" Feb 9 19:27:13.689656 kubelet[1745]: E0209 19:27:13.689634 1745 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource 
\"nodes\" in API group \"\" at the cluster scope" node="10.200.8.10" Feb 9 19:27:13.690991 kubelet[1745]: E0209 19:27:13.690911 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1191a733", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420298035, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 688115875, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events "10.200.8.10.17b2486b1191a733" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:13.691983 kubelet[1745]: E0209 19:27:13.691915 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1191b5a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420301735, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 688123175, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events "10.200.8.10.17b2486b1191b5a7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:27:13.692955 kubelet[1745]: E0209 19:27:13.692895 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1191c033", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420304435, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 688126975, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events "10.200.8.10.17b2486b1191c033" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:13.995924 kubelet[1745]: E0209 19:27:13.995798 1745 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.10\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 9 19:27:14.091210 kubelet[1745]: I0209 19:27:14.091167 1745 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.10" Feb 9 19:27:14.093146 kubelet[1745]: E0209 19:27:14.092782 1745 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.10" Feb 9 19:27:14.093146 kubelet[1745]: E0209 19:27:14.092775 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1191a733", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.10 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420298035, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 14, 91116682, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), 
ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events "10.200.8.10.17b2486b1191a733" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:14.093922 kubelet[1745]: E0209 19:27:14.093847 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1191b5a7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.10 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420301735, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 14, 91129082, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events "10.200.8.10.17b2486b1191b5a7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 9 19:27:14.094919 kubelet[1745]: E0209 19:27:14.094846 1745 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.10.17b2486b1191c033", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.10", UID:"10.200.8.10", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.10 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.10"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 27, 13, 420304435, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 27, 14, 91133882, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.10"}': 'events "10.200.8.10.17b2486b1191c033" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
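Note the "Failed to ensure lease exists, will retry" interval doubling from 400ms to 800ms across these records: the lease controller backs off exponentially until the per-node Lease in kube-node-lease can be created. A small sketch, assuming the same clientset, of reading that heartbeat object once it exists:

```go
package notes

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// showHeartbeat reads the per-node Lease that "Failed to ensure lease exists"
// refers to; by convention the lease name equals the node name, and the
// kubelet renews it (every 10s by default) as its liveness heartbeat.
func showHeartbeat(cs kubernetes.Interface) error {
	lease, err := cs.CoordinationV1().Leases("kube-node-lease").Get(
		context.TODO(), "10.200.8.10", metav1.GetOptions{})
	if err != nil {
		return err
	}
	if lease.Spec.HolderIdentity != nil {
		fmt.Println("holder:", *lease.Spec.HolderIdentity, "renewed:", lease.Spec.RenewTime)
	}
	return nil
}
```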
Feb 9 19:27:14.249010 kubelet[1745]: W0209 19:27:14.248884 1745 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:27:14.249010 kubelet[1745]: E0209 19:27:14.248924 1745 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope Feb 9 19:27:14.366672 kubelet[1745]: I0209 19:27:14.366607 1745 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Feb 9 19:27:14.374874 kubelet[1745]: E0209 19:27:14.374843 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:14.759963 kubelet[1745]: E0209 19:27:14.759911 1745 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.10" not found Feb 9 19:27:14.799033 kubelet[1745]: E0209 19:27:14.798981 1745 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.10\" not found" node="10.200.8.10" Feb 9 19:27:14.894576 kubelet[1745]: I0209 19:27:14.894543 1745 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.10" Feb 9 19:27:14.898808 kubelet[1745]: I0209 19:27:14.898781 1745 kubelet_node_status.go:73] "Successfully registered node" node="10.200.8.10" Feb 9 19:27:14.912235 kubelet[1745]: I0209 19:27:14.912200 1745 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24" Feb 9 19:27:14.912610 env[1263]: time="2024-02-09T19:27:14.912570717Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Feb 9 19:27:14.912987 kubelet[1745]: I0209 19:27:14.912749 1745 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24" Feb 9 19:27:15.256893 sudo[1527]: pam_unix(sudo:session): session closed for user root Feb 9 19:27:15.374814 sshd[1524]: pam_unix(sshd:session): session closed for user core Feb 9 19:27:15.376925 kubelet[1745]: E0209 19:27:15.376900 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:15.377924 kubelet[1745]: I0209 19:27:15.377690 1745 apiserver.go:52] "Watching apiserver" Feb 9 19:27:15.378156 systemd[1]: sshd@4-10.200.8.10:22-10.200.12.6:44980.service: Deactivated successfully. Feb 9 19:27:15.379179 systemd[1]: session-7.scope: Deactivated successfully. Feb 9 19:27:15.379894 systemd-logind[1218]: Session 7 logged out. Waiting for processes to exit. Feb 9 19:27:15.380862 systemd-logind[1218]: Removed session 7. 
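Two things unblock in the records above: "Certificate rotation detected" marks the kubelet swapping its bootstrap credentials for real node certificates, after which registration finally succeeds and the node is assigned podCIDR 192.168.1.0/24, which the kubelet pushes to containerd over CRI (containerd then waits for Cilium to drop a CNI config, per the env line). A sketch of that CRI call, assuming the v1 CRI API and an established gRPC client:

```go
package notes

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// pushPodCIDR mirrors "Updating runtime config through cri with podcidr":
// the kubelet hands the node's CIDR to the runtime over the CRI socket;
// containerd records it and waits for a CNI plugin (Cilium here) to supply
// the actual network config.
func pushPodCIDR(rs runtimeapi.RuntimeServiceClient) error {
	_, err := rs.UpdateRuntimeConfig(context.TODO(), &runtimeapi.UpdateRuntimeConfigRequest{
		RuntimeConfig: &runtimeapi.RuntimeConfig{
			NetworkConfig: &runtimeapi.NetworkConfig{PodCidr: "192.168.1.0/24"},
		},
	})
	return err
}
```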
Feb 9 19:27:15.381738 kubelet[1745]: I0209 19:27:15.381716 1745 topology_manager.go:215] "Topology Admit Handler" podUID="a6c0c2e2-7189-40b8-b764-055e8c766bde" podNamespace="kube-system" podName="cilium-l9mrv" Feb 9 19:27:15.381918 kubelet[1745]: I0209 19:27:15.381896 1745 topology_manager.go:215] "Topology Admit Handler" podUID="c9576cb0-7646-4053-afe2-ae6117450054" podNamespace="kube-system" podName="kube-proxy-78cbd" Feb 9 19:27:15.386513 kubelet[1745]: I0209 19:27:15.386437 1745 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Feb 9 19:27:15.387323 systemd[1]: Created slice kubepods-besteffort-podc9576cb0_7646_4053_afe2_ae6117450054.slice. Feb 9 19:27:15.396073 kubelet[1745]: I0209 19:27:15.396052 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/c9576cb0-7646-4053-afe2-ae6117450054-kube-proxy\") pod \"kube-proxy-78cbd\" (UID: \"c9576cb0-7646-4053-afe2-ae6117450054\") " pod="kube-system/kube-proxy-78cbd" Feb 9 19:27:15.396172 kubelet[1745]: I0209 19:27:15.396090 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-bpf-maps\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396172 kubelet[1745]: I0209 19:27:15.396116 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-hostproc\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396172 kubelet[1745]: I0209 19:27:15.396143 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-cilium-cgroup\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396172 kubelet[1745]: I0209 19:27:15.396172 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjnzq\" (UniqueName: \"kubernetes.io/projected/a6c0c2e2-7189-40b8-b764-055e8c766bde-kube-api-access-cjnzq\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396354 kubelet[1745]: I0209 19:27:15.396200 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-cilium-run\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396354 kubelet[1745]: I0209 19:27:15.396241 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-host-proc-sys-net\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396354 kubelet[1745]: I0209 19:27:15.396271 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: 
\"kubernetes.io/projected/a6c0c2e2-7189-40b8-b764-055e8c766bde-hubble-tls\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396354 kubelet[1745]: I0209 19:27:15.396314 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-lib-modules\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396508 kubelet[1745]: I0209 19:27:15.396356 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6c0c2e2-7189-40b8-b764-055e8c766bde-cilium-config-path\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396508 kubelet[1745]: I0209 19:27:15.396388 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gr56t\" (UniqueName: \"kubernetes.io/projected/c9576cb0-7646-4053-afe2-ae6117450054-kube-api-access-gr56t\") pod \"kube-proxy-78cbd\" (UID: \"c9576cb0-7646-4053-afe2-ae6117450054\") " pod="kube-system/kube-proxy-78cbd" Feb 9 19:27:15.396508 kubelet[1745]: I0209 19:27:15.396416 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6c0c2e2-7189-40b8-b764-055e8c766bde-clustermesh-secrets\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396508 kubelet[1745]: I0209 19:27:15.396444 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-host-proc-sys-kernel\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396508 kubelet[1745]: I0209 19:27:15.396472 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c9576cb0-7646-4053-afe2-ae6117450054-xtables-lock\") pod \"kube-proxy-78cbd\" (UID: \"c9576cb0-7646-4053-afe2-ae6117450054\") " pod="kube-system/kube-proxy-78cbd" Feb 9 19:27:15.396687 kubelet[1745]: I0209 19:27:15.396502 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c9576cb0-7646-4053-afe2-ae6117450054-lib-modules\") pod \"kube-proxy-78cbd\" (UID: \"c9576cb0-7646-4053-afe2-ae6117450054\") " pod="kube-system/kube-proxy-78cbd" Feb 9 19:27:15.396687 kubelet[1745]: I0209 19:27:15.396533 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-cni-path\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396687 kubelet[1745]: I0209 19:27:15.396561 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-etc-cni-netd\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " 
pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.396687 kubelet[1745]: I0209 19:27:15.396590 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-xtables-lock\") pod \"cilium-l9mrv\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") " pod="kube-system/cilium-l9mrv" Feb 9 19:27:15.399063 systemd[1]: Created slice kubepods-burstable-poda6c0c2e2_7189_40b8_b764_055e8c766bde.slice. Feb 9 19:27:15.698253 env[1263]: time="2024-02-09T19:27:15.698174564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-78cbd,Uid:c9576cb0-7646-4053-afe2-ae6117450054,Namespace:kube-system,Attempt:0,}" Feb 9 19:27:15.704858 env[1263]: time="2024-02-09T19:27:15.704800315Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l9mrv,Uid:a6c0c2e2-7189-40b8-b764-055e8c766bde,Namespace:kube-system,Attempt:0,}" Feb 9 19:27:16.378068 kubelet[1745]: E0209 19:27:16.378032 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:16.506834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount103196313.mount: Deactivated successfully. Feb 9 19:27:16.527844 env[1263]: time="2024-02-09T19:27:16.527738257Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:16.531885 env[1263]: time="2024-02-09T19:27:16.531851088Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:16.542695 env[1263]: time="2024-02-09T19:27:16.542662970Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:16.546256 env[1263]: time="2024-02-09T19:27:16.546211296Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:16.548587 env[1263]: time="2024-02-09T19:27:16.548555914Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:16.552084 env[1263]: time="2024-02-09T19:27:16.552055240Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:16.554657 env[1263]: time="2024-02-09T19:27:16.554626160Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:16.560545 env[1263]: time="2024-02-09T19:27:16.560511204Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:16.617154 env[1263]: time="2024-02-09T19:27:16.617096330Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:27:16.617323 env[1263]: time="2024-02-09T19:27:16.617140631Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:27:16.617323 env[1263]: time="2024-02-09T19:27:16.617153931Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:27:16.617475 env[1263]: time="2024-02-09T19:27:16.617351232Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/0f16796ebc05e61ac027785824f04961481eae26874ca2a4b3ff2fe688ce3631 pid=1791 runtime=io.containerd.runc.v2 Feb 9 19:27:16.625916 env[1263]: time="2024-02-09T19:27:16.625858496Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:27:16.626069 env[1263]: time="2024-02-09T19:27:16.626041798Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:27:16.626194 env[1263]: time="2024-02-09T19:27:16.626162399Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:27:16.626572 env[1263]: time="2024-02-09T19:27:16.626529101Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995 pid=1807 runtime=io.containerd.runc.v2 Feb 9 19:27:16.641706 systemd[1]: Started cri-containerd-0f16796ebc05e61ac027785824f04961481eae26874ca2a4b3ff2fe688ce3631.scope. Feb 9 19:27:16.656116 systemd[1]: Started cri-containerd-2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995.scope. Feb 9 19:27:16.685694 env[1263]: time="2024-02-09T19:27:16.685641146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-78cbd,Uid:c9576cb0-7646-4053-afe2-ae6117450054,Namespace:kube-system,Attempt:0,} returns sandbox id \"0f16796ebc05e61ac027785824f04961481eae26874ca2a4b3ff2fe688ce3631\"" Feb 9 19:27:16.688243 env[1263]: time="2024-02-09T19:27:16.688193666Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\"" Feb 9 19:27:16.691376 env[1263]: time="2024-02-09T19:27:16.691343589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-l9mrv,Uid:a6c0c2e2-7189-40b8-b764-055e8c766bde,Namespace:kube-system,Attempt:0,} returns sandbox id \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\"" Feb 9 19:27:17.378910 kubelet[1745]: E0209 19:27:17.378876 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:17.703833 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount974775120.mount: Deactivated successfully. 
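The containerd lines above are the runtime half of pod startup: one shim per sandbox ("starting signal loop"), the pause:3.6 image backing each sandbox, then RunPodSandbox returning an id and the kube-proxy image pull beginning. The CRI call sequence the kubelet drives here, sketched with the sandbox and container configs elided:

```go
package notes

import (
	"context"

	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

// runPod sketches the CRI sequence behind the lines above: RunPodSandbox
// (backed by the pause image and one containerd shim per sandbox), then
// PullImage / CreateContainer / StartContainer for the workload container.
func runPod(rs runtimeapi.RuntimeServiceClient, is runtimeapi.ImageServiceClient,
	sandboxCfg *runtimeapi.PodSandboxConfig, ctrCfg *runtimeapi.ContainerConfig) error {

	sb, err := rs.RunPodSandbox(context.TODO(),
		&runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		return err
	}
	if _, err := is.PullImage(context.TODO(), &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.28.6"},
	}); err != nil {
		return err
	}
	ctr, err := rs.CreateContainer(context.TODO(), &runtimeapi.CreateContainerRequest{
		PodSandboxId:  sb.PodSandboxId,
		Config:        ctrCfg,
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		return err
	}
	_, err = rs.StartContainer(context.TODO(),
		&runtimeapi.StartContainerRequest{ContainerId: ctr.ContainerId})
	return err
}
```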
Feb 9 19:27:18.267105 env[1263]: time="2024-02-09T19:27:18.267052938Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:18.272514 env[1263]: time="2024-02-09T19:27:18.272471877Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:18.276710 env[1263]: time="2024-02-09T19:27:18.276674607Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:18.279692 env[1263]: time="2024-02-09T19:27:18.279662629Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:18.280055 env[1263]: time="2024-02-09T19:27:18.280026932Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\"" Feb 9 19:27:18.281466 env[1263]: time="2024-02-09T19:27:18.281437542Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\"" Feb 9 19:27:18.282299 env[1263]: time="2024-02-09T19:27:18.282269048Z" level=info msg="CreateContainer within sandbox \"0f16796ebc05e61ac027785824f04961481eae26874ca2a4b3ff2fe688ce3631\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Feb 9 19:27:18.315737 env[1263]: time="2024-02-09T19:27:18.315698090Z" level=info msg="CreateContainer within sandbox \"0f16796ebc05e61ac027785824f04961481eae26874ca2a4b3ff2fe688ce3631\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"efa9737295dde3163f5a883ce38fb5a442ea85e38b76f3ea565c56c218909e26\"" Feb 9 19:27:18.316395 env[1263]: time="2024-02-09T19:27:18.316354095Z" level=info msg="StartContainer for \"efa9737295dde3163f5a883ce38fb5a442ea85e38b76f3ea565c56c218909e26\"" Feb 9 19:27:18.336615 systemd[1]: Started cri-containerd-efa9737295dde3163f5a883ce38fb5a442ea85e38b76f3ea565c56c218909e26.scope. 
Feb 9 19:27:18.369955 env[1263]: time="2024-02-09T19:27:18.369916483Z" level=info msg="StartContainer for \"efa9737295dde3163f5a883ce38fb5a442ea85e38b76f3ea565c56c218909e26\" returns successfully" Feb 9 19:27:18.379652 kubelet[1745]: E0209 19:27:18.379624 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:19.379875 kubelet[1745]: E0209 19:27:19.379795 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:20.380493 kubelet[1745]: E0209 19:27:20.380388 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:21.380830 kubelet[1745]: E0209 19:27:21.380798 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:22.381751 kubelet[1745]: E0209 19:27:22.381683 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:23.382661 kubelet[1745]: E0209 19:27:23.382620 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:23.800590 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3490582488.mount: Deactivated successfully. Feb 9 19:27:24.382800 kubelet[1745]: E0209 19:27:24.382760 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:25.382958 kubelet[1745]: E0209 19:27:25.382890 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:26.383484 kubelet[1745]: E0209 19:27:26.383435 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:26.465452 env[1263]: time="2024-02-09T19:27:26.465402539Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:26.471825 env[1263]: time="2024-02-09T19:27:26.471782579Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:26.476333 env[1263]: time="2024-02-09T19:27:26.476287007Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:26.476928 env[1263]: time="2024-02-09T19:27:26.476888811Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\"" Feb 9 19:27:26.479387 env[1263]: time="2024-02-09T19:27:26.479357326Z" level=info msg="CreateContainer within sandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:27:26.512386 env[1263]: time="2024-02-09T19:27:26.512333033Z" level=info msg="CreateContainer within sandbox 
\"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7\"" Feb 9 19:27:26.512926 env[1263]: time="2024-02-09T19:27:26.512892337Z" level=info msg="StartContainer for \"433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7\"" Feb 9 19:27:26.537996 systemd[1]: Started cri-containerd-433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7.scope. Feb 9 19:27:26.570079 env[1263]: time="2024-02-09T19:27:26.570014695Z" level=info msg="StartContainer for \"433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7\" returns successfully" Feb 9 19:27:26.574770 systemd[1]: cri-containerd-433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7.scope: Deactivated successfully. Feb 9 19:27:27.384004 kubelet[1745]: E0209 19:27:27.383946 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:27.496565 systemd[1]: run-containerd-runc-k8s.io-433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7-runc.XbodRW.mount: Deactivated successfully. Feb 9 19:27:27.496711 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7-rootfs.mount: Deactivated successfully. Feb 9 19:27:27.563334 kubelet[1745]: I0209 19:27:27.563297 1745 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-78cbd" podStartSLOduration=11.970265995 podCreationTimestamp="2024-02-09 19:27:14 +0000 UTC" firstStartedPulling="2024-02-09 19:27:16.687513961 +0000 UTC m=+3.942510267" lastFinishedPulling="2024-02-09 19:27:18.280512335 +0000 UTC m=+5.535508541" observedRunningTime="2024-02-09 19:27:18.539704414 +0000 UTC m=+5.794700720" watchObservedRunningTime="2024-02-09 19:27:27.563264269 +0000 UTC m=+14.818260475" Feb 9 19:27:28.384110 kubelet[1745]: E0209 19:27:28.384068 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:29.385064 kubelet[1745]: E0209 19:27:29.385012 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:30.385512 kubelet[1745]: E0209 19:27:30.385452 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:30.863204 env[1263]: time="2024-02-09T19:27:30.863140574Z" level=info msg="shim disconnected" id=433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7 Feb 9 19:27:30.863204 env[1263]: time="2024-02-09T19:27:30.863199974Z" level=warning msg="cleaning up after shim disconnected" id=433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7 namespace=k8s.io Feb 9 19:27:30.863722 env[1263]: time="2024-02-09T19:27:30.863212274Z" level=info msg="cleaning up dead shim" Feb 9 19:27:30.871644 env[1263]: time="2024-02-09T19:27:30.871598923Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:27:30Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2074 runtime=io.containerd.runc.v2\n" Feb 9 19:27:31.385713 kubelet[1745]: E0209 19:27:31.385655 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:31.565613 env[1263]: time="2024-02-09T19:27:31.565571548Z" level=info msg="CreateContainer within sandbox 
\"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:27:31.596681 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1034729502.mount: Deactivated successfully. Feb 9 19:27:31.608072 env[1263]: time="2024-02-09T19:27:31.608031594Z" level=info msg="CreateContainer within sandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601\"" Feb 9 19:27:31.610522 env[1263]: time="2024-02-09T19:27:31.610488808Z" level=info msg="StartContainer for \"2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601\"" Feb 9 19:27:31.633271 systemd[1]: Started cri-containerd-2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601.scope. Feb 9 19:27:31.663607 env[1263]: time="2024-02-09T19:27:31.662180707Z" level=info msg="StartContainer for \"2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601\" returns successfully" Feb 9 19:27:31.671144 systemd[1]: systemd-sysctl.service: Deactivated successfully. Feb 9 19:27:31.671451 systemd[1]: Stopped systemd-sysctl.service. Feb 9 19:27:31.671633 systemd[1]: Stopping systemd-sysctl.service... Feb 9 19:27:31.674032 systemd[1]: Starting systemd-sysctl.service... Feb 9 19:27:31.680218 systemd[1]: cri-containerd-2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601.scope: Deactivated successfully. Feb 9 19:27:31.685381 systemd[1]: Finished systemd-sysctl.service. Feb 9 19:27:31.709121 env[1263]: time="2024-02-09T19:27:31.709071178Z" level=info msg="shim disconnected" id=2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601 Feb 9 19:27:31.709121 env[1263]: time="2024-02-09T19:27:31.709121078Z" level=warning msg="cleaning up after shim disconnected" id=2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601 namespace=k8s.io Feb 9 19:27:31.709447 env[1263]: time="2024-02-09T19:27:31.709131878Z" level=info msg="cleaning up dead shim" Feb 9 19:27:31.716477 env[1263]: time="2024-02-09T19:27:31.716441120Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:27:31Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2139 runtime=io.containerd.runc.v2\n" Feb 9 19:27:32.385937 kubelet[1745]: E0209 19:27:32.385876 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:32.569485 env[1263]: time="2024-02-09T19:27:32.569432802Z" level=info msg="CreateContainer within sandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:27:32.593581 systemd[1]: run-containerd-runc-k8s.io-2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601-runc.AH4BKt.mount: Deactivated successfully. Feb 9 19:27:32.593713 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601-rootfs.mount: Deactivated successfully. 
Feb 9 19:27:32.605435 env[1263]: time="2024-02-09T19:27:32.605382506Z" level=info msg="CreateContainer within sandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd\"" Feb 9 19:27:32.606090 env[1263]: time="2024-02-09T19:27:32.606059210Z" level=info msg="StartContainer for \"cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd\"" Feb 9 19:27:32.633824 systemd[1]: Started cri-containerd-cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd.scope. Feb 9 19:27:32.661776 systemd[1]: cri-containerd-cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd.scope: Deactivated successfully. Feb 9 19:27:32.665757 env[1263]: time="2024-02-09T19:27:32.665709050Z" level=info msg="StartContainer for \"cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd\" returns successfully" Feb 9 19:27:32.683663 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd-rootfs.mount: Deactivated successfully. Feb 9 19:27:32.698601 env[1263]: time="2024-02-09T19:27:32.698550137Z" level=info msg="shim disconnected" id=cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd Feb 9 19:27:32.698601 env[1263]: time="2024-02-09T19:27:32.698600537Z" level=warning msg="cleaning up after shim disconnected" id=cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd namespace=k8s.io Feb 9 19:27:32.698906 env[1263]: time="2024-02-09T19:27:32.698611937Z" level=info msg="cleaning up dead shim" Feb 9 19:27:32.706506 env[1263]: time="2024-02-09T19:27:32.706465182Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:27:32Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2198 runtime=io.containerd.runc.v2\n" Feb 9 19:27:33.374133 kubelet[1745]: E0209 19:27:33.374090 1745 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:33.386723 kubelet[1745]: E0209 19:27:33.386698 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:33.573692 env[1263]: time="2024-02-09T19:27:33.573646669Z" level=info msg="CreateContainer within sandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:27:33.599257 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2297778590.mount: Deactivated successfully. Feb 9 19:27:33.610395 env[1263]: time="2024-02-09T19:27:33.610348475Z" level=info msg="CreateContainer within sandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9\"" Feb 9 19:27:33.611438 env[1263]: time="2024-02-09T19:27:33.611388980Z" level=info msg="StartContainer for \"bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9\"" Feb 9 19:27:33.636408 systemd[1]: Started cri-containerd-bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9.scope. Feb 9 19:27:33.660834 systemd[1]: cri-containerd-bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9.scope: Deactivated successfully. 
Feb 9 19:27:33.662531 env[1263]: time="2024-02-09T19:27:33.662460867Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda6c0c2e2_7189_40b8_b764_055e8c766bde.slice/cri-containerd-bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9.scope/memory.events\": no such file or directory" Feb 9 19:27:33.667337 env[1263]: time="2024-02-09T19:27:33.667301494Z" level=info msg="StartContainer for \"bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9\" returns successfully" Feb 9 19:27:33.693920 env[1263]: time="2024-02-09T19:27:33.693874143Z" level=info msg="shim disconnected" id=bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9 Feb 9 19:27:33.693920 env[1263]: time="2024-02-09T19:27:33.693916943Z" level=warning msg="cleaning up after shim disconnected" id=bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9 namespace=k8s.io Feb 9 19:27:33.694297 env[1263]: time="2024-02-09T19:27:33.693927943Z" level=info msg="cleaning up dead shim" Feb 9 19:27:33.701995 env[1263]: time="2024-02-09T19:27:33.701944088Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:27:33Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2253 runtime=io.containerd.runc.v2\n" Feb 9 19:27:34.387219 kubelet[1745]: E0209 19:27:34.387161 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:34.579318 env[1263]: time="2024-02-09T19:27:34.579173758Z" level=info msg="CreateContainer within sandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:27:34.594388 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9-rootfs.mount: Deactivated successfully. Feb 9 19:27:34.611057 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3405820814.mount: Deactivated successfully. Feb 9 19:27:34.623264 env[1263]: time="2024-02-09T19:27:34.623207601Z" level=info msg="CreateContainer within sandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\"" Feb 9 19:27:34.623827 env[1263]: time="2024-02-09T19:27:34.623794904Z" level=info msg="StartContainer for \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\"" Feb 9 19:27:34.645921 systemd[1]: Started cri-containerd-5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c.scope. 
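The repeating create/start/"scope: Deactivated" triples from 19:27:26 onward are Cilium's init containers (mount-cgroup, apply-sysctl-overwrites, mount-bpf-fs, clean-cilium-state) running to completion one at a time before the long-lived cilium-agent starts; the inotify warning above is benign, the cgroup scope having exited before the watch attached. A trimmed sketch of that pod shape, taking only the container names and the pulled image from the log (everything else omitted; not the real Cilium manifest):

```go
package notes

import corev1 "k8s.io/api/core/v1"

// The kubelet runs each init container to completion, in order, before
// starting the long-lived agent -- exactly the sequencing the log shows.
var ciliumPod = corev1.PodSpec{
	InitContainers: []corev1.Container{
		{Name: "mount-cgroup", Image: "quay.io/cilium/cilium:v1.12.5"},
		{Name: "apply-sysctl-overwrites", Image: "quay.io/cilium/cilium:v1.12.5"},
		{Name: "mount-bpf-fs", Image: "quay.io/cilium/cilium:v1.12.5"},
		{Name: "clean-cilium-state", Image: "quay.io/cilium/cilium:v1.12.5"},
	},
	Containers: []corev1.Container{
		{Name: "cilium-agent", Image: "quay.io/cilium/cilium:v1.12.5"},
	},
}
```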
Feb 9 19:27:34.690403 env[1263]: time="2024-02-09T19:27:34.690343772Z" level=info msg="StartContainer for \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\" returns successfully" Feb 9 19:27:34.813848 kubelet[1745]: I0209 19:27:34.813819 1745 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:27:35.276257 kernel: Initializing XFRM netlink socket Feb 9 19:27:35.388346 kubelet[1745]: E0209 19:27:35.388303 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:36.388602 kubelet[1745]: E0209 19:27:36.388545 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:36.919884 systemd-networkd[1377]: cilium_host: Link UP Feb 9 19:27:36.920004 systemd-networkd[1377]: cilium_net: Link UP Feb 9 19:27:36.923615 systemd-networkd[1377]: cilium_net: Gained carrier Feb 9 19:27:36.926952 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:27:36.927027 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:27:36.927145 systemd-networkd[1377]: cilium_host: Gained carrier Feb 9 19:27:36.927383 systemd-networkd[1377]: cilium_net: Gained IPv6LL Feb 9 19:27:36.934378 systemd-networkd[1377]: cilium_host: Gained IPv6LL Feb 9 19:27:37.168664 systemd-networkd[1377]: cilium_vxlan: Link UP Feb 9 19:27:37.168673 systemd-networkd[1377]: cilium_vxlan: Gained carrier Feb 9 19:27:37.335124 kubelet[1745]: I0209 19:27:37.335087 1745 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-l9mrv" podStartSLOduration=13.550093738 podCreationTimestamp="2024-02-09 19:27:14 +0000 UTC" firstStartedPulling="2024-02-09 19:27:16.692271196 +0000 UTC m=+3.947267402" lastFinishedPulling="2024-02-09 19:27:26.477195013 +0000 UTC m=+13.732191319" observedRunningTime="2024-02-09 19:27:35.595898623 +0000 UTC m=+22.850894929" watchObservedRunningTime="2024-02-09 19:27:37.335017655 +0000 UTC m=+24.590013961" Feb 9 19:27:37.335441 kubelet[1745]: I0209 19:27:37.335421 1745 topology_manager.go:215] "Topology Admit Handler" podUID="61438b07-09a9-4657-8693-ca03588174ba" podNamespace="default" podName="nginx-deployment-6d5f899847-wfkdk" Feb 9 19:27:37.340704 systemd[1]: Created slice kubepods-besteffort-pod61438b07_09a9_4657_8693_ca03588174ba.slice. 
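With the agent up, the kubelet flips the node Ready ("Fast updating node status as it just became ready"), and the kernel and networkd lines that follow are Cilium assembling its datapath: the cilium_host/cilium_net pair, the cilium_vxlan overlay device, and per-pod lxc* interfaces. A sketch, assuming the same clientset as earlier, of reading the condition back:

```go
package notes

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// printReady reads back the condition behind "Fast updating node status as
// it just became ready": NodeReady flips to True once the container runtime
// and the CNI (the Cilium agent above) report healthy.
func printReady(cs kubernetes.Interface) error {
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "10.200.8.10", metav1.GetOptions{})
	if err != nil {
		return err
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			fmt.Println("Ready:", c.Status, "since", c.LastTransitionTime)
		}
	}
	return nil
}
```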
Feb 9 19:27:37.389408 kubelet[1745]: E0209 19:27:37.389376 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:37.412261 kernel: NET: Registered PF_ALG protocol family Feb 9 19:27:37.448012 kubelet[1745]: I0209 19:27:37.447915 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbq2w\" (UniqueName: \"kubernetes.io/projected/61438b07-09a9-4657-8693-ca03588174ba-kube-api-access-lbq2w\") pod \"nginx-deployment-6d5f899847-wfkdk\" (UID: \"61438b07-09a9-4657-8693-ca03588174ba\") " pod="default/nginx-deployment-6d5f899847-wfkdk" Feb 9 19:27:37.644634 env[1263]: time="2024-02-09T19:27:37.644154588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-wfkdk,Uid:61438b07-09a9-4657-8693-ca03588174ba,Namespace:default,Attempt:0,}" Feb 9 19:27:38.136662 systemd-networkd[1377]: lxc_health: Link UP Feb 9 19:27:38.163032 systemd-networkd[1377]: lxc_health: Gained carrier Feb 9 19:27:38.163395 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:27:38.390200 kubelet[1745]: E0209 19:27:38.390069 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:38.708603 systemd-networkd[1377]: lxca038b31d16c8: Link UP Feb 9 19:27:38.715260 kernel: eth0: renamed from tmp17dae Feb 9 19:27:38.722603 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxca038b31d16c8: link becomes ready Feb 9 19:27:38.722417 systemd-networkd[1377]: lxca038b31d16c8: Gained carrier Feb 9 19:27:39.089513 systemd-networkd[1377]: cilium_vxlan: Gained IPv6LL Feb 9 19:27:39.390866 kubelet[1745]: E0209 19:27:39.390731 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:39.854495 systemd-networkd[1377]: lxca038b31d16c8: Gained IPv6LL Feb 9 19:27:40.174436 systemd-networkd[1377]: lxc_health: Gained IPv6LL Feb 9 19:27:40.391445 kubelet[1745]: E0209 19:27:40.391403 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:41.392754 kubelet[1745]: E0209 19:27:41.392707 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:42.205574 env[1263]: time="2024-02-09T19:27:42.205474304Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:27:42.206053 env[1263]: time="2024-02-09T19:27:42.205995806Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:27:42.206053 env[1263]: time="2024-02-09T19:27:42.206016206Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:27:42.206381 env[1263]: time="2024-02-09T19:27:42.206336508Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/17dae6b81b2b4e373771930a83c8ef1fca365569e65ae1235d19259ce979f980 pid=2775 runtime=io.containerd.runc.v2 Feb 9 19:27:42.227107 systemd[1]: run-containerd-runc-k8s.io-17dae6b81b2b4e373771930a83c8ef1fca365569e65ae1235d19259ce979f980-runc.Ev84Yz.mount: Deactivated successfully. 
Feb 9 19:27:42.230564 systemd[1]: Started cri-containerd-17dae6b81b2b4e373771930a83c8ef1fca365569e65ae1235d19259ce979f980.scope. Feb 9 19:27:42.271907 env[1263]: time="2024-02-09T19:27:42.271861431Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-wfkdk,Uid:61438b07-09a9-4657-8693-ca03588174ba,Namespace:default,Attempt:0,} returns sandbox id \"17dae6b81b2b4e373771930a83c8ef1fca365569e65ae1235d19259ce979f980\"" Feb 9 19:27:42.273576 env[1263]: time="2024-02-09T19:27:42.273538740Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:27:42.394473 kubelet[1745]: E0209 19:27:42.394427 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:43.395518 kubelet[1745]: E0209 19:27:43.395485 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:44.396171 kubelet[1745]: E0209 19:27:44.396098 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:45.396887 kubelet[1745]: E0209 19:27:45.396847 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:46.397703 kubelet[1745]: E0209 19:27:46.397647 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:47.397872 kubelet[1745]: E0209 19:27:47.397825 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:47.467577 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2294429881.mount: Deactivated successfully. 
Feb 9 19:27:48.398016 kubelet[1745]: E0209 19:27:48.397969 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:27:48.422797 env[1263]: time="2024-02-09T19:27:48.422748339Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:48.427074 env[1263]: time="2024-02-09T19:27:48.427036159Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:48.431956 env[1263]: time="2024-02-09T19:27:48.431924081Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:48.435400 env[1263]: time="2024-02-09T19:27:48.435366697Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:27:48.435984 env[1263]: time="2024-02-09T19:27:48.435956000Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:27:48.438207 env[1263]: time="2024-02-09T19:27:48.438177610Z" level=info msg="CreateContainer within sandbox \"17dae6b81b2b4e373771930a83c8ef1fca365569e65ae1235d19259ce979f980\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 19:27:48.465860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount515432526.mount: Deactivated successfully. Feb 9 19:27:48.481877 env[1263]: time="2024-02-09T19:27:48.481823310Z" level=info msg="CreateContainer within sandbox \"17dae6b81b2b4e373771930a83c8ef1fca365569e65ae1235d19259ce979f980\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"7ac8111c7e38a234658e272d5d50a78b6eded23711644b8e6e8152e3c79f8fa6\"" Feb 9 19:27:48.482382 env[1263]: time="2024-02-09T19:27:48.482304312Z" level=info msg="StartContainer for \"7ac8111c7e38a234658e272d5d50a78b6eded23711644b8e6e8152e3c79f8fa6\"" Feb 9 19:27:48.501613 systemd[1]: Started cri-containerd-7ac8111c7e38a234658e272d5d50a78b6eded23711644b8e6e8152e3c79f8fa6.scope. 
Feb 9 19:27:48.532853 env[1263]: time="2024-02-09T19:27:48.532812744Z" level=info msg="StartContainer for \"7ac8111c7e38a234658e272d5d50a78b6eded23711644b8e6e8152e3c79f8fa6\" returns successfully"
Feb 9 19:27:48.617739 kubelet[1745]: I0209 19:27:48.617710 1745 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-wfkdk" podStartSLOduration=5.454467469 podCreationTimestamp="2024-02-09 19:27:37 +0000 UTC" firstStartedPulling="2024-02-09 19:27:42.273040237 +0000 UTC m=+29.528036443" lastFinishedPulling="2024-02-09 19:27:48.436254501 +0000 UTC m=+35.691250707" observedRunningTime="2024-02-09 19:27:48.617426032 +0000 UTC m=+35.872422338" watchObservedRunningTime="2024-02-09 19:27:48.617681733 +0000 UTC m=+35.872678039"
Feb 9 19:27:49.398277 kubelet[1745]: E0209 19:27:49.398200 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:27:50.398580 kubelet[1745]: E0209 19:27:50.398515 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:27:51.398992 kubelet[1745]: E0209 19:27:51.398930 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:27:52.399554 kubelet[1745]: E0209 19:27:52.399497 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:27:53.374768 kubelet[1745]: E0209 19:27:53.374708 1745 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:27:53.399916 kubelet[1745]: E0209 19:27:53.399864 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:27:54.400330 kubelet[1745]: E0209 19:27:54.400278 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:27:54.557547 kubelet[1745]: I0209 19:27:54.557498 1745 topology_manager.go:215] "Topology Admit Handler" podUID="12001a56-c690-46cc-a648-b86234b75245" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 9 19:27:54.564096 systemd[1]: Created slice kubepods-besteffort-pod12001a56_c690_46cc_a648_b86234b75245.slice.
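Note on the pod_startup_latency_tracker entry above: podStartSLOduration is not simply the running time minus the creation time; the image-pull window is subtracted out, so the metric covers only the time the kubelet itself was responsible for. The logged numbers are consistent with that. A quick check in Python, with values copied from the entry (Python datetimes carry only microseconds, so the nanosecond digits are truncated):

    from datetime import datetime, timezone

    UTC = timezone.utc
    # Timestamps copied from the kubelet entry above.
    created  = datetime(2024, 2, 9, 19, 27, 37, 0, UTC)        # podCreationTimestamp
    pull_a   = datetime(2024, 2, 9, 19, 27, 42, 273040, UTC)   # firstStartedPulling
    pull_b   = datetime(2024, 2, 9, 19, 27, 48, 436254, UTC)   # lastFinishedPulling
    observed = datetime(2024, 2, 9, 19, 27, 48, 617681, UTC)   # watchObservedRunningTime

    slo = (observed - created) - (pull_b - pull_a)
    print(slo.total_seconds())  # 5.454467 -- matches podStartSLOduration=5.454467469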
Feb 9 19:27:54.653947 kubelet[1745]: I0209 19:27:54.653510 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/12001a56-c690-46cc-a648-b86234b75245-data\") pod \"nfs-server-provisioner-0\" (UID: \"12001a56-c690-46cc-a648-b86234b75245\") " pod="default/nfs-server-provisioner-0"
Feb 9 19:27:54.653947 kubelet[1745]: I0209 19:27:54.653637 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b874k\" (UniqueName: \"kubernetes.io/projected/12001a56-c690-46cc-a648-b86234b75245-kube-api-access-b874k\") pod \"nfs-server-provisioner-0\" (UID: \"12001a56-c690-46cc-a648-b86234b75245\") " pod="default/nfs-server-provisioner-0"
Feb 9 19:27:54.871778 env[1263]: time="2024-02-09T19:27:54.871716514Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:12001a56-c690-46cc-a648-b86234b75245,Namespace:default,Attempt:0,}"
Feb 9 19:27:54.928504 systemd-networkd[1377]: lxc1f0b58c8d096: Link UP
Feb 9 19:27:54.936323 kernel: eth0: renamed from tmp275d4
Feb 9 19:27:54.949588 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:27:54.949667 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc1f0b58c8d096: link becomes ready
Feb 9 19:27:54.950335 systemd-networkd[1377]: lxc1f0b58c8d096: Gained carrier
Feb 9 19:27:55.177134 env[1263]: time="2024-02-09T19:27:55.177061618Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:27:55.177350 env[1263]: time="2024-02-09T19:27:55.177097718Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:27:55.177350 env[1263]: time="2024-02-09T19:27:55.177111318Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:27:55.177350 env[1263]: time="2024-02-09T19:27:55.177267519Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/275d49aa63f44842473fbf6c24a157ed8cda609347b3e1b5661148a62195a6b5 pid=2903 runtime=io.containerd.runc.v2
Feb 9 19:27:55.195473 systemd[1]: Started cri-containerd-275d49aa63f44842473fbf6c24a157ed8cda609347b3e1b5661148a62195a6b5.scope.
Feb 9 19:27:55.234871 env[1263]: time="2024-02-09T19:27:55.234824863Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:12001a56-c690-46cc-a648-b86234b75245,Namespace:default,Attempt:0,} returns sandbox id \"275d49aa63f44842473fbf6c24a157ed8cda609347b3e1b5661148a62195a6b5\""
Feb 9 19:27:55.236622 env[1263]: time="2024-02-09T19:27:55.236598771Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 9 19:27:55.401317 kubelet[1745]: E0209 19:27:55.401170 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:27:55.982582 systemd-networkd[1377]: lxc1f0b58c8d096: Gained IPv6LL
Feb 9 19:27:56.402082 kubelet[1745]: E0209 19:27:56.402022 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:27:57.402645 kubelet[1745]: E0209 19:27:57.402570 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:27:57.882389 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2787330075.mount: Deactivated successfully.
Feb 9 19:27:58.403008 kubelet[1745]: E0209 19:27:58.402941 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:27:59.403233 kubelet[1745]: E0209 19:27:59.403182 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:27:59.864972 env[1263]: time="2024-02-09T19:27:59.864923770Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:27:59.876447 env[1263]: time="2024-02-09T19:27:59.876406917Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:27:59.883276 env[1263]: time="2024-02-09T19:27:59.883241745Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:27:59.887371 env[1263]: time="2024-02-09T19:27:59.887340662Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:27:59.887966 env[1263]: time="2024-02-09T19:27:59.887933964Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 9 19:27:59.889976 env[1263]: time="2024-02-09T19:27:59.889943773Z" level=info msg="CreateContainer within sandbox \"275d49aa63f44842473fbf6c24a157ed8cda609347b3e1b5661148a62195a6b5\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 9 19:27:59.912660 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1504519156.mount: Deactivated successfully.
Feb 9 19:27:59.928236 env[1263]: time="2024-02-09T19:27:59.928192229Z" level=info msg="CreateContainer within sandbox \"275d49aa63f44842473fbf6c24a157ed8cda609347b3e1b5661148a62195a6b5\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"8d2d62d6f1036b534691310a599969821a2a03e80958433146ab27ce790ba48c\""
Feb 9 19:27:59.928700 env[1263]: time="2024-02-09T19:27:59.928673931Z" level=info msg="StartContainer for \"8d2d62d6f1036b534691310a599969821a2a03e80958433146ab27ce790ba48c\""
Feb 9 19:27:59.950140 systemd[1]: Started cri-containerd-8d2d62d6f1036b534691310a599969821a2a03e80958433146ab27ce790ba48c.scope.
Feb 9 19:27:59.981426 env[1263]: time="2024-02-09T19:27:59.981382847Z" level=info msg="StartContainer for \"8d2d62d6f1036b534691310a599969821a2a03e80958433146ab27ce790ba48c\" returns successfully"
Feb 9 19:28:00.403988 kubelet[1745]: E0209 19:28:00.403927 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:00.648114 kubelet[1745]: I0209 19:28:00.648077 1745 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.995836255 podCreationTimestamp="2024-02-09 19:27:54 +0000 UTC" firstStartedPulling="2024-02-09 19:27:55.235990368 +0000 UTC m=+42.490986574" lastFinishedPulling="2024-02-09 19:27:59.888196165 +0000 UTC m=+47.143192471" observedRunningTime="2024-02-09 19:28:00.647473749 +0000 UTC m=+47.902469955" watchObservedRunningTime="2024-02-09 19:28:00.648042152 +0000 UTC m=+47.903038458"
Feb 9 19:28:01.404344 kubelet[1745]: E0209 19:28:01.404278 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:02.405359 kubelet[1745]: E0209 19:28:02.405300 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:03.406245 kubelet[1745]: E0209 19:28:03.406176 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:04.406439 kubelet[1745]: E0209 19:28:04.406379 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:05.407020 kubelet[1745]: E0209 19:28:05.406959 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:06.408106 kubelet[1745]: E0209 19:28:06.408046 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:07.409083 kubelet[1745]: E0209 19:28:07.409024 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:08.410258 kubelet[1745]: E0209 19:28:08.410166 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:09.410576 kubelet[1745]: E0209 19:28:09.410514 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:09.597753 kubelet[1745]: I0209 19:28:09.597701 1745 topology_manager.go:215] "Topology Admit Handler" podUID="dd6f7af8-91d1-4972-b3f3-5cc50c72804e" podNamespace="default" podName="test-pod-1"
Feb 9 19:28:09.602823 systemd[1]: Created slice kubepods-besteffort-poddd6f7af8_91d1_4972_b3f3_5cc50c72804e.slice.
Feb 9 19:28:09.748138 kubelet[1745]: I0209 19:28:09.747699 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-25cbc3af-b28e-44e6-967c-8f28883610e2\" (UniqueName: \"kubernetes.io/nfs/dd6f7af8-91d1-4972-b3f3-5cc50c72804e-pvc-25cbc3af-b28e-44e6-967c-8f28883610e2\") pod \"test-pod-1\" (UID: \"dd6f7af8-91d1-4972-b3f3-5cc50c72804e\") " pod="default/test-pod-1"
Feb 9 19:28:09.748138 kubelet[1745]: I0209 19:28:09.747769 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9d6zs\" (UniqueName: \"kubernetes.io/projected/dd6f7af8-91d1-4972-b3f3-5cc50c72804e-kube-api-access-9d6zs\") pod \"test-pod-1\" (UID: \"dd6f7af8-91d1-4972-b3f3-5cc50c72804e\") " pod="default/test-pod-1"
Feb 9 19:28:10.111335 kernel: FS-Cache: Loaded
Feb 9 19:28:10.232614 kernel: RPC: Registered named UNIX socket transport module.
Feb 9 19:28:10.232751 kernel: RPC: Registered udp transport module.
Feb 9 19:28:10.232778 kernel: RPC: Registered tcp transport module.
Feb 9 19:28:10.237826 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 9 19:28:10.410884 kubelet[1745]: E0209 19:28:10.410767 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:10.501380 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 9 19:28:10.740254 kernel: NFS: Registering the id_resolver key type
Feb 9 19:28:10.740406 kernel: Key type id_resolver registered
Feb 9 19:28:10.742333 kernel: Key type id_legacy registered
Feb 9 19:28:11.126761 nfsidmap[3017]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-75193cbbcb'
Feb 9 19:28:11.143790 nfsidmap[3018]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-75193cbbcb'
Feb 9 19:28:11.407148 env[1263]: time="2024-02-09T19:28:11.407002767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:dd6f7af8-91d1-4972-b3f3-5cc50c72804e,Namespace:default,Attempt:0,}"
Feb 9 19:28:11.411126 kubelet[1745]: E0209 19:28:11.411097 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:11.471651 systemd-networkd[1377]: lxc447bceb6fecf: Link UP
Feb 9 19:28:11.476257 kernel: eth0: renamed from tmpf5107
Feb 9 19:28:11.489911 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:28:11.489999 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc447bceb6fecf: link becomes ready
Feb 9 19:28:11.490108 systemd-networkd[1377]: lxc447bceb6fecf: Gained carrier
Feb 9 19:28:11.710783 env[1263]: time="2024-02-09T19:28:11.710601297Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:28:11.711040 env[1263]: time="2024-02-09T19:28:11.710992098Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:28:11.711183 env[1263]: time="2024-02-09T19:28:11.711158599Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:28:11.711545 env[1263]: time="2024-02-09T19:28:11.711495300Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f5107ff7245783b676dd1a9fe5434bba28082d31acc26973874e54c8641c5285 pid=3044 runtime=io.containerd.runc.v2
Feb 9 19:28:11.724515 systemd[1]: Started cri-containerd-f5107ff7245783b676dd1a9fe5434bba28082d31acc26973874e54c8641c5285.scope.
Feb 9 19:28:11.766173 env[1263]: time="2024-02-09T19:28:11.766129603Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:dd6f7af8-91d1-4972-b3f3-5cc50c72804e,Namespace:default,Attempt:0,} returns sandbox id \"f5107ff7245783b676dd1a9fe5434bba28082d31acc26973874e54c8641c5285\""
Feb 9 19:28:11.767817 env[1263]: time="2024-02-09T19:28:11.767776409Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 9 19:28:12.326123 env[1263]: time="2024-02-09T19:28:12.326069778Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:12.331353 env[1263]: time="2024-02-09T19:28:12.331317597Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:12.335473 env[1263]: time="2024-02-09T19:28:12.335441613Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:12.338460 env[1263]: time="2024-02-09T19:28:12.338428124Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:12.339021 env[1263]: time="2024-02-09T19:28:12.338991726Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 9 19:28:12.341088 env[1263]: time="2024-02-09T19:28:12.341059533Z" level=info msg="CreateContainer within sandbox \"f5107ff7245783b676dd1a9fe5434bba28082d31acc26973874e54c8641c5285\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 9 19:28:12.379169 env[1263]: time="2024-02-09T19:28:12.379118674Z" level=info msg="CreateContainer within sandbox \"f5107ff7245783b676dd1a9fe5434bba28082d31acc26973874e54c8641c5285\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"00c38be44d540af652b17780ecaafe09293891d61ad2a24e5b84e6fa51300918\""
Feb 9 19:28:12.379899 env[1263]: time="2024-02-09T19:28:12.379865477Z" level=info msg="StartContainer for \"00c38be44d540af652b17780ecaafe09293891d61ad2a24e5b84e6fa51300918\""
Feb 9 19:28:12.405064 systemd[1]: Started cri-containerd-00c38be44d540af652b17780ecaafe09293891d61ad2a24e5b84e6fa51300918.scope.
Feb 9 19:28:12.413668 kubelet[1745]: E0209 19:28:12.413375 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:12.440099 env[1263]: time="2024-02-09T19:28:12.440058099Z" level=info msg="StartContainer for \"00c38be44d540af652b17780ecaafe09293891d61ad2a24e5b84e6fa51300918\" returns successfully"
Feb 9 19:28:12.673435 kubelet[1745]: I0209 19:28:12.673402 1745 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=17.101321341 podCreationTimestamp="2024-02-09 19:27:55 +0000 UTC" firstStartedPulling="2024-02-09 19:28:11.767258407 +0000 UTC m=+59.022254613" lastFinishedPulling="2024-02-09 19:28:12.339304727 +0000 UTC m=+59.594301033" observedRunningTime="2024-02-09 19:28:12.671811755 +0000 UTC m=+59.926807961" watchObservedRunningTime="2024-02-09 19:28:12.673367761 +0000 UTC m=+59.928363967"
Feb 9 19:28:12.878567 systemd-networkd[1377]: lxc447bceb6fecf: Gained IPv6LL
Feb 9 19:28:13.374440 kubelet[1745]: E0209 19:28:13.374385 1745 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:13.413964 kubelet[1745]: E0209 19:28:13.413926 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:14.414767 kubelet[1745]: E0209 19:28:14.414711 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:15.415236 kubelet[1745]: E0209 19:28:15.415161 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:16.416248 kubelet[1745]: E0209 19:28:16.416181 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:17.416699 kubelet[1745]: E0209 19:28:17.416645 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:18.417630 kubelet[1745]: E0209 19:28:18.417570 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:18.683760 env[1263]: time="2024-02-09T19:28:18.683383221Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:28:18.688720 env[1263]: time="2024-02-09T19:28:18.688682340Z" level=info msg="StopContainer for \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\" with timeout 2 (s)"
Feb 9 19:28:18.688953 env[1263]: time="2024-02-09T19:28:18.688922441Z" level=info msg="Stop container \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\" with signal terminated"
Feb 9 19:28:18.695688 systemd-networkd[1377]: lxc_health: Link DOWN
Feb 9 19:28:18.695697 systemd-networkd[1377]: lxc_health: Lost carrier
Feb 9 19:28:18.715577 systemd[1]: cri-containerd-5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c.scope: Deactivated successfully.
Feb 9 19:28:18.715886 systemd[1]: cri-containerd-5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c.scope: Consumed 6.701s CPU time.
Feb 9 19:28:18.736286 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c-rootfs.mount: Deactivated successfully.
Feb 9 19:28:19.418058 kubelet[1745]: E0209 19:28:19.418001 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:20.418565 kubelet[1745]: E0209 19:28:20.418514 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:20.698978 env[1263]: time="2024-02-09T19:28:20.698850047Z" level=info msg="Kill container \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\""
Feb 9 19:28:21.419029 kubelet[1745]: E0209 19:28:21.418975 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:21.707802 env[1263]: time="2024-02-09T19:28:21.707636184Z" level=info msg="shim disconnected" id=5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c
Feb 9 19:28:21.707802 env[1263]: time="2024-02-09T19:28:21.707697684Z" level=warning msg="cleaning up after shim disconnected" id=5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c namespace=k8s.io
Feb 9 19:28:21.707802 env[1263]: time="2024-02-09T19:28:21.707712184Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:21.716490 env[1263]: time="2024-02-09T19:28:21.716450714Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3182 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:21.722286 env[1263]: time="2024-02-09T19:28:21.722178835Z" level=info msg="StopContainer for \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\" returns successfully"
Feb 9 19:28:21.722925 env[1263]: time="2024-02-09T19:28:21.722895537Z" level=info msg="StopPodSandbox for \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\""
Feb 9 19:28:21.723018 env[1263]: time="2024-02-09T19:28:21.722955037Z" level=info msg="Container to stop \"cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:21.723018 env[1263]: time="2024-02-09T19:28:21.722974137Z" level=info msg="Container to stop \"bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:21.723018 env[1263]: time="2024-02-09T19:28:21.722989537Z" level=info msg="Container to stop \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:21.723018 env[1263]: time="2024-02-09T19:28:21.723004637Z" level=info msg="Container to stop \"433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:21.725480 env[1263]: time="2024-02-09T19:28:21.723018937Z" level=info msg="Container to stop \"2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 9 19:28:21.725125 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995-shm.mount: Deactivated successfully.
Feb 9 19:28:21.732072 systemd[1]: cri-containerd-2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995.scope: Deactivated successfully.
Feb 9 19:28:21.749059 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995-rootfs.mount: Deactivated successfully.
Feb 9 19:28:21.765300 env[1263]: time="2024-02-09T19:28:21.765256485Z" level=info msg="shim disconnected" id=2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995
Feb 9 19:28:21.765495 env[1263]: time="2024-02-09T19:28:21.765474086Z" level=warning msg="cleaning up after shim disconnected" id=2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995 namespace=k8s.io
Feb 9 19:28:21.765618 env[1263]: time="2024-02-09T19:28:21.765493686Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:21.773208 env[1263]: time="2024-02-09T19:28:21.773176013Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3212 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:21.773480 env[1263]: time="2024-02-09T19:28:21.773451214Z" level=info msg="TearDown network for sandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" successfully"
Feb 9 19:28:21.773480 env[1263]: time="2024-02-09T19:28:21.773476314Z" level=info msg="StopPodSandbox for \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" returns successfully"
Feb 9 19:28:21.924410 kubelet[1745]: I0209 19:28:21.924362 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-bpf-maps\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.924636 kubelet[1745]: I0209 19:28:21.924459 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-host-proc-sys-net\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.924636 kubelet[1745]: I0209 19:28:21.924379 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:21.924636 kubelet[1745]: I0209 19:28:21.924547 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6c0c2e2-7189-40b8-b764-055e8c766bde-hubble-tls\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.924636 kubelet[1745]: I0209 19:28:21.924595 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:21.924935 kubelet[1745]: I0209 19:28:21.924914 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6c0c2e2-7189-40b8-b764-055e8c766bde-clustermesh-secrets\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.925088 kubelet[1745]: I0209 19:28:21.925072 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjnzq\" (UniqueName: \"kubernetes.io/projected/a6c0c2e2-7189-40b8-b764-055e8c766bde-kube-api-access-cjnzq\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.925239 kubelet[1745]: I0209 19:28:21.925210 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-cni-path\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.925365 kubelet[1745]: I0209 19:28:21.925351 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-hostproc\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.925485 kubelet[1745]: I0209 19:28:21.925472 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-cilium-cgroup\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.925607 kubelet[1745]: I0209 19:28:21.925592 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-etc-cni-netd\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.925729 kubelet[1745]: I0209 19:28:21.925714 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-xtables-lock\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.925861 kubelet[1745]: I0209 19:28:21.925851 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-cilium-run\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.925948 kubelet[1745]: I0209 19:28:21.925939 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-lib-modules\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.926045 kubelet[1745]: I0209 19:28:21.926035 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6c0c2e2-7189-40b8-b764-055e8c766bde-cilium-config-path\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.926139 kubelet[1745]: I0209 19:28:21.926130 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-host-proc-sys-kernel\") pod \"a6c0c2e2-7189-40b8-b764-055e8c766bde\" (UID: \"a6c0c2e2-7189-40b8-b764-055e8c766bde\") "
Feb 9 19:28:21.926268 kubelet[1745]: I0209 19:28:21.926255 1745 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-bpf-maps\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:21.926367 kubelet[1745]: I0209 19:28:21.926357 1745 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-host-proc-sys-net\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:21.926473 kubelet[1745]: I0209 19:28:21.926457 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:21.926576 kubelet[1745]: I0209 19:28:21.926557 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-cni-path" (OuterVolumeSpecName: "cni-path") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:21.926670 kubelet[1745]: I0209 19:28:21.926656 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-hostproc" (OuterVolumeSpecName: "hostproc") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:21.926756 kubelet[1745]: I0209 19:28:21.926743 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:21.926843 kubelet[1745]: I0209 19:28:21.926831 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:21.926925 kubelet[1745]: I0209 19:28:21.926913 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:21.927007 kubelet[1745]: I0209 19:28:21.926993 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:21.927098 kubelet[1745]: I0209 19:28:21.927083 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:21.930915 systemd[1]: var-lib-kubelet-pods-a6c0c2e2\x2d7189\x2d40b8\x2db764\x2d055e8c766bde-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 19:28:21.932204 kubelet[1745]: I0209 19:28:21.932162 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c0c2e2-7189-40b8-b764-055e8c766bde-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:28:21.933151 kubelet[1745]: I0209 19:28:21.933117 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a6c0c2e2-7189-40b8-b764-055e8c766bde-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:28:21.936781 systemd[1]: var-lib-kubelet-pods-a6c0c2e2\x2d7189\x2d40b8\x2db764\x2d055e8c766bde-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:28:21.937932 kubelet[1745]: I0209 19:28:21.937909 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6c0c2e2-7189-40b8-b764-055e8c766bde-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:28:21.939595 kubelet[1745]: I0209 19:28:21.939569 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6c0c2e2-7189-40b8-b764-055e8c766bde-kube-api-access-cjnzq" (OuterVolumeSpecName: "kube-api-access-cjnzq") pod "a6c0c2e2-7189-40b8-b764-055e8c766bde" (UID: "a6c0c2e2-7189-40b8-b764-055e8c766bde"). InnerVolumeSpecName "kube-api-access-cjnzq". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:28:21.940507 systemd[1]: var-lib-kubelet-pods-a6c0c2e2\x2d7189\x2d40b8\x2db764\x2d055e8c766bde-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dcjnzq.mount: Deactivated successfully.
Feb 9 19:28:22.028315 kubelet[1745]: I0209 19:28:22.027102 1745 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/a6c0c2e2-7189-40b8-b764-055e8c766bde-hubble-tls\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:22.028315 kubelet[1745]: I0209 19:28:22.027154 1745 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/a6c0c2e2-7189-40b8-b764-055e8c766bde-clustermesh-secrets\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:22.028315 kubelet[1745]: I0209 19:28:22.027172 1745 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-cjnzq\" (UniqueName: \"kubernetes.io/projected/a6c0c2e2-7189-40b8-b764-055e8c766bde-kube-api-access-cjnzq\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:22.028315 kubelet[1745]: I0209 19:28:22.027195 1745 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-cni-path\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:22.028315 kubelet[1745]: I0209 19:28:22.027213 1745 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-hostproc\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:22.028315 kubelet[1745]: I0209 19:28:22.027249 1745 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-cilium-cgroup\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:22.028315 kubelet[1745]: I0209 19:28:22.027265 1745 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-etc-cni-netd\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:22.028315 kubelet[1745]: I0209 19:28:22.027280 1745 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-xtables-lock\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:22.028854 kubelet[1745]: I0209 19:28:22.027295 1745 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-cilium-run\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:22.028854 kubelet[1745]: I0209 19:28:22.027310 1745 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-lib-modules\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:22.028854 kubelet[1745]: I0209 19:28:22.027328 1745 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/a6c0c2e2-7189-40b8-b764-055e8c766bde-cilium-config-path\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:22.028854 kubelet[1745]: I0209 19:28:22.027346 1745 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/a6c0c2e2-7189-40b8-b764-055e8c766bde-host-proc-sys-kernel\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:22.419944 kubelet[1745]: E0209 19:28:22.419892 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:22.631339 kubelet[1745]: I0209 19:28:22.631300 1745 topology_manager.go:215] "Topology Admit Handler" podUID="015218a3-e107-4079-85a9-93c655ff544f" podNamespace="kube-system" podName="cilium-5kvb5"
Feb 9 19:28:22.631575 kubelet[1745]: E0209 19:28:22.631542 1745 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6c0c2e2-7189-40b8-b764-055e8c766bde" containerName="mount-cgroup"
Feb 9 19:28:22.631575 kubelet[1745]: E0209 19:28:22.631567 1745 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6c0c2e2-7189-40b8-b764-055e8c766bde" containerName="apply-sysctl-overwrites"
Feb 9 19:28:22.631756 kubelet[1745]: E0209 19:28:22.631580 1745 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6c0c2e2-7189-40b8-b764-055e8c766bde" containerName="mount-bpf-fs"
Feb 9 19:28:22.631756 kubelet[1745]: E0209 19:28:22.631591 1745 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6c0c2e2-7189-40b8-b764-055e8c766bde" containerName="clean-cilium-state"
Feb 9 19:28:22.631756 kubelet[1745]: E0209 19:28:22.631602 1745 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="a6c0c2e2-7189-40b8-b764-055e8c766bde" containerName="cilium-agent"
Feb 9 19:28:22.631756 kubelet[1745]: I0209 19:28:22.631633 1745 memory_manager.go:346] "RemoveStaleState removing state" podUID="a6c0c2e2-7189-40b8-b764-055e8c766bde" containerName="cilium-agent"
Feb 9 19:28:22.637339 systemd[1]: Created slice kubepods-burstable-pod015218a3_e107_4079_85a9_93c655ff544f.slice.
Feb 9 19:28:22.639517 kubelet[1745]: W0209 19:28:22.639493 1745 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.200.8.10" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.8.10' and this object
Feb 9 19:28:22.639713 kubelet[1745]: E0209 19:28:22.639697 1745 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.200.8.10" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.8.10' and this object
Feb 9 19:28:22.643975 kubelet[1745]: I0209 19:28:22.643952 1745 topology_manager.go:215] "Topology Admit Handler" podUID="d5cb8c50-014d-4afe-aa4a-1f3b7435bdb6" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-wblnx"
Feb 9 19:28:22.653290 systemd[1]: Created slice kubepods-besteffort-podd5cb8c50_014d_4afe_aa4a_1f3b7435bdb6.slice.
Feb 9 19:28:22.684539 kubelet[1745]: I0209 19:28:22.683801 1745 scope.go:117] "RemoveContainer" containerID="5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c"
Feb 9 19:28:22.686144 env[1263]: time="2024-02-09T19:28:22.685794795Z" level=info msg="RemoveContainer for \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\""
Feb 9 19:28:22.688322 systemd[1]: Removed slice kubepods-burstable-poda6c0c2e2_7189_40b8_b764_055e8c766bde.slice.
Feb 9 19:28:22.688449 systemd[1]: kubepods-burstable-poda6c0c2e2_7189_40b8_b764_055e8c766bde.slice: Consumed 6.790s CPU time.
Feb 9 19:28:22.692491 env[1263]: time="2024-02-09T19:28:22.692451418Z" level=info msg="RemoveContainer for \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\" returns successfully"
Feb 9 19:28:22.692836 kubelet[1745]: I0209 19:28:22.692814 1745 scope.go:117] "RemoveContainer" containerID="bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9"
Feb 9 19:28:22.693812 env[1263]: time="2024-02-09T19:28:22.693779423Z" level=info msg="RemoveContainer for \"bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9\""
Feb 9 19:28:22.701317 env[1263]: time="2024-02-09T19:28:22.701285449Z" level=info msg="RemoveContainer for \"bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9\" returns successfully"
Feb 9 19:28:22.701469 kubelet[1745]: I0209 19:28:22.701452 1745 scope.go:117] "RemoveContainer" containerID="cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd"
Feb 9 19:28:22.702372 env[1263]: time="2024-02-09T19:28:22.702340453Z" level=info msg="RemoveContainer for \"cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd\""
Feb 9 19:28:22.708709 env[1263]: time="2024-02-09T19:28:22.708677475Z" level=info msg="RemoveContainer for \"cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd\" returns successfully"
Feb 9 19:28:22.709024 kubelet[1745]: I0209 19:28:22.708855 1745 scope.go:117] "RemoveContainer" containerID="2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601"
Feb 9 19:28:22.709747 env[1263]: time="2024-02-09T19:28:22.709721478Z" level=info msg="RemoveContainer for \"2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601\""
Feb 9 19:28:22.716596 env[1263]: time="2024-02-09T19:28:22.716564402Z" level=info msg="RemoveContainer for \"2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601\" returns successfully"
Feb 9 19:28:22.716711 kubelet[1745]: I0209 19:28:22.716695 1745 scope.go:117] "RemoveContainer" containerID="433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7"
Feb 9 19:28:22.717551 env[1263]: time="2024-02-09T19:28:22.717525406Z" level=info msg="RemoveContainer for \"433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7\""
Feb 9 19:28:22.726038 env[1263]: time="2024-02-09T19:28:22.726005535Z" level=info msg="RemoveContainer for \"433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7\" returns successfully"
Feb 9 19:28:22.726185 kubelet[1745]: I0209 19:28:22.726153 1745 scope.go:117] "RemoveContainer" containerID="5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c"
Feb 9 19:28:22.726443 env[1263]: time="2024-02-09T19:28:22.726365436Z" level=error msg="ContainerStatus for \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\": not found"
Feb 9 19:28:22.726576 kubelet[1745]: E0209 19:28:22.726558 1745 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\": not found" containerID="5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c"
Feb 9 19:28:22.726676 kubelet[1745]: I0209 19:28:22.726661 1745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c"} err="failed to get container status \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\": rpc error: code = NotFound desc = an error occurred when try to find container \"5b1caf3b8ec273af167de8ab95919de32f4503154902aec56a9b210adcfc109c\": not found"
Feb 9 19:28:22.726743 kubelet[1745]: I0209 19:28:22.726679 1745 scope.go:117] "RemoveContainer" containerID="bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9"
Feb 9 19:28:22.726925 env[1263]: time="2024-02-09T19:28:22.726877438Z" level=error msg="ContainerStatus for \"bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9\": not found"
Feb 9 19:28:22.727055 kubelet[1745]: E0209 19:28:22.727033 1745 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9\": not found" containerID="bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9"
Feb 9 19:28:22.727129 kubelet[1745]: I0209 19:28:22.727076 1745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9"} err="failed to get container status \"bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9\": rpc error: code = NotFound desc = an error occurred when try to find container \"bdb0dc5a21f2881f717b0ffaeb394293a03ea0cf00ab8d0c0f6643ba4b08dfb9\": not found"
Feb 9 19:28:22.727129 kubelet[1745]: I0209 19:28:22.727090 1745 scope.go:117] "RemoveContainer" containerID="cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd"
Feb 9 19:28:22.727311 env[1263]: time="2024-02-09T19:28:22.727266439Z" level=error msg="ContainerStatus for \"cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd\": not found"
Feb 9 19:28:22.727428 kubelet[1745]: E0209 19:28:22.727412 1745 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd\": not found" containerID="cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd"
Feb 9 19:28:22.727498 kubelet[1745]: I0209 19:28:22.727447 1745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd"} err="failed to get container status \"cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd\": rpc error: code = NotFound desc = an error occurred when try to find container \"cf17adf607e85664da685784c0e91fec06b17988848df8c03ba5ed30025b58cd\": not found"
Feb 9 19:28:22.727498 kubelet[1745]: I0209 19:28:22.727460 1745 scope.go:117] "RemoveContainer" containerID="2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601"
Feb 9 19:28:22.727675 env[1263]: time="2024-02-09T19:28:22.727621341Z" level=error msg="ContainerStatus for \"2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601\": not found"
Feb 9 19:28:22.727793 kubelet[1745]: E0209 19:28:22.727773 1745 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601\": not found" containerID="2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601"
Feb 9 19:28:22.727860 kubelet[1745]: I0209 19:28:22.727805 1745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601"} err="failed to get container status \"2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601\": rpc error: code = NotFound desc = an error occurred when try to find container \"2d1c3a475a364edf136e45d54642fc686ab1d4a2c8ca781d027dde4fa0468601\": not found"
Feb 9 19:28:22.727860 kubelet[1745]: I0209 19:28:22.727818 1745 scope.go:117] "RemoveContainer" containerID="433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7"
Feb 9 19:28:22.728018 env[1263]: time="2024-02-09T19:28:22.727972642Z" level=error msg="ContainerStatus for \"433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7\": not found"
Feb 9 19:28:22.728135 kubelet[1745]: E0209 19:28:22.728118 1745 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7\": not found" containerID="433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7"
Feb 9 19:28:22.728200 kubelet[1745]: I0209 19:28:22.728158 1745 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7"} err="failed to get container status \"433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7\": rpc error: code = NotFound desc = an error occurred when try to find container \"433cd6a6cfed7ccfcd3a4325feb2139ee297a479fbab89bedc7e48b3270f36b7\": not found"
Feb 9 19:28:22.731449 kubelet[1745]: I0209 19:28:22.731431 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-bpf-maps\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.731537 kubelet[1745]: I0209 19:28:22.731468 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-lib-modules\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.731537 kubelet[1745]: I0209 19:28:22.731497 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-xtables-lock\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.731537 kubelet[1745]: I0209 19:28:22.731525 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/015218a3-e107-4079-85a9-93c655ff544f-hubble-tls\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.731665 kubelet[1745]: I0209 19:28:22.731552 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-cilium-run\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.731665 kubelet[1745]: I0209 19:28:22.731583 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xc5rk\" (UniqueName: \"kubernetes.io/projected/015218a3-e107-4079-85a9-93c655ff544f-kube-api-access-xc5rk\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.731665 kubelet[1745]: I0209 19:28:22.731614 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-cilium-cgroup\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.731665 kubelet[1745]: I0209 19:28:22.731644 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-cni-path\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.731829 kubelet[1745]: I0209 19:28:22.731676 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-host-proc-sys-net\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.731829 kubelet[1745]: I0209 19:28:22.731703 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-hostproc\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.731829 kubelet[1745]: I0209 19:28:22.731733 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/015218a3-e107-4079-85a9-93c655ff544f-clustermesh-secrets\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.731829 kubelet[1745]: I0209 19:28:22.731761 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/015218a3-e107-4079-85a9-93c655ff544f-cilium-config-path\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.731829 kubelet[1745]: I0209 19:28:22.731793 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/015218a3-e107-4079-85a9-93c655ff544f-cilium-ipsec-secrets\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.732028 kubelet[1745]: I0209 19:28:22.731821 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-host-proc-sys-kernel\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.732028 kubelet[1745]: I0209 19:28:22.731849 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-etc-cni-netd\") pod \"cilium-5kvb5\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") " pod="kube-system/cilium-5kvb5"
Feb 9 19:28:22.833682 kubelet[1745]: I0209 19:28:22.833645 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qwpdz\" (UniqueName: \"kubernetes.io/projected/d5cb8c50-014d-4afe-aa4a-1f3b7435bdb6-kube-api-access-qwpdz\") pod \"cilium-operator-6bc8ccdb58-wblnx\" (UID: \"d5cb8c50-014d-4afe-aa4a-1f3b7435bdb6\") " pod="kube-system/cilium-operator-6bc8ccdb58-wblnx"
Feb 9 19:28:22.834050 kubelet[1745]: I0209 19:28:22.834030 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5cb8c50-014d-4afe-aa4a-1f3b7435bdb6-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-wblnx\" (UID: \"d5cb8c50-014d-4afe-aa4a-1f3b7435bdb6\") " pod="kube-system/cilium-operator-6bc8ccdb58-wblnx"
Feb 9 19:28:22.956349 env[1263]: time="2024-02-09T19:28:22.956245237Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-wblnx,Uid:d5cb8c50-014d-4afe-aa4a-1f3b7435bdb6,Namespace:kube-system,Attempt:0,}"
Feb 9 19:28:22.997805 env[1263]: time="2024-02-09T19:28:22.997736981Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:28:22.997993 env[1263]: time="2024-02-09T19:28:22.997775581Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:28:22.997993 env[1263]: time="2024-02-09T19:28:22.997793681Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:28:22.997993 env[1263]: time="2024-02-09T19:28:22.997922782Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/8925a214c9ee3829569c6735e230ea44b2b2252764f5c5623cf699c2ee105363 pid=3240 runtime=io.containerd.runc.v2
Feb 9 19:28:23.013770 systemd[1]: Started cri-containerd-8925a214c9ee3829569c6735e230ea44b2b2252764f5c5623cf699c2ee105363.scope.
Feb 9 19:28:23.054409 env[1263]: time="2024-02-09T19:28:23.054375578Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-wblnx,Uid:d5cb8c50-014d-4afe-aa4a-1f3b7435bdb6,Namespace:kube-system,Attempt:0,} returns sandbox id \"8925a214c9ee3829569c6735e230ea44b2b2252764f5c5623cf699c2ee105363\""
Feb 9 19:28:23.056058 env[1263]: time="2024-02-09T19:28:23.055998583Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 9 19:28:23.420702 kubelet[1745]: E0209 19:28:23.420665 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:23.462066 kubelet[1745]: E0209 19:28:23.462031 1745 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:28:23.524199 kubelet[1745]: I0209 19:28:23.524168 1745 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="a6c0c2e2-7189-40b8-b764-055e8c766bde" path="/var/lib/kubelet/pods/a6c0c2e2-7189-40b8-b764-055e8c766bde/volumes"
Feb 9 19:28:23.834571 kubelet[1745]: E0209 19:28:23.834531 1745 secret.go:194] Couldn't get secret kube-system/cilium-ipsec-keys: failed to sync secret cache: timed out waiting for the condition
Feb 9 19:28:23.834757 kubelet[1745]: E0209 19:28:23.834650 1745 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/015218a3-e107-4079-85a9-93c655ff544f-cilium-ipsec-secrets podName:015218a3-e107-4079-85a9-93c655ff544f nodeName:}" failed. No retries permitted until 2024-02-09 19:28:24.334619481 +0000 UTC m=+71.589615787 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "cilium-ipsec-secrets" (UniqueName: "kubernetes.io/secret/015218a3-e107-4079-85a9-93c655ff544f-cilium-ipsec-secrets") pod "cilium-5kvb5" (UID: "015218a3-e107-4079-85a9-93c655ff544f") : failed to sync secret cache: timed out waiting for the condition
Feb 9 19:28:23.858649 kubelet[1745]: E0209 19:28:23.858622 1745 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[cilium-ipsec-secrets], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-5kvb5" podUID="015218a3-e107-4079-85a9-93c655ff544f"
Feb 9 19:28:24.421744 kubelet[1745]: E0209 19:28:24.421704 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:24.850258 kubelet[1745]: I0209 19:28:24.848861 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/015218a3-e107-4079-85a9-93c655ff544f-cilium-config-path\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850258 kubelet[1745]: I0209 19:28:24.848921 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-host-proc-sys-kernel\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850258 kubelet[1745]: I0209 19:28:24.848948 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-etc-cni-netd\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850258 kubelet[1745]: I0209 19:28:24.848975 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-bpf-maps\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850258 kubelet[1745]: I0209 19:28:24.848999 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-xtables-lock\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850258 kubelet[1745]: I0209 19:28:24.849023 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-hostproc\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850649 kubelet[1745]: I0209 19:28:24.849058 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/015218a3-e107-4079-85a9-93c655ff544f-hubble-tls\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850649 kubelet[1745]: I0209 19:28:24.849084 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-lib-modules\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850649 kubelet[1745]: I0209 19:28:24.849114 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/015218a3-e107-4079-85a9-93c655ff544f-clustermesh-secrets\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850649 kubelet[1745]: I0209 19:28:24.849144 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/015218a3-e107-4079-85a9-93c655ff544f-cilium-ipsec-secrets\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850649 kubelet[1745]: I0209 19:28:24.849170 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-cilium-run\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850649 kubelet[1745]: I0209 19:28:24.849201 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xc5rk\" (UniqueName: \"kubernetes.io/projected/015218a3-e107-4079-85a9-93c655ff544f-kube-api-access-xc5rk\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850898 kubelet[1745]: I0209 19:28:24.849237 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-cilium-cgroup\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850898 kubelet[1745]: I0209 19:28:24.849264 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-cni-path\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850898 kubelet[1745]: I0209 19:28:24.849290 1745 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-host-proc-sys-net\") pod \"015218a3-e107-4079-85a9-93c655ff544f\" (UID: \"015218a3-e107-4079-85a9-93c655ff544f\") "
Feb 9 19:28:24.850898 kubelet[1745]: I0209 19:28:24.849359 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.851948 kubelet[1745]: I0209 19:28:24.851553 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.851948 kubelet[1745]: I0209 19:28:24.851900 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.853313 kubelet[1745]: I0209 19:28:24.853292 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/015218a3-e107-4079-85a9-93c655ff544f-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 9 19:28:24.853444 kubelet[1745]: I0209 19:28:24.853429 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.853549 kubelet[1745]: I0209 19:28:24.853536 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.853647 kubelet[1745]: I0209 19:28:24.853634 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.853746 kubelet[1745]: I0209 19:28:24.853734 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-hostproc" (OuterVolumeSpecName: "hostproc") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.854451 kubelet[1745]: I0209 19:28:24.854430 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.854584 kubelet[1745]: I0209 19:28:24.854569 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.854682 kubelet[1745]: I0209 19:28:24.854669 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-cni-path" (OuterVolumeSpecName: "cni-path") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 9 19:28:24.860233 systemd[1]: var-lib-kubelet-pods-015218a3\x2de107\x2d4079\x2d85a9\x2d93c655ff544f-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:28:24.862929 kubelet[1745]: I0209 19:28:24.862903 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015218a3-e107-4079-85a9-93c655ff544f-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:28:24.864914 systemd[1]: var-lib-kubelet-pods-015218a3\x2de107\x2d4079\x2d85a9\x2d93c655ff544f-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxc5rk.mount: Deactivated successfully.
Feb 9 19:28:24.871048 systemd[1]: var-lib-kubelet-pods-015218a3\x2de107\x2d4079\x2d85a9\x2d93c655ff544f-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 9 19:28:24.872065 kubelet[1745]: I0209 19:28:24.872044 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015218a3-e107-4079-85a9-93c655ff544f-kube-api-access-xc5rk" (OuterVolumeSpecName: "kube-api-access-xc5rk") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "kube-api-access-xc5rk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:28:24.873380 kubelet[1745]: I0209 19:28:24.873358 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/015218a3-e107-4079-85a9-93c655ff544f-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 9 19:28:24.875922 systemd[1]: var-lib-kubelet-pods-015218a3\x2de107\x2d4079\x2d85a9\x2d93c655ff544f-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 9 19:28:24.877196 kubelet[1745]: I0209 19:28:24.877173 1745 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/015218a3-e107-4079-85a9-93c655ff544f-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "015218a3-e107-4079-85a9-93c655ff544f" (UID: "015218a3-e107-4079-85a9-93c655ff544f"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 9 19:28:24.950320 kubelet[1745]: I0209 19:28:24.950283 1745 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-lib-modules\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.950553 kubelet[1745]: I0209 19:28:24.950540 1745 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/015218a3-e107-4079-85a9-93c655ff544f-clustermesh-secrets\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.950659 kubelet[1745]: I0209 19:28:24.950651 1745 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-cni-path\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.950744 kubelet[1745]: I0209 19:28:24.950737 1745 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-host-proc-sys-net\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.950821 kubelet[1745]: I0209 19:28:24.950810 1745 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/015218a3-e107-4079-85a9-93c655ff544f-cilium-ipsec-secrets\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.950895 kubelet[1745]: I0209 19:28:24.950888 1745 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-cilium-run\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.950973 kubelet[1745]: I0209 19:28:24.950966 1745 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-xc5rk\" (UniqueName: \"kubernetes.io/projected/015218a3-e107-4079-85a9-93c655ff544f-kube-api-access-xc5rk\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.951054 kubelet[1745]: I0209 19:28:24.951045 1745 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-cilium-cgroup\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.951134 kubelet[1745]: I0209 19:28:24.951127 1745 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-hostproc\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.951208 kubelet[1745]: I0209 19:28:24.951201 1745 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/015218a3-e107-4079-85a9-93c655ff544f-cilium-config-path\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.951308 kubelet[1745]: I0209 19:28:24.951300 1745 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-host-proc-sys-kernel\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.951390 kubelet[1745]: I0209 19:28:24.951382 1745 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-etc-cni-netd\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.951465 kubelet[1745]: I0209 19:28:24.951456 1745 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-bpf-maps\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.951542 kubelet[1745]: I0209 19:28:24.951535 1745 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/015218a3-e107-4079-85a9-93c655ff544f-xtables-lock\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:24.951626 kubelet[1745]: I0209 19:28:24.951619 1745 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/015218a3-e107-4079-85a9-93c655ff544f-hubble-tls\") on node \"10.200.8.10\" DevicePath \"\""
Feb 9 19:28:25.267118 env[1263]: time="2024-02-09T19:28:25.266259214Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:25.272104 env[1263]: time="2024-02-09T19:28:25.272059634Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:25.275988 env[1263]: time="2024-02-09T19:28:25.275955648Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:28:25.276397 env[1263]: time="2024-02-09T19:28:25.276366649Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 9 19:28:25.278432 env[1263]: time="2024-02-09T19:28:25.278403756Z" level=info msg="CreateContainer within sandbox \"8925a214c9ee3829569c6735e230ea44b2b2252764f5c5623cf699c2ee105363\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 9 19:28:25.307497 env[1263]: time="2024-02-09T19:28:25.307457856Z" level=info msg="CreateContainer within sandbox \"8925a214c9ee3829569c6735e230ea44b2b2252764f5c5623cf699c2ee105363\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"5f3cc97767830cf0cbb36a743a4d1d4bcf4d5024dc3614aef91d03321aac0f32\""
Feb 9 19:28:25.308172 env[1263]: time="2024-02-09T19:28:25.308135158Z" level=info msg="StartContainer for \"5f3cc97767830cf0cbb36a743a4d1d4bcf4d5024dc3614aef91d03321aac0f32\""
Feb 9 19:28:25.324776 systemd[1]: Started cri-containerd-5f3cc97767830cf0cbb36a743a4d1d4bcf4d5024dc3614aef91d03321aac0f32.scope.
Feb 9 19:28:25.353528 env[1263]: time="2024-02-09T19:28:25.353480214Z" level=info msg="StartContainer for \"5f3cc97767830cf0cbb36a743a4d1d4bcf4d5024dc3614aef91d03321aac0f32\" returns successfully"
Feb 9 19:28:25.422698 kubelet[1745]: E0209 19:28:25.422656 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:25.525789 systemd[1]: Removed slice kubepods-burstable-pod015218a3_e107_4079_85a9_93c655ff544f.slice.
Feb 9 19:28:25.714896 kubelet[1745]: I0209 19:28:25.714864 1745 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-wblnx" podStartSLOduration=1.493775885 podCreationTimestamp="2024-02-09 19:28:22 +0000 UTC" firstStartedPulling="2024-02-09 19:28:23.055618282 +0000 UTC m=+70.310614588" lastFinishedPulling="2024-02-09 19:28:25.27665735 +0000 UTC m=+72.531653556" observedRunningTime="2024-02-09 19:28:25.714694153 +0000 UTC m=+72.969690359" watchObservedRunningTime="2024-02-09 19:28:25.714814853 +0000 UTC m=+72.969811159"
Feb 9 19:28:25.781765 kubelet[1745]: I0209 19:28:25.781640 1745 topology_manager.go:215] "Topology Admit Handler" podUID="228b7947-547a-41c1-a9c7-774fb4433cc4" podNamespace="kube-system" podName="cilium-tlvwg"
Feb 9 19:28:25.786885 systemd[1]: Created slice kubepods-burstable-pod228b7947_547a_41c1_a9c7_774fb4433cc4.slice.
Feb 9 19:28:25.862290 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2584025177.mount: Deactivated successfully.
Feb 9 19:28:25.955877 kubelet[1745]: I0209 19:28:25.955825 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/228b7947-547a-41c1-a9c7-774fb4433cc4-cilium-run\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956083 kubelet[1745]: I0209 19:28:25.955901 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/228b7947-547a-41c1-a9c7-774fb4433cc4-xtables-lock\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956083 kubelet[1745]: I0209 19:28:25.955937 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/228b7947-547a-41c1-a9c7-774fb4433cc4-host-proc-sys-kernel\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956083 kubelet[1745]: I0209 19:28:25.955971 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/228b7947-547a-41c1-a9c7-774fb4433cc4-lib-modules\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956083 kubelet[1745]: I0209 19:28:25.956003 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9qlp7\" (UniqueName: \"kubernetes.io/projected/228b7947-547a-41c1-a9c7-774fb4433cc4-kube-api-access-9qlp7\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956083 kubelet[1745]: I0209 19:28:25.956041 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/228b7947-547a-41c1-a9c7-774fb4433cc4-host-proc-sys-net\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956083 kubelet[1745]: I0209 19:28:25.956071 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/228b7947-547a-41c1-a9c7-774fb4433cc4-hubble-tls\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956613 kubelet[1745]: I0209 19:28:25.956106 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/228b7947-547a-41c1-a9c7-774fb4433cc4-bpf-maps\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956613 kubelet[1745]: I0209 19:28:25.956142 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/228b7947-547a-41c1-a9c7-774fb4433cc4-cilium-cgroup\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956613 kubelet[1745]: I0209 19:28:25.956245 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/228b7947-547a-41c1-a9c7-774fb4433cc4-cni-path\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956613 kubelet[1745]: I0209 19:28:25.956282 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/228b7947-547a-41c1-a9c7-774fb4433cc4-cilium-ipsec-secrets\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956613 kubelet[1745]: I0209 19:28:25.956333 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/228b7947-547a-41c1-a9c7-774fb4433cc4-hostproc\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956613 kubelet[1745]: I0209 19:28:25.956372 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/228b7947-547a-41c1-a9c7-774fb4433cc4-etc-cni-netd\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956849 kubelet[1745]: I0209 19:28:25.956423 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/228b7947-547a-41c1-a9c7-774fb4433cc4-clustermesh-secrets\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:25.956849 kubelet[1745]: I0209 19:28:25.956465 1745 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/228b7947-547a-41c1-a9c7-774fb4433cc4-cilium-config-path\") pod \"cilium-tlvwg\" (UID: \"228b7947-547a-41c1-a9c7-774fb4433cc4\") " pod="kube-system/cilium-tlvwg"
Feb 9 19:28:26.092698 env[1263]: time="2024-02-09T19:28:26.092654948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tlvwg,Uid:228b7947-547a-41c1-a9c7-774fb4433cc4,Namespace:kube-system,Attempt:0,}"
Feb 9 19:28:26.120308 env[1263]: time="2024-02-09T19:28:26.120205342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:28:26.120535 env[1263]: time="2024-02-09T19:28:26.120294342Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:28:26.120653 env[1263]: time="2024-02-09T19:28:26.120530543Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:28:26.120940 env[1263]: time="2024-02-09T19:28:26.120893745Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a pid=3331 runtime=io.containerd.runc.v2
Feb 9 19:28:26.133469 systemd[1]: Started cri-containerd-c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a.scope.
Feb 9 19:28:26.157337 env[1263]: time="2024-02-09T19:28:26.157297669Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-tlvwg,Uid:228b7947-547a-41c1-a9c7-774fb4433cc4,Namespace:kube-system,Attempt:0,} returns sandbox id \"c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a\""
Feb 9 19:28:26.160035 env[1263]: time="2024-02-09T19:28:26.159995478Z" level=info msg="CreateContainer within sandbox \"c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:28:26.188284 env[1263]: time="2024-02-09T19:28:26.188256475Z" level=info msg="CreateContainer within sandbox \"c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"30dbb0cfb90ea2500a7b0feb7b0542d2ad9dabd1990d4183944feb51836e7dd3\""
Feb 9 19:28:26.188828 env[1263]: time="2024-02-09T19:28:26.188777476Z" level=info msg="StartContainer for \"30dbb0cfb90ea2500a7b0feb7b0542d2ad9dabd1990d4183944feb51836e7dd3\""
Feb 9 19:28:26.203364 systemd[1]: Started cri-containerd-30dbb0cfb90ea2500a7b0feb7b0542d2ad9dabd1990d4183944feb51836e7dd3.scope.
Feb 9 19:28:26.241882 env[1263]: time="2024-02-09T19:28:26.241824057Z" level=info msg="StartContainer for \"30dbb0cfb90ea2500a7b0feb7b0542d2ad9dabd1990d4183944feb51836e7dd3\" returns successfully"
Feb 9 19:28:26.244844 systemd[1]: cri-containerd-30dbb0cfb90ea2500a7b0feb7b0542d2ad9dabd1990d4183944feb51836e7dd3.scope: Deactivated successfully.
Feb 9 19:28:26.334537 kubelet[1745]: I0209 19:28:26.334502 1745 setters.go:552] "Node became not ready" node="10.200.8.10" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-09T19:28:26Z","lastTransitionTime":"2024-02-09T19:28:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"}
Feb 9 19:28:26.745286 kubelet[1745]: E0209 19:28:26.423389 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:26.758407 env[1263]: time="2024-02-09T19:28:26.758350821Z" level=info msg="shim disconnected" id=30dbb0cfb90ea2500a7b0feb7b0542d2ad9dabd1990d4183944feb51836e7dd3
Feb 9 19:28:26.758407 env[1263]: time="2024-02-09T19:28:26.758405421Z" level=warning msg="cleaning up after shim disconnected" id=30dbb0cfb90ea2500a7b0feb7b0542d2ad9dabd1990d4183944feb51836e7dd3 namespace=k8s.io
Feb 9 19:28:26.758848 env[1263]: time="2024-02-09T19:28:26.758416621Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:26.766203 env[1263]: time="2024-02-09T19:28:26.766169448Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:26Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3416 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:27.424360 kubelet[1745]: E0209 19:28:27.424297 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:27.521958 kubelet[1745]: I0209 19:28:27.521905 1745 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="015218a3-e107-4079-85a9-93c655ff544f" path="/var/lib/kubelet/pods/015218a3-e107-4079-85a9-93c655ff544f/volumes"
Feb 9 19:28:27.702759 env[1263]: time="2024-02-09T19:28:27.702629735Z" level=info msg="CreateContainer within sandbox \"c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 19:28:27.732111 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788868402.mount: Deactivated successfully.
Feb 9 19:28:27.746745 env[1263]: time="2024-02-09T19:28:27.746704484Z" level=info msg="CreateContainer within sandbox \"c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"0a675cf1771b128b1a319e632bfdb915f03b85ebb3430f8dfa7a775c765ed8f8\""
Feb 9 19:28:27.747213 env[1263]: time="2024-02-09T19:28:27.747178486Z" level=info msg="StartContainer for \"0a675cf1771b128b1a319e632bfdb915f03b85ebb3430f8dfa7a775c765ed8f8\""
Feb 9 19:28:27.771663 systemd[1]: Started cri-containerd-0a675cf1771b128b1a319e632bfdb915f03b85ebb3430f8dfa7a775c765ed8f8.scope.
Feb 9 19:28:27.805335 systemd[1]: cri-containerd-0a675cf1771b128b1a319e632bfdb915f03b85ebb3430f8dfa7a775c765ed8f8.scope: Deactivated successfully.
Feb 9 19:28:27.806117 env[1263]: time="2024-02-09T19:28:27.806026986Z" level=info msg="StartContainer for \"0a675cf1771b128b1a319e632bfdb915f03b85ebb3430f8dfa7a775c765ed8f8\" returns successfully"
Feb 9 19:28:27.836783 env[1263]: time="2024-02-09T19:28:27.836724590Z" level=info msg="shim disconnected" id=0a675cf1771b128b1a319e632bfdb915f03b85ebb3430f8dfa7a775c765ed8f8
Feb 9 19:28:27.836783 env[1263]: time="2024-02-09T19:28:27.836781290Z" level=warning msg="cleaning up after shim disconnected" id=0a675cf1771b128b1a319e632bfdb915f03b85ebb3430f8dfa7a775c765ed8f8 namespace=k8s.io
Feb 9 19:28:27.837068 env[1263]: time="2024-02-09T19:28:27.836795291Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:27.844603 env[1263]: time="2024-02-09T19:28:27.844565417Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:27Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3478 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:27.860563 systemd[1]: run-containerd-runc-k8s.io-0a675cf1771b128b1a319e632bfdb915f03b85ebb3430f8dfa7a775c765ed8f8-runc.0wZDhH.mount: Deactivated successfully.
Feb 9 19:28:27.860676 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0a675cf1771b128b1a319e632bfdb915f03b85ebb3430f8dfa7a775c765ed8f8-rootfs.mount: Deactivated successfully.
Feb 9 19:28:28.425318 kubelet[1745]: E0209 19:28:28.425256 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:28.463272 kubelet[1745]: E0209 19:28:28.463244 1745 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 9 19:28:28.706377 env[1263]: time="2024-02-09T19:28:28.706258735Z" level=info msg="CreateContainer within sandbox \"c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:28:28.738930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount779835085.mount: Deactivated successfully.
Feb 9 19:28:28.770611 env[1263]: time="2024-02-09T19:28:28.770561253Z" level=info msg="CreateContainer within sandbox \"c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"349fbeaec5aa535f7c05409fef5f1f1459ac4cd6ab83182003295232bd38b313\""
Feb 9 19:28:28.771054 env[1263]: time="2024-02-09T19:28:28.771022654Z" level=info msg="StartContainer for \"349fbeaec5aa535f7c05409fef5f1f1459ac4cd6ab83182003295232bd38b313\""
Feb 9 19:28:28.791302 systemd[1]: Started cri-containerd-349fbeaec5aa535f7c05409fef5f1f1459ac4cd6ab83182003295232bd38b313.scope.
Feb 9 19:28:28.825629 systemd[1]: cri-containerd-349fbeaec5aa535f7c05409fef5f1f1459ac4cd6ab83182003295232bd38b313.scope: Deactivated successfully.
Feb 9 19:28:28.830089 env[1263]: time="2024-02-09T19:28:28.830052654Z" level=info msg="StartContainer for \"349fbeaec5aa535f7c05409fef5f1f1459ac4cd6ab83182003295232bd38b313\" returns successfully"
Feb 9 19:28:28.860628 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-349fbeaec5aa535f7c05409fef5f1f1459ac4cd6ab83182003295232bd38b313-rootfs.mount: Deactivated successfully.
Feb 9 19:28:28.861520 env[1263]: time="2024-02-09T19:28:28.861345060Z" level=info msg="shim disconnected" id=349fbeaec5aa535f7c05409fef5f1f1459ac4cd6ab83182003295232bd38b313
Feb 9 19:28:28.861520 env[1263]: time="2024-02-09T19:28:28.861400960Z" level=warning msg="cleaning up after shim disconnected" id=349fbeaec5aa535f7c05409fef5f1f1459ac4cd6ab83182003295232bd38b313 namespace=k8s.io
Feb 9 19:28:28.861520 env[1263]: time="2024-02-09T19:28:28.861412160Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:28.868748 env[1263]: time="2024-02-09T19:28:28.868714485Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3536 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:29.425660 kubelet[1745]: E0209 19:28:29.425557 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:29.711348 env[1263]: time="2024-02-09T19:28:29.711195225Z" level=info msg="CreateContainer within sandbox \"c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 9 19:28:29.740987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount134343661.mount: Deactivated successfully.
Feb 9 19:28:29.756811 env[1263]: time="2024-02-09T19:28:29.756756479Z" level=info msg="CreateContainer within sandbox \"c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"7b693f8caa0d4193d120c27cc533b446a35590185f14bf3dd709a28c85278dc9\""
Feb 9 19:28:29.757416 env[1263]: time="2024-02-09T19:28:29.757325481Z" level=info msg="StartContainer for \"7b693f8caa0d4193d120c27cc533b446a35590185f14bf3dd709a28c85278dc9\""
Feb 9 19:28:29.776601 systemd[1]: Started cri-containerd-7b693f8caa0d4193d120c27cc533b446a35590185f14bf3dd709a28c85278dc9.scope.
Feb 9 19:28:29.801862 systemd[1]: cri-containerd-7b693f8caa0d4193d120c27cc533b446a35590185f14bf3dd709a28c85278dc9.scope: Deactivated successfully.
Feb 9 19:28:29.804071 env[1263]: time="2024-02-09T19:28:29.803788737Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod228b7947_547a_41c1_a9c7_774fb4433cc4.slice/cri-containerd-7b693f8caa0d4193d120c27cc533b446a35590185f14bf3dd709a28c85278dc9.scope/memory.events\": no such file or directory"
Feb 9 19:28:29.808408 env[1263]: time="2024-02-09T19:28:29.808368753Z" level=info msg="StartContainer for \"7b693f8caa0d4193d120c27cc533b446a35590185f14bf3dd709a28c85278dc9\" returns successfully"
Feb 9 19:28:29.835156 env[1263]: time="2024-02-09T19:28:29.835109143Z" level=info msg="shim disconnected" id=7b693f8caa0d4193d120c27cc533b446a35590185f14bf3dd709a28c85278dc9
Feb 9 19:28:29.835156 env[1263]: time="2024-02-09T19:28:29.835154743Z" level=warning msg="cleaning up after shim disconnected" id=7b693f8caa0d4193d120c27cc533b446a35590185f14bf3dd709a28c85278dc9 namespace=k8s.io
Feb 9 19:28:29.835662 env[1263]: time="2024-02-09T19:28:29.835165843Z" level=info msg="cleaning up dead shim"
Feb 9 19:28:29.842657 env[1263]: time="2024-02-09T19:28:29.842619968Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:28:29Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3590 runtime=io.containerd.runc.v2\n"
Feb 9 19:28:29.860666 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b693f8caa0d4193d120c27cc533b446a35590185f14bf3dd709a28c85278dc9-rootfs.mount: Deactivated successfully.
Feb 9 19:28:30.426323 kubelet[1745]: E0209 19:28:30.426263 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:30.716088 env[1263]: time="2024-02-09T19:28:30.715961501Z" level=info msg="CreateContainer within sandbox \"c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 9 19:28:30.742826 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3645605680.mount: Deactivated successfully.
Feb 9 19:28:30.757428 env[1263]: time="2024-02-09T19:28:30.757382339Z" level=info msg="CreateContainer within sandbox \"c08042dacffbdfa2e80289aa2160f6e03f3c80c542f70b170e87e2cd47c03c8a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"28a67f74c35d6c426fdea494c664ccbccd13cb82e508982b1dc69799090b270a\""
Feb 9 19:28:30.758160 env[1263]: time="2024-02-09T19:28:30.758123442Z" level=info msg="StartContainer for \"28a67f74c35d6c426fdea494c664ccbccd13cb82e508982b1dc69799090b270a\""
Feb 9 19:28:30.777342 systemd[1]: Started cri-containerd-28a67f74c35d6c426fdea494c664ccbccd13cb82e508982b1dc69799090b270a.scope.
Feb 9 19:28:30.814703 env[1263]: time="2024-02-09T19:28:30.814645932Z" level=info msg="StartContainer for \"28a67f74c35d6c426fdea494c664ccbccd13cb82e508982b1dc69799090b270a\" returns successfully"
Feb 9 19:28:31.164264 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni))
Feb 9 19:28:31.427183 kubelet[1745]: E0209 19:28:31.427043 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:31.747501 kubelet[1745]: I0209 19:28:31.747107 1745 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-tlvwg" podStartSLOduration=6.747067949 podCreationTimestamp="2024-02-09 19:28:25 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:28:31.746322347 +0000 UTC m=+79.001318653" watchObservedRunningTime="2024-02-09 19:28:31.747067949 +0000 UTC m=+79.002064155"
Feb 9 19:28:32.427565 kubelet[1745]: E0209 19:28:32.427528 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:33.374319 kubelet[1745]: E0209 19:28:33.374277 1745 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:33.401325 systemd[1]: run-containerd-runc-k8s.io-28a67f74c35d6c426fdea494c664ccbccd13cb82e508982b1dc69799090b270a-runc.AfOgXH.mount: Deactivated successfully.
Feb 9 19:28:33.428126 kubelet[1745]: E0209 19:28:33.428078 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:33.640330 systemd-networkd[1377]: lxc_health: Link UP
Feb 9 19:28:33.661521 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 9 19:28:33.663792 systemd-networkd[1377]: lxc_health: Gained carrier
Feb 9 19:28:34.428665 kubelet[1745]: E0209 19:28:34.428612 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:35.342374 systemd-networkd[1377]: lxc_health: Gained IPv6LL
Feb 9 19:28:35.429383 kubelet[1745]: E0209 19:28:35.429336 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:36.430372 kubelet[1745]: E0209 19:28:36.430330 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:37.431926 kubelet[1745]: E0209 19:28:37.431857 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:38.433257 kubelet[1745]: E0209 19:28:38.433197 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:39.434259 kubelet[1745]: E0209 19:28:39.434161 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:40.435242 kubelet[1745]: E0209 19:28:40.435163 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:41.435769 kubelet[1745]: E0209 19:28:41.435709 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:42.436581 kubelet[1745]: E0209 19:28:42.436517 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:43.437134 kubelet[1745]: E0209 19:28:43.437093 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:44.437483 kubelet[1745]: E0209 19:28:44.437420 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:45.438023 kubelet[1745]: E0209 19:28:45.437961 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:46.438633 kubelet[1745]: E0209 19:28:46.438569 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:47.439055 kubelet[1745]: E0209 19:28:47.438995 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:48.439521 kubelet[1745]: E0209 19:28:48.439485 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:49.439884 kubelet[1745]: E0209 19:28:49.439823 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:50.441035 kubelet[1745]: E0209 19:28:50.440976 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:51.441766 kubelet[1745]: E0209 19:28:51.441702 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:52.442220 kubelet[1745]: E0209 19:28:52.442162 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:53.374161 kubelet[1745]: E0209 19:28:53.374102 1745 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:53.442613 kubelet[1745]: E0209 19:28:53.442554 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:54.443651 kubelet[1745]: E0209 19:28:54.443587 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:55.444059 kubelet[1745]: E0209 19:28:55.444004 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:56.445134 kubelet[1745]: E0209 19:28:56.445075 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:57.445363 kubelet[1745]: E0209 19:28:57.445302 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:58.446392 kubelet[1745]: E0209 19:28:58.446332 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:28:59.447152 kubelet[1745]: E0209 19:28:59.447089 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:00.447733 kubelet[1745]: E0209 19:29:00.447633 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:01.448326 kubelet[1745]: E0209 19:29:01.448264 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:01.753110 systemd[1]: cri-containerd-5f3cc97767830cf0cbb36a743a4d1d4bcf4d5024dc3614aef91d03321aac0f32.scope: Deactivated successfully.
Feb 9 19:29:01.772736 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f3cc97767830cf0cbb36a743a4d1d4bcf4d5024dc3614aef91d03321aac0f32-rootfs.mount: Deactivated successfully.
Feb 9 19:29:01.798536 env[1263]: time="2024-02-09T19:29:01.798486466Z" level=info msg="shim disconnected" id=5f3cc97767830cf0cbb36a743a4d1d4bcf4d5024dc3614aef91d03321aac0f32
Feb 9 19:29:01.798536 env[1263]: time="2024-02-09T19:29:01.798534867Z" level=warning msg="cleaning up after shim disconnected" id=5f3cc97767830cf0cbb36a743a4d1d4bcf4d5024dc3614aef91d03321aac0f32 namespace=k8s.io
Feb 9 19:29:01.799035 env[1263]: time="2024-02-09T19:29:01.798545767Z" level=info msg="cleaning up dead shim"
Feb 9 19:29:01.806454 env[1263]: time="2024-02-09T19:29:01.806421304Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:29:01Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4277 runtime=io.containerd.runc.v2\n"
Feb 9 19:29:02.449249 kubelet[1745]: E0209 19:29:02.449181 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:02.792806 kubelet[1745]: I0209 19:29:02.792450 1745 scope.go:117] "RemoveContainer" containerID="5f3cc97767830cf0cbb36a743a4d1d4bcf4d5024dc3614aef91d03321aac0f32"
Feb 9 19:29:02.794433 env[1263]: time="2024-02-09T19:29:02.794390482Z" level=info msg="CreateContainer within sandbox \"8925a214c9ee3829569c6735e230ea44b2b2252764f5c5623cf699c2ee105363\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}"
Feb 9 19:29:02.821370 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1740835828.mount: Deactivated successfully.
Feb 9 19:29:02.828515 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4037329795.mount: Deactivated successfully.
Feb 9 19:29:02.836717 env[1263]: time="2024-02-09T19:29:02.836675911Z" level=info msg="CreateContainer within sandbox \"8925a214c9ee3829569c6735e230ea44b2b2252764f5c5623cf699c2ee105363\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"b9942b4513d51a6cf6079afb303315660cc867a5adfe7e63021d2b7e8aa4ce4f\""
Feb 9 19:29:02.837213 env[1263]: time="2024-02-09T19:29:02.837181820Z" level=info msg="StartContainer for \"b9942b4513d51a6cf6079afb303315660cc867a5adfe7e63021d2b7e8aa4ce4f\""
Feb 9 19:29:02.856980 systemd[1]: Started cri-containerd-b9942b4513d51a6cf6079afb303315660cc867a5adfe7e63021d2b7e8aa4ce4f.scope.
Feb 9 19:29:02.887155 env[1263]: time="2024-02-09T19:29:02.887054480Z" level=info msg="StartContainer for \"b9942b4513d51a6cf6079afb303315660cc867a5adfe7e63021d2b7e8aa4ce4f\" returns successfully"
Feb 9 19:29:03.449862 kubelet[1745]: E0209 19:29:03.449801 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:04.450895 kubelet[1745]: E0209 19:29:04.450836 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:05.452111 kubelet[1745]: E0209 19:29:05.452012 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:06.452617 kubelet[1745]: E0209 19:29:06.452555 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:06.886935 kubelet[1745]: E0209 19:29:06.886849 1745 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.10?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 9 19:29:07.231322 kubelet[1745]: E0209 19:29:07.231099 1745 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.10\": Get \"https://10.200.8.14:6443/api/v1/nodes/10.200.8.10?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 9 19:29:07.452981 kubelet[1745]: E0209 19:29:07.452918 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:08.453571 kubelet[1745]: E0209 19:29:08.453483 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:09.454444 kubelet[1745]: E0209 19:29:09.454325 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:10.455136 kubelet[1745]: E0209 19:29:10.455081 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:11.455798 kubelet[1745]: E0209 19:29:11.455741 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:12.456590 kubelet[1745]: E0209 19:29:12.456530 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:13.374315 kubelet[1745]: E0209 19:29:13.374259 1745 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:13.387310 env[1263]: time="2024-02-09T19:29:13.387263740Z" level=info msg="StopPodSandbox for \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\""
Feb 9 19:29:13.387773 env[1263]: time="2024-02-09T19:29:13.387375942Z" level=info msg="TearDown network for sandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" successfully"
Feb 9 19:29:13.387773 env[1263]: time="2024-02-09T19:29:13.387430642Z" level=info msg="StopPodSandbox for \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" returns successfully"
Feb 9 19:29:13.388105 env[1263]: time="2024-02-09T19:29:13.388070552Z" level=info msg="RemovePodSandbox for \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\""
Feb 9 19:29:13.388218 env[1263]: time="2024-02-09T19:29:13.388107553Z" level=info msg="Forcibly stopping sandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\""
Feb 9 19:29:13.388295 env[1263]: time="2024-02-09T19:29:13.388212554Z" level=info msg="TearDown network for sandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" successfully"
Feb 9 19:29:13.400393 env[1263]: time="2024-02-09T19:29:13.400351441Z" level=info msg="RemovePodSandbox \"2f7724802ae4c0a5d54a999cb360c49e1f9462f9edb52980a1bd8bec53147995\" returns successfully"
Feb 9 19:29:13.457120 kubelet[1745]: E0209 19:29:13.457077 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:14.458098 kubelet[1745]: E0209 19:29:14.458034 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:15.349270 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.363014 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.376408 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.389342 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.403263 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.415539 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.431672 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.431834 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.442840 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.442980 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.443087 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.455036 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.460695 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.460841 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.460971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.461084 kubelet[1745]: E0209 19:29:15.458149 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:29:15.477433 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.483036 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.483154 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.488883 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.499872 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.516151 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.516457 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.516645 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.516775 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.516901 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.527246 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.532900 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.538457 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.538674 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.548656 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.548927 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.559293 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.559518 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.569896 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.570396 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.581249 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.581495 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.592032 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.597472 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.603033 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.613589 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.613781 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.619206 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:29:15.626403 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb:
tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.626537 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.635063 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.645461 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.645616 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.661191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.661367 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.661484 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.671730 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.676799 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.676915 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.682191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.682433 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.692864 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.703234 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.703383 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.713761 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.718994 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.729135 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.729306 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.729442 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.729572 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.744901 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.749945 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.760133 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.765074 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.775054 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:29:15.780098 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.780276 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.780390 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.787344 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.787485 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.787619 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.796242 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.806213 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.806390 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.816976 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.827058 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.832187 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.842264 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.842440 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.842571 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.842735 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.848136 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.853371 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.858447 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.863812 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.874543 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.880063 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.890334 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.890457 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.895879 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.896015 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.896144 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.911452 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.932211 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.937790 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.942899 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.948301 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.948447 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.948581 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.948708 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.948834 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.948966 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.959293 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.959480 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.969258 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.989292 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.989439 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.999687 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.999854 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:15.999987 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.000119 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.000259 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.010621 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.015833 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.026166 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.036247 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.041449 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.041579 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.051601 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.051831 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.051971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.052104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.062275 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.077515 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.092542 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.102884 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.103027 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.103159 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.103305 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.103436 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.103568 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.103690 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.118369 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.118570 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.118714 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.128564 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.128760 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.138711 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.143910 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.149176 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.159790 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.160078 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.160233 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.170150 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.175296 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.185667 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.190834 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.206182 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.211394 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.211533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.211665 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.211801 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.211932 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.226592 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.231816 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.252555 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.257918 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.263440 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.263577 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.263710 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.263836 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.263963 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.264090 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.274433 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.285342 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.290445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.306143 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.306355 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.306491 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.306629 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:29:16.306759 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.316612 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.321765 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.321891 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.337627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.343166 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.348362 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.348511 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.353612 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.358695 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.363732 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.369072 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.374374 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.374631 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.384989 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.390649 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.401071 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.401262 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.406531 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.411596 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.411857 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.422090 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.422324 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.438410 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.443626 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.454245 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.459460 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.464767 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.470066 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.475331 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.480547 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.480676 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.480807 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.480936 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.482020 kubelet[1745]: E0209 19:29:16.458351 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:16.497056 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.497343 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.497480 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.507085 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.517603 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.533073 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.533280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.533417 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.533547 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.533681 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.549073 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.554295 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.564473 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.564625 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.575182 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.580562 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.585990 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.586153 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.586301 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.586432 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.596231 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.606527 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.606729 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.606864 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.617297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.617520 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.623112 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.633677 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.639097 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.639248 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.655415 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.665697 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.670828 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.691766 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.692006 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.692146 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.692291 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.692419 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.692545 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.692672 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.708019 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.708279 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.723737 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.723911 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.724032 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.735309 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.735492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.740668 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.745850 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.745990 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.761474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.766624 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.777044 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.787205 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.792387 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.792503 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.792610 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.797901 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.798121 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.798272 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.813664 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.818863 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.823792 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.839615 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.844914 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.845035 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.850857 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.851008 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.851133 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.851277 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.866489 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.886954 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.892133 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.907745 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.908101 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.908269 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.908397 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.908527 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.908658 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.908799 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.908924 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.909051 kubelet[1745]: E0209 19:29:16.888519 1745 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.14:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.10?timeout=10s\": context deadline exceeded" Feb 9 19:29:16.918590 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.938404 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.963268 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.963439 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.963585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.963722 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.963851 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.963978 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.964106 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.964241 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.964370 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:16.978580 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.010303 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:29:17.017076 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.017233 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.017377 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.017504 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.017630 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.017758 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.017888 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.018011 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.035627 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.063362 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.073893 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.074041 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.074189 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.074360 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.074493 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.074625 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.074749 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.074871 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.089864 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.115477 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.115723 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.115854 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.115986 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.116113 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.116276 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.116412 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.125531 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.130973 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.131346 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.141030 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.151609 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.151754 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.156865 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.177230 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.177390 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.177521 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.177646 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.177777 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.192656 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.197728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.197856 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.203058 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.203263 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.213215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.213461 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.223129 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.228514 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.228688 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.231635 kubelet[1745]: E0209 19:29:17.231493 1745 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.10\": Get \"https://10.200.8.14:6443/api/v1/nodes/10.200.8.10?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 9 19:29:17.238876 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.258934 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.269213 
kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.285180 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.285345 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.285474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.285603 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.285730 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.285874 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.286000 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.286127 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.300361 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.335785 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.335967 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.336105 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.336263 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.336400 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.336533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.336659 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.336793 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.336919 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.346100 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.356320 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.356473 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.371705 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.387140 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.387322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.387460 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.387592 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.387723 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.387860 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.392347 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.408952 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.419391 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.424501 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.429738 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.434860 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.445365 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.450671 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.450810 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.455875 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.461252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.466476 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.466597 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.466734 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.466865 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.466997 kubelet[1745]: E0209 19:29:17.459021 1745 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:29:17.482703 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.488213 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.488356 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.493486 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.509078 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.524750 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.530067 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.540515 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.540660 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.540794 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.540924 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.541049 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.541157 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.541301 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.550762 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.550984 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.560912 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.581370 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.591876 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.592034 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.592165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.592326 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.592453 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.592601 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.607798 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.618409 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.628582 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.628773 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.639262 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.639390 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.646316 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.646470 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.646598 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.646725 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.660210 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.665250 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.665417 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.665559 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.670276 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.680213 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.685430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.685571 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.695695 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.701058 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.701186 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.711964 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.717096 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.727570 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.727759 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.736261 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.746413 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.746536 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.751826 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.751973 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.767744 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.783320 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.788730 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.804347 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.804492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.804623 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:29:17.804755 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.804889 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.805019 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.805151 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.814819 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.825026 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.830367 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.840877 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.841068 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.841200 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.841339 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.851002 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.856446 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.856584 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.867129 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.877441 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.882552 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.882679 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.893249 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.893430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.898811 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.898947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.909262 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.914827 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.914968 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.930855 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.931053 kernel: 
hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.946468 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.946614 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.962041 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.962256 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.962381 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.962492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.973297 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.973636 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.984484 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:17.989750 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.010092 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.020538 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.025972 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.026127 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.026289 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.026416 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.026543 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.026677 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.026806 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.041795 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.047200 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.057284 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.057431 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.067914 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.078097 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.078299 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.078434 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.078561 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.078694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.088547 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.098939 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.099081 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.114366 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.114513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.114625 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.119978 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.125415 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.125719 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.135621 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.135935 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.146854 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.152253 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.157635 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.162839 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.168029 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.173660 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.184003 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.184127 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.191279 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.191430 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.206158 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.221930 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: 
tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.222104 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.222247 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.222376 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.222506 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.237963 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.243682 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.243825 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.243957 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.254478 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.275701 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.286637 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.298174 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.304191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.304382 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.304504 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.304629 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.304764 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.304895 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.305032 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.315300 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.325987 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.331357 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.331477 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.336567 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.341736 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.352195 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 
0xc0000001 Feb 9 19:29:18.357825 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.358106 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.358290 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.369173 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.374688 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.380163 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.390870 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.391292 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:29:18.391438 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#231 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001