Feb 9 19:04:20.036374 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Fri Feb 9 17:23:38 -00 2024 Feb 9 19:04:20.036413 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:04:20.036430 kernel: BIOS-provided physical RAM map: Feb 9 19:04:20.036440 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Feb 9 19:04:20.036452 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Feb 9 19:04:20.036463 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Feb 9 19:04:20.036479 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved Feb 9 19:04:20.036492 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Feb 9 19:04:20.036503 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Feb 9 19:04:20.036514 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Feb 9 19:04:20.036525 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Feb 9 19:04:20.036536 kernel: printk: bootconsole [earlyser0] enabled Feb 9 19:04:20.036547 kernel: NX (Execute Disable) protection: active Feb 9 19:04:20.036559 kernel: efi: EFI v2.70 by Microsoft Feb 9 19:04:20.036577 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c9a98 RNG=0x3ffd1018 Feb 9 19:04:20.036590 kernel: random: crng init done Feb 9 19:04:20.036601 kernel: SMBIOS 3.1.0 present. 
Feb 9 19:04:20.036611 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023 Feb 9 19:04:20.036621 kernel: Hypervisor detected: Microsoft Hyper-V Feb 9 19:04:20.036631 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Feb 9 19:04:20.036641 kernel: Hyper-V Host Build:20348-10.0-1-0.1544 Feb 9 19:04:20.036650 kernel: Hyper-V: Nested features: 0x1e0101 Feb 9 19:04:20.036664 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Feb 9 19:04:20.036675 kernel: Hyper-V: Using hypercall for remote TLB flush Feb 9 19:04:20.036686 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Feb 9 19:04:20.036698 kernel: tsc: Marking TSC unstable due to running on Hyper-V Feb 9 19:04:20.036709 kernel: tsc: Detected 2593.905 MHz processor Feb 9 19:04:20.036721 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Feb 9 19:04:20.036733 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Feb 9 19:04:20.036745 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Feb 9 19:04:20.036757 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Feb 9 19:04:20.036769 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Feb 9 19:04:20.036784 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Feb 9 19:04:20.036796 kernel: Using GB pages for direct mapping Feb 9 19:04:20.036808 kernel: Secure boot disabled Feb 9 19:04:20.036820 kernel: ACPI: Early table checksum verification disabled Feb 9 19:04:20.036832 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Feb 9 19:04:20.036844 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:04:20.036856 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:04:20.036868 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) 
Feb 9 19:04:20.036887 kernel: ACPI: FACS 0x000000003FFFE000 000040 Feb 9 19:04:20.036901 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:04:20.036913 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:04:20.036925 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:04:20.036937 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:04:20.036950 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:04:20.036966 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:04:20.036980 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Feb 9 19:04:20.036992 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Feb 9 19:04:20.037005 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Feb 9 19:04:20.037019 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Feb 9 19:04:20.037031 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Feb 9 19:04:20.037044 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Feb 9 19:04:20.037057 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Feb 9 19:04:20.037073 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Feb 9 19:04:20.037086 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Feb 9 19:04:20.037099 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Feb 9 19:04:20.037112 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Feb 9 19:04:20.037124 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Feb 9 19:04:20.037138 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Feb 9 19:04:20.037151 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 
0x00000000-0x3fffffff] hotplug Feb 9 19:04:20.037238 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Feb 9 19:04:20.037253 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Feb 9 19:04:20.037270 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Feb 9 19:04:20.037283 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Feb 9 19:04:20.037296 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Feb 9 19:04:20.037309 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Feb 9 19:04:20.037322 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Feb 9 19:04:20.037335 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Feb 9 19:04:20.037348 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Feb 9 19:04:20.037361 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Feb 9 19:04:20.037374 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Feb 9 19:04:20.037389 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Feb 9 19:04:20.037401 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Feb 9 19:04:20.037414 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Feb 9 19:04:20.037426 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Feb 9 19:04:20.037440 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Feb 9 19:04:20.037453 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Feb 9 19:04:20.037467 kernel: Zone ranges: Feb 9 19:04:20.037479 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Feb 9 19:04:20.037492 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Feb 9 19:04:20.037508 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] 
Feb 9 19:04:20.037521 kernel: Movable zone start for each node Feb 9 19:04:20.037534 kernel: Early memory node ranges Feb 9 19:04:20.037547 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Feb 9 19:04:20.037560 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Feb 9 19:04:20.037572 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Feb 9 19:04:20.037585 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Feb 9 19:04:20.037598 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Feb 9 19:04:20.037611 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Feb 9 19:04:20.037626 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Feb 9 19:04:20.037639 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Feb 9 19:04:20.037653 kernel: ACPI: PM-Timer IO Port: 0x408 Feb 9 19:04:20.037666 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Feb 9 19:04:20.037678 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Feb 9 19:04:20.037691 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Feb 9 19:04:20.037704 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Feb 9 19:04:20.037717 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Feb 9 19:04:20.037730 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Feb 9 19:04:20.037745 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Feb 9 19:04:20.037758 kernel: Booting paravirtualized kernel on Hyper-V Feb 9 19:04:20.037771 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Feb 9 19:04:20.037784 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1 Feb 9 19:04:20.037798 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576 Feb 9 19:04:20.037811 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152 Feb 9 19:04:20.037823 kernel: pcpu-alloc: [0] 0 1 Feb 9 19:04:20.037835 kernel: 
Hyper-V: PV spinlocks enabled Feb 9 19:04:20.037848 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Feb 9 19:04:20.037864 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Feb 9 19:04:20.037877 kernel: Policy zone: Normal Feb 9 19:04:20.037892 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6 Feb 9 19:04:20.037906 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Feb 9 19:04:20.037919 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Feb 9 19:04:20.037932 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Feb 9 19:04:20.037945 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Feb 9 19:04:20.037959 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved) Feb 9 19:04:20.037974 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Feb 9 19:04:20.037988 kernel: ftrace: allocating 34475 entries in 135 pages Feb 9 19:04:20.038010 kernel: ftrace: allocated 135 pages with 4 groups Feb 9 19:04:20.038027 kernel: rcu: Hierarchical RCU implementation. Feb 9 19:04:20.038041 kernel: rcu: RCU event tracing is enabled. Feb 9 19:04:20.038055 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Feb 9 19:04:20.038068 kernel: Rude variant of Tasks RCU enabled. Feb 9 19:04:20.038083 kernel: Tracing variant of Tasks RCU enabled. 
Feb 9 19:04:20.038097 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Feb 9 19:04:20.038110 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Feb 9 19:04:20.038124 kernel: Using NULL legacy PIC Feb 9 19:04:20.038141 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Feb 9 19:04:20.038155 kernel: Console: colour dummy device 80x25 Feb 9 19:04:20.038177 kernel: printk: console [tty1] enabled Feb 9 19:04:20.038189 kernel: printk: console [ttyS0] enabled Feb 9 19:04:20.038200 kernel: printk: bootconsole [earlyser0] disabled Feb 9 19:04:20.038215 kernel: ACPI: Core revision 20210730 Feb 9 19:04:20.038228 kernel: Failed to register legacy timer interrupt Feb 9 19:04:20.038242 kernel: APIC: Switch to symmetric I/O mode setup Feb 9 19:04:20.038256 kernel: Hyper-V: Using IPI hypercalls Feb 9 19:04:20.038270 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Feb 9 19:04:20.038283 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Feb 9 19:04:20.038297 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Feb 9 19:04:20.038310 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Feb 9 19:04:20.038324 kernel: Spectre V2 : Mitigation: Retpolines Feb 9 19:04:20.038337 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Feb 9 19:04:20.038353 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Feb 9 19:04:20.038367 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! 
Feb 9 19:04:20.038380 kernel: RETBleed: Vulnerable Feb 9 19:04:20.038394 kernel: Speculative Store Bypass: Vulnerable Feb 9 19:04:20.038407 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 19:04:20.038420 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Feb 9 19:04:20.038434 kernel: GDS: Unknown: Dependent on hypervisor status Feb 9 19:04:20.038447 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Feb 9 19:04:20.038461 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Feb 9 19:04:20.038475 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Feb 9 19:04:20.038491 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Feb 9 19:04:20.038505 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Feb 9 19:04:20.038518 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Feb 9 19:04:20.038532 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Feb 9 19:04:20.038545 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Feb 9 19:04:20.038558 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Feb 9 19:04:20.038572 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Feb 9 19:04:20.038586 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Feb 9 19:04:20.038599 kernel: Freeing SMP alternatives memory: 32K Feb 9 19:04:20.038612 kernel: pid_max: default: 32768 minimum: 301 Feb 9 19:04:20.038625 kernel: LSM: Security Framework initializing Feb 9 19:04:20.038639 kernel: SELinux: Initializing. 
Feb 9 19:04:20.038655 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:04:20.038668 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Feb 9 19:04:20.038682 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Feb 9 19:04:20.038695 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Feb 9 19:04:20.038709 kernel: signal: max sigframe size: 3632 Feb 9 19:04:20.038723 kernel: rcu: Hierarchical SRCU implementation. Feb 9 19:04:20.038737 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Feb 9 19:04:20.038750 kernel: smp: Bringing up secondary CPUs ... Feb 9 19:04:20.038764 kernel: x86: Booting SMP configuration: Feb 9 19:04:20.038778 kernel: .... node #0, CPUs: #1 Feb 9 19:04:20.038795 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Feb 9 19:04:20.038810 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
Feb 9 19:04:20.038824 kernel: smp: Brought up 1 node, 2 CPUs Feb 9 19:04:20.038838 kernel: smpboot: Max logical packages: 1 Feb 9 19:04:20.038852 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Feb 9 19:04:20.038866 kernel: devtmpfs: initialized Feb 9 19:04:20.038880 kernel: x86/mm: Memory block size: 128MB Feb 9 19:04:20.038894 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Feb 9 19:04:20.038910 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Feb 9 19:04:20.038924 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Feb 9 19:04:20.038938 kernel: pinctrl core: initialized pinctrl subsystem Feb 9 19:04:20.038951 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Feb 9 19:04:20.038965 kernel: audit: initializing netlink subsys (disabled) Feb 9 19:04:20.038979 kernel: audit: type=2000 audit(1707505458.023:1): state=initialized audit_enabled=0 res=1 Feb 9 19:04:20.038992 kernel: thermal_sys: Registered thermal governor 'step_wise' Feb 9 19:04:20.039006 kernel: thermal_sys: Registered thermal governor 'user_space' Feb 9 19:04:20.039019 kernel: cpuidle: using governor menu Feb 9 19:04:20.039036 kernel: ACPI: bus type PCI registered Feb 9 19:04:20.039049 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Feb 9 19:04:20.039063 kernel: dca service started, version 1.12.1 Feb 9 19:04:20.039077 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Feb 9 19:04:20.039090 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages Feb 9 19:04:20.039104 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages Feb 9 19:04:20.039118 kernel: ACPI: Added _OSI(Module Device) Feb 9 19:04:20.039132 kernel: ACPI: Added _OSI(Processor Device) Feb 9 19:04:20.039145 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Feb 9 19:04:20.039187 kernel: ACPI: Added _OSI(Processor Aggregator Device) Feb 9 19:04:20.039202 kernel: ACPI: Added _OSI(Linux-Dell-Video) Feb 9 19:04:20.039215 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio) Feb 9 19:04:20.039229 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics) Feb 9 19:04:20.039243 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Feb 9 19:04:20.039257 kernel: ACPI: Interpreter enabled Feb 9 19:04:20.039270 kernel: ACPI: PM: (supports S0 S5) Feb 9 19:04:20.039284 kernel: ACPI: Using IOAPIC for interrupt routing Feb 9 19:04:20.039298 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Feb 9 19:04:20.039315 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Feb 9 19:04:20.039329 kernel: iommu: Default domain type: Translated Feb 9 19:04:20.039343 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Feb 9 19:04:20.039357 kernel: vgaarb: loaded Feb 9 19:04:20.039370 kernel: pps_core: LinuxPPS API ver. 1 registered Feb 9 19:04:20.039384 kernel: pps_core: Software ver. 
5.3.6 - Copyright 2005-2007 Rodolfo Giometti Feb 9 19:04:20.039397 kernel: PTP clock support registered Feb 9 19:04:20.039411 kernel: Registered efivars operations Feb 9 19:04:20.039424 kernel: PCI: Using ACPI for IRQ routing Feb 9 19:04:20.039438 kernel: PCI: System does not support PCI Feb 9 19:04:20.039454 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Feb 9 19:04:20.039468 kernel: VFS: Disk quotas dquot_6.6.0 Feb 9 19:04:20.039481 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Feb 9 19:04:20.039495 kernel: pnp: PnP ACPI init Feb 9 19:04:20.039509 kernel: pnp: PnP ACPI: found 3 devices Feb 9 19:04:20.039522 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Feb 9 19:04:20.039536 kernel: NET: Registered PF_INET protocol family Feb 9 19:04:20.039549 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Feb 9 19:04:20.039566 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Feb 9 19:04:20.039579 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Feb 9 19:04:20.039593 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Feb 9 19:04:20.039607 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear) Feb 9 19:04:20.039620 kernel: TCP: Hash tables configured (established 65536 bind 65536) Feb 9 19:04:20.039634 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 9 19:04:20.039648 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Feb 9 19:04:20.039661 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Feb 9 19:04:20.039674 kernel: NET: Registered PF_XDP protocol family Feb 9 19:04:20.039690 kernel: PCI: CLS 0 bytes, default 64 Feb 9 19:04:20.039704 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Feb 9 19:04:20.039718 kernel: software IO TLB: mapped [mem 
0x000000003a8ad000-0x000000003e8ad000] (64MB) Feb 9 19:04:20.039731 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Feb 9 19:04:20.039745 kernel: Initialise system trusted keyrings Feb 9 19:04:20.039759 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Feb 9 19:04:20.039772 kernel: Key type asymmetric registered Feb 9 19:04:20.039785 kernel: Asymmetric key parser 'x509' registered Feb 9 19:04:20.039799 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Feb 9 19:04:20.039814 kernel: io scheduler mq-deadline registered Feb 9 19:04:20.039828 kernel: io scheduler kyber registered Feb 9 19:04:20.039841 kernel: io scheduler bfq registered Feb 9 19:04:20.039854 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Feb 9 19:04:20.039869 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Feb 9 19:04:20.039882 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Feb 9 19:04:20.039896 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Feb 9 19:04:20.039910 kernel: i8042: PNP: No PS/2 controller found. 
Feb 9 19:04:20.040075 kernel: rtc_cmos 00:02: registered as rtc0 Feb 9 19:04:20.040204 kernel: rtc_cmos 00:02: setting system clock to 2024-02-09T19:04:19 UTC (1707505459) Feb 9 19:04:20.040286 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Feb 9 19:04:20.040296 kernel: fail to initialize ptp_kvm Feb 9 19:04:20.040304 kernel: intel_pstate: CPU model not supported Feb 9 19:04:20.040311 kernel: efifb: probing for efifb Feb 9 19:04:20.040320 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Feb 9 19:04:20.040331 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Feb 9 19:04:20.040340 kernel: efifb: scrolling: redraw Feb 9 19:04:20.040353 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Feb 9 19:04:20.040361 kernel: Console: switching to colour frame buffer device 128x48 Feb 9 19:04:20.040369 kernel: fb0: EFI VGA frame buffer device Feb 9 19:04:20.040379 kernel: pstore: Registered efi as persistent store backend Feb 9 19:04:20.040387 kernel: NET: Registered PF_INET6 protocol family Feb 9 19:04:20.040396 kernel: Segment Routing with IPv6 Feb 9 19:04:20.040404 kernel: In-situ OAM (IOAM) with IPv6 Feb 9 19:04:20.040414 kernel: NET: Registered PF_PACKET protocol family Feb 9 19:04:20.040421 kernel: Key type dns_resolver registered Feb 9 19:04:20.040434 kernel: IPI shorthand broadcast: enabled Feb 9 19:04:20.040443 kernel: sched_clock: Marking stable (755210000, 22094900)->(963894800, -186589900) Feb 9 19:04:20.040454 kernel: registered taskstats version 1 Feb 9 19:04:20.040461 kernel: Loading compiled-in X.509 certificates Feb 9 19:04:20.040468 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: 56154408a02b3bd349a9e9180c9bd837fd1d636a' Feb 9 19:04:20.040476 kernel: Key type .fscrypt registered Feb 9 19:04:20.040486 kernel: Key type fscrypt-provisioning registered Feb 9 19:04:20.040493 kernel: pstore: Using crash dump compression: deflate Feb 9 19:04:20.040505 kernel: ima: No TPM chip found, 
activating TPM-bypass! Feb 9 19:04:20.040514 kernel: ima: Allocated hash algorithm: sha1 Feb 9 19:04:20.040522 kernel: ima: No architecture policies found Feb 9 19:04:20.040532 kernel: Freeing unused kernel image (initmem) memory: 45496K Feb 9 19:04:20.040539 kernel: Write protecting the kernel read-only data: 28672k Feb 9 19:04:20.040550 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Feb 9 19:04:20.040557 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K Feb 9 19:04:20.040566 kernel: Run /init as init process Feb 9 19:04:20.040575 kernel: with arguments: Feb 9 19:04:20.040582 kernel: /init Feb 9 19:04:20.040594 kernel: with environment: Feb 9 19:04:20.040601 kernel: HOME=/ Feb 9 19:04:20.040611 kernel: TERM=linux Feb 9 19:04:20.040619 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Feb 9 19:04:20.040633 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 9 19:04:20.040644 systemd[1]: Detected virtualization microsoft. Feb 9 19:04:20.040655 systemd[1]: Detected architecture x86-64. Feb 9 19:04:20.040665 systemd[1]: Running in initrd. Feb 9 19:04:20.040675 systemd[1]: No hostname configured, using default hostname. Feb 9 19:04:20.040682 systemd[1]: Hostname set to . Feb 9 19:04:20.040694 systemd[1]: Initializing machine ID from random generator. Feb 9 19:04:20.040703 systemd[1]: Queued start job for default target initrd.target. Feb 9 19:04:20.040713 systemd[1]: Started systemd-ask-password-console.path. Feb 9 19:04:20.040724 systemd[1]: Reached target cryptsetup.target. Feb 9 19:04:20.040734 systemd[1]: Reached target paths.target. Feb 9 19:04:20.040742 systemd[1]: Reached target slices.target. 
Feb 9 19:04:20.040755 systemd[1]: Reached target swap.target. Feb 9 19:04:20.040765 systemd[1]: Reached target timers.target. Feb 9 19:04:20.040773 systemd[1]: Listening on iscsid.socket. Feb 9 19:04:20.040784 systemd[1]: Listening on iscsiuio.socket. Feb 9 19:04:20.040791 systemd[1]: Listening on systemd-journald-audit.socket. Feb 9 19:04:20.040803 systemd[1]: Listening on systemd-journald-dev-log.socket. Feb 9 19:04:20.040811 systemd[1]: Listening on systemd-journald.socket. Feb 9 19:04:20.040823 systemd[1]: Listening on systemd-networkd.socket. Feb 9 19:04:20.040831 systemd[1]: Listening on systemd-udevd-control.socket. Feb 9 19:04:20.040843 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 9 19:04:20.040850 systemd[1]: Reached target sockets.target. Feb 9 19:04:20.040861 systemd[1]: Starting kmod-static-nodes.service... Feb 9 19:04:20.040869 systemd[1]: Finished network-cleanup.service. Feb 9 19:04:20.040879 systemd[1]: Starting systemd-fsck-usr.service... Feb 9 19:04:20.040888 systemd[1]: Starting systemd-journald.service... Feb 9 19:04:20.040897 systemd[1]: Starting systemd-modules-load.service... Feb 9 19:04:20.040909 systemd[1]: Starting systemd-resolved.service... Feb 9 19:04:20.040918 systemd[1]: Starting systemd-vconsole-setup.service... Feb 9 19:04:20.040927 systemd[1]: Finished kmod-static-nodes.service. Feb 9 19:04:20.040940 systemd-journald[183]: Journal started Feb 9 19:04:20.040989 systemd-journald[183]: Runtime Journal (/run/log/journal/8d50e5311e094144977ae17da794a45e) is 8.0M, max 159.0M, 151.0M free. Feb 9 19:04:20.024543 systemd-modules-load[184]: Inserted module 'overlay' Feb 9 19:04:20.063309 kernel: audit: type=1130 audit(1707505460.047:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.063341 systemd[1]: Started systemd-journald.service. 
Feb 9 19:04:20.047000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.067000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.069214 systemd[1]: Finished systemd-fsck-usr.service. Feb 9 19:04:20.089427 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Feb 9 19:04:20.089453 kernel: audit: type=1130 audit(1707505460.067:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.089664 systemd[1]: Finished systemd-vconsole-setup.service. Feb 9 19:04:20.099961 kernel: Bridge firewalling registered Feb 9 19:04:20.095809 systemd-modules-load[184]: Inserted module 'br_netfilter' Feb 9 19:04:20.103484 systemd[1]: Starting dracut-cmdline-ask.service... Feb 9 19:04:20.108270 systemd[1]: Starting systemd-tmpfiles-setup-dev.service... Feb 9 19:04:20.127444 systemd[1]: Finished systemd-tmpfiles-setup-dev.service. Feb 9 19:04:20.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.142680 systemd[1]: Finished dracut-cmdline-ask.service. Feb 9 19:04:20.152878 kernel: audit: type=1130 audit(1707505460.089:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:04:20.148210 systemd-resolved[185]: Positive Trust Anchors: Feb 9 19:04:20.195545 kernel: audit: type=1130 audit(1707505460.099:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.195583 kernel: audit: type=1130 audit(1707505460.131:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.195599 kernel: SCSI subsystem initialized Feb 9 19:04:20.195614 kernel: audit: type=1130 audit(1707505460.150:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.150000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.148220 systemd-resolved[185]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Feb 9 19:04:20.148272 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Feb 9 19:04:20.154053 systemd[1]: Starting dracut-cmdline.service... Feb 9 19:04:20.155552 systemd-resolved[185]: Defaulting to hostname 'linux'. Feb 9 19:04:20.232312 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Feb 9 19:04:20.232374 kernel: device-mapper: uevent: version 1.0.3 Feb 9 19:04:20.249290 kernel: audit: type=1130 audit(1707505460.234:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.249357 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com Feb 9 19:04:20.234000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:20.232575 systemd[1]: Started systemd-resolved.service. 
Feb 9 19:04:20.256324 dracut-cmdline[200]: dracut-dracut-053
Feb 9 19:04:20.256324 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=4dbf910aaff679d18007a871aba359cc2cf6cb85992bb7598afad40271debbd6
Feb 9 19:04:20.234877 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:04:20.277732 systemd-modules-load[184]: Inserted module 'dm_multipath'
Feb 9 19:04:20.280674 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:04:20.285893 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:04:20.302277 kernel: audit: type=1130 audit(1707505460.284:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:20.284000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:20.303240 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:04:20.305000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:20.319186 kernel: audit: type=1130 audit(1707505460.305:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:20.340192 kernel: Loading iSCSI transport class v2.0-870.
Feb 9 19:04:20.353189 kernel: iscsi: registered transport (tcp)
Feb 9 19:04:20.377712 kernel: iscsi: registered transport (qla4xxx)
Feb 9 19:04:20.377787 kernel: QLogic iSCSI HBA Driver
Feb 9 19:04:20.407468 systemd[1]: Finished dracut-cmdline.service.
Feb 9 19:04:20.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:20.412606 systemd[1]: Starting dracut-pre-udev.service...
Feb 9 19:04:20.462193 kernel: raid6: avx512x4 gen() 18455 MB/s
Feb 9 19:04:20.483181 kernel: raid6: avx512x4 xor() 7181 MB/s
Feb 9 19:04:20.502193 kernel: raid6: avx512x2 gen() 18254 MB/s
Feb 9 19:04:20.522182 kernel: raid6: avx512x2 xor() 29795 MB/s
Feb 9 19:04:20.542176 kernel: raid6: avx512x1 gen() 18398 MB/s
Feb 9 19:04:20.562177 kernel: raid6: avx512x1 xor() 26894 MB/s
Feb 9 19:04:20.582176 kernel: raid6: avx2x4 gen() 18306 MB/s
Feb 9 19:04:20.602176 kernel: raid6: avx2x4 xor() 6904 MB/s
Feb 9 19:04:20.621176 kernel: raid6: avx2x2 gen() 18360 MB/s
Feb 9 19:04:20.642178 kernel: raid6: avx2x2 xor() 22023 MB/s
Feb 9 19:04:20.662175 kernel: raid6: avx2x1 gen() 13831 MB/s
Feb 9 19:04:20.682174 kernel: raid6: avx2x1 xor() 19250 MB/s
Feb 9 19:04:20.703176 kernel: raid6: sse2x4 gen() 11746 MB/s
Feb 9 19:04:20.722174 kernel: raid6: sse2x4 xor() 6453 MB/s
Feb 9 19:04:20.742174 kernel: raid6: sse2x2 gen() 12770 MB/s
Feb 9 19:04:20.762175 kernel: raid6: sse2x2 xor() 7535 MB/s
Feb 9 19:04:20.782173 kernel: raid6: sse2x1 gen() 11518 MB/s
Feb 9 19:04:20.804340 kernel: raid6: sse2x1 xor() 5926 MB/s
Feb 9 19:04:20.804369 kernel: raid6: using algorithm avx512x4 gen() 18455 MB/s
Feb 9 19:04:20.804382 kernel: raid6: .... xor() 7181 MB/s, rmw enabled
Feb 9 19:04:20.807260 kernel: raid6: using avx512x2 recovery algorithm
Feb 9 19:04:20.826191 kernel: xor: automatically using best checksumming function avx
Feb 9 19:04:20.922196 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no
Feb 9 19:04:20.930668 systemd[1]: Finished dracut-pre-udev.service.
Feb 9 19:04:20.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:20.934000 audit: BPF prog-id=7 op=LOAD
Feb 9 19:04:20.934000 audit: BPF prog-id=8 op=LOAD
Feb 9 19:04:20.935501 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:04:20.949915 systemd-udevd[382]: Using default interface naming scheme 'v252'.
Feb 9 19:04:20.954604 systemd[1]: Started systemd-udevd.service.
Feb 9 19:04:20.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:20.974904 systemd[1]: Starting dracut-pre-trigger.service...
Feb 9 19:04:20.985711 dracut-pre-trigger[403]: rd.md=0: removing MD RAID activation
Feb 9 19:04:21.017839 systemd[1]: Finished dracut-pre-trigger.service.
Feb 9 19:04:21.021000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:21.022427 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:04:21.058259 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:04:21.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:21.104184 kernel: cryptd: max_cpu_qlen set to 1000
Feb 9 19:04:21.142188 kernel: AVX2 version of gcm_enc/dec engaged.
Feb 9 19:04:21.142249 kernel: AES CTR mode by8 optimization enabled
Feb 9 19:04:21.145183 kernel: hv_vmbus: Vmbus version:5.2
Feb 9 19:04:21.165181 kernel: hv_vmbus: registering driver hyperv_keyboard
Feb 9 19:04:21.186664 kernel: hid: raw HID events driver (C) Jiri Kosina
Feb 9 19:04:21.186728 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0
Feb 9 19:04:21.196216 kernel: hv_vmbus: registering driver hv_netvsc
Feb 9 19:04:21.196273 kernel: hv_vmbus: registering driver hv_storvsc
Feb 9 19:04:21.203806 kernel: hv_vmbus: registering driver hid_hyperv
Feb 9 19:04:21.203864 kernel: scsi host0: storvsc_host_t
Feb 9 19:04:21.215896 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5
Feb 9 19:04:21.215974 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1
Feb 9 19:04:21.215996 kernel: scsi host1: storvsc_host_t
Feb 9 19:04:21.224106 kernel: hid-generic 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on
Feb 9 19:04:21.224300 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0
Feb 9 19:04:21.250225 kernel: sr 0:0:0:2: [sr0] scsi-1 drive
Feb 9 19:04:21.250462 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Feb 9 19:04:21.255155 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0
Feb 9 19:04:21.255343 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB)
Feb 9 19:04:21.255453 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks
Feb 9 19:04:21.258550 kernel: sd 0:0:0:0: [sda] Write Protect is off
Feb 9 19:04:21.265578 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00
Feb 9 19:04:21.265749 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA
Feb 9 19:04:21.270176 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:04:21.276224 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Feb 9 19:04:21.378193 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (443)
Feb 9 19:04:21.379601 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device.
Feb 9 19:04:21.405189 kernel: hv_netvsc 002248a1-87d9-0022-48a1-87d9002248a1 eth0: VF slot 1 added
Feb 9 19:04:21.421185 kernel: hv_vmbus: registering driver hv_pci
Feb 9 19:04:21.430082 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 9 19:04:21.438300 kernel: hv_pci 20fed664-bdd4-48fa-8bc1-baa12e7467bf: PCI VMBus probing: Using version 0x10004
Feb 9 19:04:21.453249 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device.
Feb 9 19:04:21.458466 kernel: hv_pci 20fed664-bdd4-48fa-8bc1-baa12e7467bf: PCI host bridge to bus bdd4:00
Feb 9 19:04:21.458698 kernel: pci_bus bdd4:00: root bus resource [mem 0xfe0000000-0xfe00fffff window]
Feb 9 19:04:21.458902 kernel: pci_bus bdd4:00: No busn resource found for root bus, will use [bus 00-ff]
Feb 9 19:04:21.467026 kernel: pci bdd4:00:02.0: [15b3:1016] type 00 class 0x020000
Feb 9 19:04:21.467003 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device.
Feb 9 19:04:21.472774 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device.
Feb 9 19:04:21.478307 systemd[1]: Starting disk-uuid.service...
Feb 9 19:04:21.490188 kernel: pci bdd4:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 9 19:04:21.501177 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:04:21.506181 kernel: pci bdd4:00:02.0: enabling Extended Tags
Feb 9 19:04:21.510178 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:04:21.520181 kernel: pci bdd4:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at bdd4:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link)
Feb 9 19:04:21.531559 kernel: pci_bus bdd4:00: busn_res: [bus 00-ff] end is updated to 00
Feb 9 19:04:21.531736 kernel: pci bdd4:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref]
Feb 9 19:04:21.678193 kernel: mlx5_core bdd4:00:02.0: firmware version: 14.30.1350
Feb 9 19:04:21.838193 kernel: mlx5_core bdd4:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0)
Feb 9 19:04:21.977839 kernel: mlx5_core bdd4:00:02.0: Supported tc offload range - chains: 1, prios: 1
Feb 9 19:04:21.978113 kernel: mlx5_core bdd4:00:02.0: mlx5e_tc_post_act_init:40:(pid 491): firmware level support is missing
Feb 9 19:04:21.989368 kernel: hv_netvsc 002248a1-87d9-0022-48a1-87d9002248a1 eth0: VF registering: eth1
Feb 9 19:04:21.989546 kernel: mlx5_core bdd4:00:02.0 eth1: joined to eth0
Feb 9 19:04:22.001224 kernel: mlx5_core bdd4:00:02.0 enP48596s1: renamed from eth1
Feb 9 19:04:22.511198 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Feb 9 19:04:22.511271 disk-uuid[556]: The operation has completed successfully.
Feb 9 19:04:22.596296 systemd[1]: disk-uuid.service: Deactivated successfully.
Feb 9 19:04:22.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:22.600000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:22.596407 systemd[1]: Finished disk-uuid.service.
Feb 9 19:04:22.607199 systemd[1]: Starting verity-setup.service...
Feb 9 19:04:22.632188 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2"
Feb 9 19:04:22.718928 systemd[1]: Found device dev-mapper-usr.device.
Feb 9 19:04:22.724066 systemd[1]: Mounting sysusr-usr.mount...
Feb 9 19:04:22.728000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:22.726338 systemd[1]: Finished verity-setup.service.
Feb 9 19:04:22.799200 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none.
Feb 9 19:04:22.799593 systemd[1]: Mounted sysusr-usr.mount.
Feb 9 19:04:22.803247 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met.
Feb 9 19:04:22.807138 systemd[1]: Starting ignition-setup.service...
Feb 9 19:04:22.811173 systemd[1]: Starting parse-ip-for-networkd.service...
Feb 9 19:04:22.834620 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:04:22.834654 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:04:22.834678 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:04:22.865639 systemd[1]: mnt-oem.mount: Deactivated successfully.
Feb 9 19:04:22.886525 systemd[1]: Finished parse-ip-for-networkd.service.
Feb 9 19:04:22.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:22.891000 audit: BPF prog-id=9 op=LOAD
Feb 9 19:04:22.892482 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:04:22.902505 systemd[1]: Finished ignition-setup.service.
Feb 9 19:04:22.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:22.912743 systemd[1]: Starting ignition-fetch-offline.service...
Feb 9 19:04:22.927327 systemd-networkd[811]: lo: Link UP
Feb 9 19:04:22.927335 systemd-networkd[811]: lo: Gained carrier
Feb 9 19:04:22.931060 systemd-networkd[811]: Enumeration completed
Feb 9 19:04:22.931151 systemd[1]: Started systemd-networkd.service.
Feb 9 19:04:22.936592 systemd[1]: Reached target network.target.
Feb 9 19:04:22.936887 systemd-networkd[811]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:04:22.934000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:22.943805 systemd[1]: Starting iscsiuio.service...
Feb 9 19:04:22.952207 systemd[1]: Started iscsiuio.service.
Feb 9 19:04:22.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:22.955623 systemd[1]: Starting iscsid.service...
Feb 9 19:04:22.960872 iscsid[818]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:04:22.960872 iscsid[818]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.<reversed domain name>[:identifier].
Feb 9 19:04:22.960872 iscsid[818]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6.
Feb 9 19:04:22.960872 iscsid[818]: If using hardware iscsi like qla4xxx this message can be ignored.
Feb 9 19:04:22.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:22.984382 iscsid[818]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi
Feb 9 19:04:22.984382 iscsid[818]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf
Feb 9 19:04:22.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:22.962453 systemd[1]: Started iscsid.service.
Feb 9 19:04:22.966849 systemd[1]: Starting dracut-initqueue.service...
Feb 9 19:04:22.984426 systemd[1]: Finished dracut-initqueue.service.
Feb 9 19:04:22.989421 systemd[1]: Reached target remote-fs-pre.target.
Feb 9 19:04:22.993939 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:04:22.995901 systemd[1]: Reached target remote-fs.target.
Feb 9 19:04:22.998350 systemd[1]: Starting dracut-pre-mount.service...
Feb 9 19:04:23.016248 systemd[1]: Finished dracut-pre-mount.service.
Feb 9 19:04:23.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:23.029206 kernel: mlx5_core bdd4:00:02.0 enP48596s1: Link up
Feb 9 19:04:23.105196 kernel: hv_netvsc 002248a1-87d9-0022-48a1-87d9002248a1 eth0: Data path switched to VF: enP48596s1
Feb 9 19:04:23.106617 systemd-networkd[811]: enP48596s1: Link UP
Feb 9 19:04:23.112739 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 9 19:04:23.106723 systemd-networkd[811]: eth0: Link UP
Feb 9 19:04:23.111574 systemd-networkd[811]: eth0: Gained carrier
Feb 9 19:04:23.118384 systemd-networkd[811]: enP48596s1: Gained carrier
Feb 9 19:04:23.134245 systemd-networkd[811]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:04:23.611221 ignition[813]: Ignition 2.14.0
Feb 9 19:04:23.611236 ignition[813]: Stage: fetch-offline
Feb 9 19:04:23.611319 ignition[813]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:04:23.611363 ignition[813]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:04:23.649482 ignition[813]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:04:23.652277 ignition[813]: parsed url from cmdline: ""
Feb 9 19:04:23.652284 ignition[813]: no config URL provided
Feb 9 19:04:23.652293 ignition[813]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:04:23.652308 ignition[813]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:04:23.652318 ignition[813]: failed to fetch config: resource requires networking
Feb 9 19:04:23.654148 ignition[813]: Ignition finished successfully
Feb 9 19:04:23.663122 systemd[1]: Finished ignition-fetch-offline.service.
Feb 9 19:04:23.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:23.666277 systemd[1]: Starting ignition-fetch.service...
Feb 9 19:04:23.676277 ignition[837]: Ignition 2.14.0
Feb 9 19:04:23.676287 ignition[837]: Stage: fetch
Feb 9 19:04:23.676428 ignition[837]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:04:23.676461 ignition[837]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:04:23.681906 ignition[837]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:04:23.682076 ignition[837]: parsed url from cmdline: ""
Feb 9 19:04:23.682081 ignition[837]: no config URL provided
Feb 9 19:04:23.682087 ignition[837]: reading system config file "/usr/lib/ignition/user.ign"
Feb 9 19:04:23.682096 ignition[837]: no config at "/usr/lib/ignition/user.ign"
Feb 9 19:04:23.682129 ignition[837]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1
Feb 9 19:04:23.780558 ignition[837]: GET result: OK
Feb 9 19:04:23.780711 ignition[837]: config has been read from IMDS userdata
Feb 9 19:04:23.780753 ignition[837]: parsing config with SHA512: e6a22183ddf7b9c905c84caa0b0a83a7443d151308388c69eba46da6fc514b14fa061ada902ab92088bae5e525ca222091088d836431cd667a9ddab57d8edb54
Feb 9 19:04:23.799471 unknown[837]: fetched base config from "system"
Feb 9 19:04:23.801404 unknown[837]: fetched base config from "system"
Feb 9 19:04:23.801413 unknown[837]: fetched user config from "azure"
Feb 9 19:04:23.811470 ignition[837]: fetch: fetch complete
Feb 9 19:04:23.811485 ignition[837]: fetch: fetch passed
Feb 9 19:04:23.811556 ignition[837]: Ignition finished successfully
Feb 9 19:04:23.815639 systemd[1]: Finished ignition-fetch.service.
Feb 9 19:04:23.818000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:23.819824 systemd[1]: Starting ignition-kargs.service...
Feb 9 19:04:23.831444 ignition[843]: Ignition 2.14.0
Feb 9 19:04:23.831455 ignition[843]: Stage: kargs
Feb 9 19:04:23.831585 ignition[843]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:04:23.831611 ignition[843]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:04:23.840054 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:04:23.844407 ignition[843]: kargs: kargs passed
Feb 9 19:04:23.844469 ignition[843]: Ignition finished successfully
Feb 9 19:04:23.848002 systemd[1]: Finished ignition-kargs.service.
Feb 9 19:04:23.851000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:23.852657 systemd[1]: Starting ignition-disks.service...
Feb 9 19:04:23.860879 ignition[849]: Ignition 2.14.0
Feb 9 19:04:23.860890 ignition[849]: Stage: disks
Feb 9 19:04:23.861021 ignition[849]: reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:04:23.861055 ignition[849]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:04:23.868214 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:04:23.869438 ignition[849]: disks: disks passed
Feb 9 19:04:23.872000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:23.870819 systemd[1]: Finished ignition-disks.service.
Feb 9 19:04:23.869485 ignition[849]: Ignition finished successfully
Feb 9 19:04:23.873160 systemd[1]: Reached target initrd-root-device.target.
Feb 9 19:04:23.876599 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:04:23.878579 systemd[1]: Reached target local-fs.target.
Feb 9 19:04:23.880482 systemd[1]: Reached target sysinit.target.
Feb 9 19:04:23.882344 systemd[1]: Reached target basic.target.
Feb 9 19:04:23.886780 systemd[1]: Starting systemd-fsck-root.service...
Feb 9 19:04:23.920048 systemd-fsck[857]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks
Feb 9 19:04:23.924641 systemd[1]: Finished systemd-fsck-root.service.
Feb 9 19:04:23.928000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:23.929895 systemd[1]: Mounting sysroot.mount...
Feb 9 19:04:23.944176 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none.
Feb 9 19:04:23.945054 systemd[1]: Mounted sysroot.mount.
Feb 9 19:04:23.946982 systemd[1]: Reached target initrd-root-fs.target.
Feb 9 19:04:23.970318 systemd[1]: Mounting sysroot-usr.mount...
Feb 9 19:04:23.972628 systemd[1]: Starting flatcar-metadata-hostname.service...
Feb 9 19:04:23.978192 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Feb 9 19:04:23.978234 systemd[1]: Reached target ignition-diskful.target.
Feb 9 19:04:23.987351 systemd[1]: Mounted sysroot-usr.mount.
Feb 9 19:04:23.998617 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:04:24.003646 systemd[1]: Starting initrd-setup-root.service...
Feb 9 19:04:24.016421 initrd-setup-root[872]: cut: /sysroot/etc/passwd: No such file or directory
Feb 9 19:04:24.022920 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (867)
Feb 9 19:04:24.022965 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:04:24.026824 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:04:24.030286 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:04:24.033255 initrd-setup-root[896]: cut: /sysroot/etc/group: No such file or directory
Feb 9 19:04:24.038613 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:04:24.044723 initrd-setup-root[906]: cut: /sysroot/etc/shadow: No such file or directory
Feb 9 19:04:24.050963 initrd-setup-root[914]: cut: /sysroot/etc/gshadow: No such file or directory
Feb 9 19:04:24.153785 systemd[1]: Finished initrd-setup-root.service.
Feb 9 19:04:24.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:24.159066 systemd[1]: Starting ignition-mount.service...
Feb 9 19:04:24.165960 systemd[1]: Starting sysroot-boot.service...
Feb 9 19:04:24.170699 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:04:24.171352 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully.
Feb 9 19:04:24.192908 ignition[934]: INFO : Ignition 2.14.0
Feb 9 19:04:24.192908 ignition[934]: INFO : Stage: mount
Feb 9 19:04:24.196673 ignition[934]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:04:24.196673 ignition[934]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:04:24.207000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:24.208188 ignition[934]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:04:24.208188 ignition[934]: INFO : mount: mount passed
Feb 9 19:04:24.208188 ignition[934]: INFO : Ignition finished successfully
Feb 9 19:04:24.219000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:24.204373 systemd[1]: Finished ignition-mount.service.
Feb 9 19:04:24.236775 kernel: kauditd_printk_skb: 25 callbacks suppressed
Feb 9 19:04:24.236799 kernel: audit: type=1130 audit(1707505464.219:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:24.215243 systemd[1]: Finished sysroot-boot.service.
Feb 9 19:04:24.391666 coreos-metadata[866]: Feb 09 19:04:24.391 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 9 19:04:24.396557 coreos-metadata[866]: Feb 09 19:04:24.396 INFO Fetch successful
Feb 9 19:04:24.431648 coreos-metadata[866]: Feb 09 19:04:24.431 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1
Feb 9 19:04:24.451159 coreos-metadata[866]: Feb 09 19:04:24.451 INFO Fetch successful
Feb 9 19:04:24.458827 coreos-metadata[866]: Feb 09 19:04:24.458 INFO wrote hostname ci-3510.3.2-a-92fe98b439 to /sysroot/etc/hostname
Feb 9 19:04:24.464000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:24.460740 systemd[1]: Finished flatcar-metadata-hostname.service.
Feb 9 19:04:24.482540 kernel: audit: type=1130 audit(1707505464.464:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:24.466136 systemd[1]: Starting ignition-files.service...
Feb 9 19:04:24.477582 systemd-networkd[811]: eth0: Gained IPv6LL
Feb 9 19:04:24.490130 systemd[1]: Mounting sysroot-usr-share-oem.mount...
Feb 9 19:04:24.505189 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (945)
Feb 9 19:04:24.505238 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Feb 9 19:04:24.513223 kernel: BTRFS info (device sda6): using free space tree
Feb 9 19:04:24.513251 kernel: BTRFS info (device sda6): has skinny extents
Feb 9 19:04:24.520636 systemd[1]: Mounted sysroot-usr-share-oem.mount.
Feb 9 19:04:24.534620 ignition[964]: INFO : Ignition 2.14.0
Feb 9 19:04:24.536422 ignition[964]: INFO : Stage: files
Feb 9 19:04:24.536422 ignition[964]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:04:24.536422 ignition[964]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:04:24.547318 ignition[964]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:04:24.551025 ignition[964]: DEBUG : files: compiled without relabeling support, skipping
Feb 9 19:04:24.551025 ignition[964]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 9 19:04:24.551025 ignition[964]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 9 19:04:24.564338 ignition[964]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 9 19:04:24.567631 ignition[964]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 9 19:04:24.574367 unknown[964]: wrote ssh authorized keys file for user: core
Feb 9 19:04:24.576547 ignition[964]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 9 19:04:24.579953 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:04:24.583794 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Feb 9 19:04:24.587746 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:04:24.592336 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/containernetworking/plugins/releases/download/v1.1.1/cni-plugins-linux-amd64-v1.1.1.tgz: attempt #1
Feb 9 19:04:25.236771 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 9 19:04:25.425263 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: 4d0ed0abb5951b9cf83cba938ef84bdc5b681f4ac869da8143974f6a53a3ff30c666389fa462b9d14d30af09bf03f6cdf77598c572f8fb3ea00cecdda467a48d
Feb 9 19:04:25.432347 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.1.1.tgz"
Feb 9 19:04:25.432347 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:04:25.432347 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.26.0/crictl-v1.26.0-linux-amd64.tar.gz: attempt #1
Feb 9 19:04:25.937010 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 9 19:04:26.020614 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: a3a2c02a90b008686c20babaf272e703924db2a3e2a0d4e2a7c81d994cbc68c47458a4a354ecc243af095b390815c7f203348b9749351ae817bd52a522300449
Feb 9 19:04:26.027101 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/crictl-v1.26.0-linux-amd64.tar.gz"
Feb 9 19:04:26.027101 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:04:26.035479 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubeadm: attempt #1
Feb 9 19:04:26.245812 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 9 19:04:26.550149 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: 1c324cd645a7bf93d19d24c87498d9a17878eb1cc927e2680200ffeab2f85051ddec47d85b79b8e774042dc6726299ad3d7caf52c060701f00deba30dc33f660
Feb 9 19:04:26.558304 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 9 19:04:26.558304 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:04:26.558304 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET https://dl.k8s.io/release/v1.26.5/bin/linux/amd64/kubelet: attempt #1
Feb 9 19:04:26.678321 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): GET result: OK
Feb 9 19:04:27.363945 ignition[964]: DEBUG : files: createFilesystemsFiles: createFiles: op(7): file matches expected sum of: 40daf2a9b9e666c14b10e627da931bd79978628b1f23ef6429c1cb4fcba261f86ccff440c0dbb0070ee760fe55772b4fd279c4582dfbb17fa30bc94b7f00126b
Feb 9 19:04:27.370922 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 9 19:04:27.370922 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/install.sh"
Feb 9 19:04:27.370922 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/install.sh"
Feb 9 19:04:27.370922 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:04:27.370922 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 9 19:04:27.393806 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:04:27.393806 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 9 19:04:27.401921 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 19:04:27.406147 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): oem config not found in "/usr/share/oem", looking on oem partition
Feb 9 19:04:27.413855 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem854217550"
Feb 9 19:04:27.423365 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (969)
Feb 9 19:04:27.423397 ignition[964]: CRITICAL : files: createFilesystemsFiles: createFiles: op(b): op(c): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem854217550": device or resource busy
Feb 9 19:04:27.423397 ignition[964]: ERROR : files: createFilesystemsFiles: createFiles: op(b): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem854217550", trying btrfs: device or resource busy
Feb 9 19:04:27.423397 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem854217550"
Feb 9 19:04:27.438797 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(d): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem854217550"
Feb 9 19:04:27.438797 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [started] unmounting "/mnt/oem854217550"
Feb 9 19:04:27.438797 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): op(e): [finished] unmounting "/mnt/oem854217550"
Feb 9 19:04:27.438797 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 9 19:04:27.438797 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): [started] writing file
"/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:04:27.438797 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): oem config not found in "/usr/share/oem", looking on oem partition Feb 9 19:04:27.430642 systemd[1]: mnt-oem854217550.mount: Deactivated successfully. Feb 9 19:04:27.499187 kernel: audit: type=1130 audit(1707505467.466:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.466000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.499277 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1666925259" Feb 9 19:04:27.499277 ignition[964]: CRITICAL : files: createFilesystemsFiles: createFiles: op(f): op(10): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1666925259": device or resource busy Feb 9 19:04:27.499277 ignition[964]: ERROR : files: createFilesystemsFiles: createFiles: op(f): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem1666925259", trying btrfs: device or resource busy Feb 9 19:04:27.499277 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1666925259" Feb 9 19:04:27.499277 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(11): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem1666925259" Feb 9 19:04:27.499277 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [started] unmounting "/mnt/oem1666925259" Feb 9 19:04:27.499277 ignition[964]: INFO : files: createFilesystemsFiles: createFiles: op(f): op(12): [finished] unmounting "/mnt/oem1666925259" Feb 9 19:04:27.499277 ignition[964]: 
INFO : files: createFilesystemsFiles: createFiles: op(f): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service" Feb 9 19:04:27.499277 ignition[964]: INFO : files: op(13): [started] processing unit "waagent.service" Feb 9 19:04:27.499277 ignition[964]: INFO : files: op(13): [finished] processing unit "waagent.service" Feb 9 19:04:27.499277 ignition[964]: INFO : files: op(14): [started] processing unit "nvidia.service" Feb 9 19:04:27.499277 ignition[964]: INFO : files: op(14): [finished] processing unit "nvidia.service" Feb 9 19:04:27.499277 ignition[964]: INFO : files: op(15): [started] processing unit "containerd.service" Feb 9 19:04:27.499277 ignition[964]: INFO : files: op(15): op(16): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:04:27.499277 ignition[964]: INFO : files: op(15): op(16): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf" Feb 9 19:04:27.499277 ignition[964]: INFO : files: op(15): [finished] processing unit "containerd.service" Feb 9 19:04:27.499277 ignition[964]: INFO : files: op(17): [started] processing unit "prepare-cni-plugins.service" Feb 9 19:04:27.455522 systemd[1]: mnt-oem1666925259.mount: Deactivated successfully. 
Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(17): op(18): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(17): op(18): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(17): [finished] processing unit "prepare-cni-plugins.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(19): [started] processing unit "prepare-critools.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(19): op(1a): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(19): op(1a): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(19): [finished] processing unit "prepare-critools.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(1b): [started] setting preset to enabled for "waagent.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(1b): [finished] setting preset to enabled for "waagent.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(1c): [started] setting preset to enabled for "nvidia.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(1c): [finished] setting preset to enabled for "nvidia.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(1d): [started] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(1d): [finished] setting preset to enabled for "prepare-cni-plugins.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(1e): [started] setting preset to enabled for "prepare-critools.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: op(1e): [finished] setting preset to 
enabled for "prepare-critools.service" Feb 9 19:04:27.537859 ignition[964]: INFO : files: createResultFile: createFiles: op(1f): [started] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:04:27.537859 ignition[964]: INFO : files: createResultFile: createFiles: op(1f): [finished] writing file "/sysroot/etc/.ignition-result.json" Feb 9 19:04:27.537859 ignition[964]: INFO : files: files passed Feb 9 19:04:27.537859 ignition[964]: INFO : Ignition finished successfully Feb 9 19:04:27.462344 systemd[1]: Finished ignition-files.service. Feb 9 19:04:27.546024 initrd-setup-root-after-ignition[990]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Feb 9 19:04:27.484287 systemd[1]: Starting initrd-setup-root-after-ignition.service... Feb 9 19:04:27.502431 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile). Feb 9 19:04:27.504263 systemd[1]: Starting ignition-quench.service... Feb 9 19:04:27.530296 systemd[1]: ignition-quench.service: Deactivated successfully. Feb 9 19:04:27.530412 systemd[1]: Finished ignition-quench.service. Feb 9 19:04:27.652000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.653585 systemd[1]: Finished initrd-setup-root-after-ignition.service. Feb 9 19:04:27.688829 kernel: audit: type=1130 audit(1707505467.652:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.688877 kernel: audit: type=1131 audit(1707505467.653:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:04:27.688898 kernel: audit: type=1130 audit(1707505467.674:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.653000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.674000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.675328 systemd[1]: Reached target ignition-complete.target. Feb 9 19:04:27.692082 systemd[1]: Starting initrd-parse-etc.service... Feb 9 19:04:27.708040 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Feb 9 19:04:27.708160 systemd[1]: Finished initrd-parse-etc.service. Feb 9 19:04:27.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.715832 systemd[1]: Reached target initrd-fs.target. Feb 9 19:04:27.741380 kernel: audit: type=1130 audit(1707505467.715:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.741418 kernel: audit: type=1131 audit(1707505467.715:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.715000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 9 19:04:27.737815 systemd[1]: Reached target initrd.target. Feb 9 19:04:27.741380 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met. Feb 9 19:04:27.743106 systemd[1]: Starting dracut-pre-pivot.service... Feb 9 19:04:27.756448 systemd[1]: Finished dracut-pre-pivot.service. Feb 9 19:04:27.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.770122 systemd[1]: Starting initrd-cleanup.service... Feb 9 19:04:27.772392 kernel: audit: type=1130 audit(1707505467.758:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.780512 systemd[1]: Stopped target nss-lookup.target. Feb 9 19:04:27.784198 systemd[1]: Stopped target remote-cryptsetup.target. Feb 9 19:04:27.786341 systemd[1]: Stopped target timers.target. Feb 9 19:04:27.789954 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Feb 9 19:04:27.793000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.790066 systemd[1]: Stopped dracut-pre-pivot.service. Feb 9 19:04:27.808748 kernel: audit: type=1131 audit(1707505467.793:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 9 19:04:27.805329 systemd[1]: Stopped target initrd.target. Feb 9 19:04:27.808877 systemd[1]: Stopped target basic.target. Feb 9 19:04:27.812077 systemd[1]: Stopped target ignition-complete.target. Feb 9 19:04:27.815462 systemd[1]: Stopped target ignition-diskful.target. 
Feb 9 19:04:27.821117 systemd[1]: Stopped target initrd-root-device.target.
Feb 9 19:04:27.825132 systemd[1]: Stopped target remote-fs.target.
Feb 9 19:04:27.829013 systemd[1]: Stopped target remote-fs-pre.target.
Feb 9 19:04:27.832585 systemd[1]: Stopped target sysinit.target.
Feb 9 19:04:27.836344 systemd[1]: Stopped target local-fs.target.
Feb 9 19:04:27.839792 systemd[1]: Stopped target local-fs-pre.target.
Feb 9 19:04:27.843297 systemd[1]: Stopped target swap.target.
Feb 9 19:04:27.846673 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 9 19:04:27.849000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.846829 systemd[1]: Stopped dracut-pre-mount.service.
Feb 9 19:04:27.850039 systemd[1]: Stopped target cryptsetup.target.
Feb 9 19:04:27.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.853969 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 9 19:04:27.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.854122 systemd[1]: Stopped dracut-initqueue.service.
Feb 9 19:04:27.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.857587 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 9 19:04:27.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.857718 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 9 19:04:27.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.884000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.861273 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 9 19:04:27.861401 systemd[1]: Stopped ignition-files.service.
Feb 9 19:04:27.865060 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 9 19:04:27.895945 ignition[1003]: INFO : Ignition 2.14.0
Feb 9 19:04:27.895945 ignition[1003]: INFO : Stage: umount
Feb 9 19:04:27.895945 ignition[1003]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 9 19:04:27.895945 ignition[1003]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 9 19:04:27.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.908000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.908000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.915000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.865207 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 9 19:04:27.919000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.919900 ignition[1003]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 9 19:04:27.919900 ignition[1003]: INFO : umount: umount passed
Feb 9 19:04:27.919900 ignition[1003]: INFO : Ignition finished successfully
Feb 9 19:04:27.924000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.928000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.869832 systemd[1]: Stopping ignition-mount.service...
Feb 9 19:04:27.937000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:27.873448 systemd[1]: Stopping iscsiuio.service...
Feb 9 19:04:27.876043 systemd[1]: Stopping sysroot-boot.service...
Feb 9 19:04:27.877931 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 9 19:04:27.878110 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 9 19:04:27.880422 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 9 19:04:27.880572 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 9 19:04:27.886911 systemd[1]: iscsiuio.service: Deactivated successfully.
Feb 9 19:04:27.887017 systemd[1]: Stopped iscsiuio.service.
Feb 9 19:04:27.900620 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 9 19:04:27.900719 systemd[1]: Finished initrd-cleanup.service.
Feb 9 19:04:27.911371 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 9 19:04:27.911793 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 9 19:04:27.911873 systemd[1]: Stopped ignition-mount.service.
Feb 9 19:04:27.915723 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 9 19:04:27.915775 systemd[1]: Stopped ignition-disks.service.
Feb 9 19:04:27.919882 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 9 19:04:27.919930 systemd[1]: Stopped ignition-kargs.service.
Feb 9 19:04:27.924589 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 9 19:04:27.924633 systemd[1]: Stopped ignition-fetch.service.
Feb 9 19:04:27.928288 systemd[1]: Stopped target network.target.
Feb 9 19:04:27.933280 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 9 19:04:27.933341 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 9 19:04:27.937134 systemd[1]: Stopped target paths.target.
Feb 9 19:04:27.940388 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 9 19:04:27.946207 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 9 19:04:27.985573 systemd[1]: Stopped target slices.target.
Feb 9 19:04:27.989271 systemd[1]: Stopped target sockets.target.
Feb 9 19:04:27.992489 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 9 19:04:27.992531 systemd[1]: Closed iscsid.socket.
Feb 9 19:04:27.997228 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 9 19:04:27.997281 systemd[1]: Closed iscsiuio.socket.
Feb 9 19:04:28.001859 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 9 19:04:28.003785 systemd[1]: Stopped ignition-setup.service.
Feb 9 19:04:28.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.007287 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:04:28.011064 systemd[1]: Stopping systemd-resolved.service...
Feb 9 19:04:28.016812 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 9 19:04:28.017301 systemd-networkd[811]: eth0: DHCPv6 lease lost
Feb 9 19:04:28.018820 systemd[1]: Stopped systemd-resolved.service.
Feb 9 19:04:28.022000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.023008 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:04:28.028000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.025010 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:04:28.030000 audit: BPF prog-id=6 op=UNLOAD
Feb 9 19:04:28.030651 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 9 19:04:28.032000 audit: BPF prog-id=9 op=UNLOAD
Feb 9 19:04:28.030694 systemd[1]: Closed systemd-networkd.socket.
Feb 9 19:04:28.035504 systemd[1]: Stopping network-cleanup.service...
Feb 9 19:04:28.042000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.038060 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 9 19:04:28.038120 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 9 19:04:28.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.043039 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:04:28.043095 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:04:28.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.049313 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 9 19:04:28.049364 systemd[1]: Stopped systemd-modules-load.service.
Feb 9 19:04:28.058153 systemd[1]: Stopping systemd-udevd.service...
Feb 9 19:04:28.067701 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 9 19:04:28.067856 systemd[1]: Stopped systemd-udevd.service.
Feb 9 19:04:28.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.073652 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 9 19:04:28.073699 systemd[1]: Closed systemd-udevd-control.socket.
Feb 9 19:04:28.082000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.075488 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 9 19:04:28.086000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.075528 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 9 19:04:28.079573 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 9 19:04:28.090000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.079622 systemd[1]: Stopped dracut-pre-udev.service.
Feb 9 19:04:28.083019 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 9 19:04:28.083071 systemd[1]: Stopped dracut-cmdline.service.
Feb 9 19:04:28.086993 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 9 19:04:28.087041 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 9 19:04:28.106000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.091675 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 9 19:04:28.110000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.102546 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 9 19:04:28.114000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.102607 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 9 19:04:28.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.118000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.106599 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 9 19:04:28.106645 systemd[1]: Stopped kmod-static-nodes.service.
Feb 9 19:04:28.110320 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 9 19:04:28.110374 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 9 19:04:28.114829 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 9 19:04:28.114918 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 9 19:04:28.141210 kernel: hv_netvsc 002248a1-87d9-0022-48a1-87d9002248a1 eth0: Data path switched from VF: enP48596s1
Feb 9 19:04:28.159437 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 9 19:04:28.163000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.159575 systemd[1]: Stopped network-cleanup.service.
Feb 9 19:04:28.172225 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 9 19:04:28.176000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.172332 systemd[1]: Stopped sysroot-boot.service.
Feb 9 19:04:28.178029 systemd[1]: Reached target initrd-switch-root.target.
Feb 9 19:04:28.182249 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Feb 9 19:04:28.184895 systemd[1]: Stopped initrd-setup-root.service.
Feb 9 19:04:28.188000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:28.189374 systemd[1]: Starting initrd-switch-root.service...
Feb 9 19:04:28.196000 audit: BPF prog-id=5 op=UNLOAD
Feb 9 19:04:28.196000 audit: BPF prog-id=4 op=UNLOAD
Feb 9 19:04:28.196000 audit: BPF prog-id=3 op=UNLOAD
Feb 9 19:04:28.196098 systemd[1]: Switching root.
Feb 9 19:04:28.202000 audit: BPF prog-id=8 op=UNLOAD
Feb 9 19:04:28.202000 audit: BPF prog-id=7 op=UNLOAD
Feb 9 19:04:28.224338 iscsid[818]: iscsid shutting down.
Feb 9 19:04:28.225792 systemd-journald[183]: Received SIGTERM from PID 1 (n/a).
Feb 9 19:04:28.225859 systemd-journald[183]: Journal stopped
Feb 9 19:04:33.107515 kernel: SELinux: Class mctp_socket not defined in policy.
Feb 9 19:04:33.107556 kernel: SELinux: Class anon_inode not defined in policy.
Feb 9 19:04:33.107578 kernel: SELinux: the above unknown classes and permissions will be allowed
Feb 9 19:04:33.107592 kernel: SELinux: policy capability network_peer_controls=1
Feb 9 19:04:33.107606 kernel: SELinux: policy capability open_perms=1
Feb 9 19:04:33.107624 kernel: SELinux: policy capability extended_socket_class=1
Feb 9 19:04:33.107643 kernel: SELinux: policy capability always_check_network=0
Feb 9 19:04:33.107662 kernel: SELinux: policy capability cgroup_seclabel=1
Feb 9 19:04:33.107680 kernel: SELinux: policy capability nnp_nosuid_transition=1
Feb 9 19:04:33.107695 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Feb 9 19:04:33.107714 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Feb 9 19:04:33.107731 systemd[1]: Successfully loaded SELinux policy in 118.686ms.
Feb 9 19:04:33.107751 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 11.938ms.
Feb 9 19:04:33.107771 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 9 19:04:33.107795 systemd[1]: Detected virtualization microsoft.
Feb 9 19:04:33.107812 systemd[1]: Detected architecture x86-64.
Feb 9 19:04:33.107828 systemd[1]: Detected first boot.
Feb 9 19:04:33.107846 systemd[1]: Hostname set to .
Feb 9 19:04:33.107864 systemd[1]: Initializing machine ID from random generator.
Feb 9 19:04:33.107885 kernel: kauditd_printk_skb: 42 callbacks suppressed
Feb 9 19:04:33.107903 kernel: audit: type=1400 audit(1707505469.305:88): avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:04:33.107920 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped).
Feb 9 19:04:33.107939 kernel: audit: type=1400 audit(1707505469.683:89): avc: denied { associate } for pid=1054 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023"
Feb 9 19:04:33.107957 kernel: audit: type=1300 audit(1707505469.683:89): arch=c000003e syscall=188 success=yes exit=0 a0=c00014f672 a1=c0000d0af8 a2=c0000d8a00 a3=32 items=0 ppid=1037 pid=1054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:04:33.107974 kernel: audit: type=1327 audit(1707505469.683:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:04:33.107988 kernel: audit: type=1400 audit(1707505469.689:90): avc: denied { associate } for pid=1054 comm="torcx-generator" name="usr" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1
Feb 9 19:04:33.108002 kernel: audit: type=1300 audit(1707505469.689:90): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014f749 a2=1ed a3=0 items=2 ppid=1037 pid=1054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:04:33.108017 kernel: audit: type=1307 audit(1707505469.689:90): cwd="/"
Feb 9 19:04:33.108032 kernel: audit: type=1302 audit(1707505469.689:90): item=0 name=(null) inode=2 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:33.108048 kernel: audit: type=1302 audit(1707505469.689:90): item=1 name=(null) inode=3 dev=00:29 mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:33.108064 kernel: audit: type=1327 audit(1707505469.689:90): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61
Feb 9 19:04:33.108082 systemd[1]: Populated /etc with preset unit settings.
Feb 9 19:04:33.108098 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:04:33.108115 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:04:33.108130 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:04:33.108144 systemd[1]: Queued start job for default target multi-user.target.
Feb 9 19:04:33.108158 systemd[1]: Created slice system-addon\x2dconfig.slice.
Feb 9 19:04:33.108187 systemd[1]: Created slice system-addon\x2drun.slice.
Feb 9 19:04:33.108201 systemd[1]: Created slice system-getty.slice.
Feb 9 19:04:33.108216 systemd[1]: Created slice system-modprobe.slice.
Feb 9 19:04:33.108236 systemd[1]: Created slice system-serial\x2dgetty.slice.
Feb 9 19:04:33.108251 systemd[1]: Created slice system-system\x2dcloudinit.slice.
Feb 9 19:04:33.108267 systemd[1]: Created slice system-systemd\x2dfsck.slice.
Feb 9 19:04:33.108284 systemd[1]: Created slice user.slice.
Feb 9 19:04:33.108301 systemd[1]: Started systemd-ask-password-console.path.
Feb 9 19:04:33.108316 systemd[1]: Started systemd-ask-password-wall.path.
Feb 9 19:04:33.108333 systemd[1]: Set up automount boot.automount.
Feb 9 19:04:33.108348 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount.
Feb 9 19:04:33.108363 systemd[1]: Reached target integritysetup.target.
Feb 9 19:04:33.108377 systemd[1]: Reached target remote-cryptsetup.target.
Feb 9 19:04:33.108394 systemd[1]: Reached target remote-fs.target.
Feb 9 19:04:33.108410 systemd[1]: Reached target slices.target.
Feb 9 19:04:33.108426 systemd[1]: Reached target swap.target.
Feb 9 19:04:33.108442 systemd[1]: Reached target torcx.target.
Feb 9 19:04:33.108457 systemd[1]: Reached target veritysetup.target.
Feb 9 19:04:33.108475 systemd[1]: Listening on systemd-coredump.socket.
Feb 9 19:04:33.108490 systemd[1]: Listening on systemd-initctl.socket.
Feb 9 19:04:33.108504 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 9 19:04:33.108519 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 9 19:04:33.108534 systemd[1]: Listening on systemd-journald.socket.
Feb 9 19:04:33.108549 systemd[1]: Listening on systemd-networkd.socket.
Feb 9 19:04:33.108565 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 9 19:04:33.108585 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 9 19:04:33.108602 systemd[1]: Listening on systemd-userdbd.socket.
Feb 9 19:04:33.108619 systemd[1]: Mounting dev-hugepages.mount...
Feb 9 19:04:33.108638 systemd[1]: Mounting dev-mqueue.mount...
Feb 9 19:04:33.108654 systemd[1]: Mounting media.mount...
Feb 9 19:04:33.108673 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:04:33.108688 systemd[1]: Mounting sys-kernel-debug.mount...
Feb 9 19:04:33.108703 systemd[1]: Mounting sys-kernel-tracing.mount...
Feb 9 19:04:33.108719 systemd[1]: Mounting tmp.mount...
Feb 9 19:04:33.108734 systemd[1]: Starting flatcar-tmpfiles.service...
Feb 9 19:04:33.108750 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met.
Feb 9 19:04:33.108765 systemd[1]: Starting kmod-static-nodes.service...
Feb 9 19:04:33.108780 systemd[1]: Starting modprobe@configfs.service...
Feb 9 19:04:33.108796 systemd[1]: Starting modprobe@dm_mod.service...
Feb 9 19:04:33.108815 systemd[1]: Starting modprobe@drm.service...
Feb 9 19:04:33.108831 systemd[1]: Starting modprobe@efi_pstore.service...
Feb 9 19:04:33.108848 systemd[1]: Starting modprobe@fuse.service...
Feb 9 19:04:33.108862 systemd[1]: Starting modprobe@loop.service...
Feb 9 19:04:33.108880 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Feb 9 19:04:33.108898 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Feb 9 19:04:33.108915 systemd[1]: (This warning is only shown for the first unit using IP firewalling.)
Feb 9 19:04:33.108932 systemd[1]: Starting systemd-journald.service...
Feb 9 19:04:33.108949 kernel: fuse: init (API version 7.34)
Feb 9 19:04:33.108968 systemd[1]: Starting systemd-modules-load.service...
Feb 9 19:04:33.108987 systemd[1]: Starting systemd-network-generator.service...
Feb 9 19:04:33.109003 kernel: loop: module loaded
Feb 9 19:04:33.109019 systemd[1]: Starting systemd-remount-fs.service...
Feb 9 19:04:33.109036 systemd[1]: Starting systemd-udev-trigger.service...
Feb 9 19:04:33.109051 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen).
Feb 9 19:04:33.109076 systemd-journald[1171]: Journal started
Feb 9 19:04:33.109147 systemd-journald[1171]: Runtime Journal (/run/log/journal/62239acf7c3c4be0b5c80059da15b5fe) is 8.0M, max 159.0M, 151.0M free.
Feb 9 19:04:33.096000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Feb 9 19:04:33.096000 audit[1171]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=5 a1=7ffc0db3ea50 a2=4000 a3=7ffc0db3eaec items=0 ppid=1 pid=1171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:04:33.096000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Feb 9 19:04:33.115187 systemd[1]: Started systemd-journald.service.
Feb 9 19:04:33.118000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.119612 systemd[1]: Mounted dev-hugepages.mount.
Feb 9 19:04:33.121509 systemd[1]: Mounted dev-mqueue.mount.
Feb 9 19:04:33.123299 systemd[1]: Mounted media.mount.
Feb 9 19:04:33.125084 systemd[1]: Mounted sys-kernel-debug.mount.
Feb 9 19:04:33.127490 systemd[1]: Mounted sys-kernel-tracing.mount.
Feb 9 19:04:33.129456 systemd[1]: Mounted tmp.mount.
Feb 9 19:04:33.131260 systemd[1]: Finished flatcar-tmpfiles.service.
Feb 9 19:04:33.133000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.133414 systemd[1]: Finished kmod-static-nodes.service.
Feb 9 19:04:33.135000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.135608 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Feb 9 19:04:33.135775 systemd[1]: Finished modprobe@configfs.service.
Feb 9 19:04:33.137000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.137000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.138149 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Feb 9 19:04:33.138411 systemd[1]: Finished modprobe@dm_mod.service.
Feb 9 19:04:33.140000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.140668 systemd[1]: modprobe@drm.service: Deactivated successfully.
Feb 9 19:04:33.140823 systemd[1]: Finished modprobe@drm.service.
Feb 9 19:04:33.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.142000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.142998 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Feb 9 19:04:33.143160 systemd[1]: Finished modprobe@efi_pstore.service.
Feb 9 19:04:33.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.145000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.145579 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Feb 9 19:04:33.145735 systemd[1]: Finished modprobe@fuse.service.
Feb 9 19:04:33.147000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.147000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.147779 systemd[1]: modprobe@loop.service: Deactivated successfully.
Feb 9 19:04:33.147956 systemd[1]: Finished modprobe@loop.service.
Feb 9 19:04:33.149000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.149000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.150272 systemd[1]: Finished systemd-network-generator.service.
Feb 9 19:04:33.152000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.152728 systemd[1]: Finished systemd-remount-fs.service.
Feb 9 19:04:33.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.155390 systemd[1]: Reached target network-pre.target.
Feb 9 19:04:33.164312 systemd[1]: Mounting sys-fs-fuse-connections.mount...
Feb 9 19:04:33.168262 systemd[1]: Mounting sys-kernel-config.mount...
Feb 9 19:04:33.170158 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Feb 9 19:04:33.181047 systemd[1]: Starting systemd-hwdb-update.service...
Feb 9 19:04:33.184195 systemd[1]: Starting systemd-journal-flush.service...
Feb 9 19:04:33.186186 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Feb 9 19:04:33.187750 systemd[1]: Starting systemd-random-seed.service...
Feb 9 19:04:33.190160 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met.
Feb 9 19:04:33.191794 systemd[1]: Starting systemd-sysusers.service...
Feb 9 19:04:33.201818 systemd-journald[1171]: Time spent on flushing to /var/log/journal/62239acf7c3c4be0b5c80059da15b5fe is 71.103ms for 1084 entries.
Feb 9 19:04:33.201818 systemd-journald[1171]: System Journal (/var/log/journal/62239acf7c3c4be0b5c80059da15b5fe) is 8.0M, max 2.6G, 2.6G free.
Feb 9 19:04:33.321097 systemd-journald[1171]: Received client request to flush runtime journal.
Feb 9 19:04:33.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.237000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.285000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.198270 systemd[1]: Mounted sys-fs-fuse-connections.mount.
Feb 9 19:04:33.200350 systemd[1]: Mounted sys-kernel-config.mount.
Feb 9 19:04:33.222313 systemd[1]: Finished systemd-random-seed.service.
Feb 9 19:04:33.224725 systemd[1]: Reached target first-boot-complete.target.
Feb 9 19:04:33.235135 systemd[1]: Finished systemd-modules-load.service.
Feb 9 19:04:33.239079 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:04:33.283623 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:04:33.324000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.322416 systemd[1]: Finished systemd-journal-flush.service.
Feb 9 19:04:33.341000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.339217 systemd[1]: Finished systemd-udev-trigger.service.
Feb 9 19:04:33.343241 systemd[1]: Starting systemd-udev-settle.service...
Feb 9 19:04:33.356256 udevadm[1212]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 9 19:04:33.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.373235 systemd[1]: Finished systemd-sysusers.service.
Feb 9 19:04:33.377266 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 9 19:04:33.457327 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 9 19:04:33.459000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.909000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.906708 systemd[1]: Finished systemd-hwdb-update.service.
Feb 9 19:04:33.911272 systemd[1]: Starting systemd-udevd.service...
Feb 9 19:04:33.929357 systemd-udevd[1218]: Using default interface naming scheme 'v252'.
Feb 9 19:04:33.977000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:33.975386 systemd[1]: Started systemd-udevd.service.
Feb 9 19:04:33.981516 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:04:34.006270 systemd[1]: Starting systemd-userdbd.service...
Feb 9 19:04:34.056823 systemd[1]: Found device dev-ttyS0.device.
Feb 9 19:04:34.063313 systemd[1]: Started systemd-userdbd.service.
Feb 9 19:04:34.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:34.150187 kernel: mousedev: PS/2 mouse device common for all mice
Feb 9 19:04:34.178000 audit[1237]: AVC avc: denied { confidentiality } for pid=1237 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 9 19:04:34.196186 kernel: hv_vmbus: registering driver hv_balloon
Feb 9 19:04:34.210190 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 9 19:04:34.178000 audit[1237]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55f9214a0670 a1=f884 a2=7f688e572bc5 a3=5 items=12 ppid=1218 pid=1237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:04:34.178000 audit: CWD cwd="/"
Feb 9 19:04:34.178000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:34.178000 audit: PATH item=1 name=(null) inode=15191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:34.178000 audit: PATH item=2 name=(null) inode=15191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:34.178000 audit: PATH item=3 name=(null) inode=15192 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:34.218260 kernel: hv_vmbus: registering driver hyperv_fb
Feb 9 19:04:34.178000 audit: PATH item=4 name=(null) inode=15191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:34.178000 audit: PATH item=5 name=(null) inode=15193 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:34.178000 audit: PATH item=6 name=(null) inode=15191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:34.178000 audit: PATH item=7 name=(null) inode=15194 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:34.178000 audit: PATH item=8 name=(null) inode=15191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:34.178000 audit: PATH item=9 name=(null) inode=15195 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:34.178000 audit: PATH item=10 name=(null) inode=15191 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:34.178000 audit: PATH item=11 name=(null) inode=15196 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 9 19:04:34.178000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 9 19:04:34.226825 kernel: hv_utils: Registering HyperV Utility Driver
Feb 9 19:04:34.226911 kernel: hv_vmbus: registering driver hv_utils
Feb 9 19:04:34.243982 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 9 19:04:34.244082 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 9 19:04:34.244105 kernel: hv_utils: Heartbeat IC version 3.0
Feb 9 19:04:34.244124 kernel: hv_utils: Shutdown IC version 3.2
Feb 9 19:04:34.840129 kernel: hv_utils: TimeSync IC version 4.0
Feb 9 19:04:34.842655 systemd-networkd[1232]: lo: Link UP
Feb 9 19:04:34.842666 systemd-networkd[1232]: lo: Gained carrier
Feb 9 19:04:34.843870 systemd-networkd[1232]: Enumeration completed
Feb 9 19:04:34.844022 systemd[1]: Started systemd-networkd.service.
Feb 9 19:04:34.844000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:34.847923 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:04:34.855019 kernel: Console: switching to colour dummy device 80x25
Feb 9 19:04:34.865586 systemd-networkd[1232]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:04:34.874815 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1235)
Feb 9 19:04:34.892824 kernel: Console: switching to colour frame buffer device 128x48
Feb 9 19:04:34.910796 kernel: mlx5_core bdd4:00:02.0 enP48596s1: Link up
Feb 9 19:04:34.950002 kernel: hv_netvsc 002248a1-87d9-0022-48a1-87d9002248a1 eth0: Data path switched to VF: enP48596s1
Feb 9 19:04:34.951238 systemd-networkd[1232]: enP48596s1: Link UP
Feb 9 19:04:34.951999 systemd-networkd[1232]: eth0: Link UP
Feb 9 19:04:34.952107 systemd-networkd[1232]: eth0: Gained carrier
Feb 9 19:04:34.956089 systemd-networkd[1232]: enP48596s1: Gained carrier
Feb 9 19:04:34.980961 systemd-networkd[1232]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:04:35.012725 systemd[1]: dev-disk-by\x2dlabel-OEM.device was skipped because of an unmet condition check (ConditionPathExists=!/usr/.noupdate).
Feb 9 19:04:35.059797 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Feb 9 19:04:35.098334 systemd[1]: Finished systemd-udev-settle.service.
Feb 9 19:04:35.099000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:35.103228 kernel: kauditd_printk_skb: 49 callbacks suppressed
Feb 9 19:04:35.103268 kernel: audit: type=1130 audit(1707505475.099:123): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:35.102377 systemd[1]: Starting lvm2-activation-early.service...
Feb 9 19:04:35.220377 lvm[1297]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:04:35.245100 systemd[1]: Finished lvm2-activation-early.service.
Feb 9 19:04:35.246000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:35.247864 systemd[1]: Reached target cryptsetup.target.
Feb 9 19:04:35.264996 kernel: audit: type=1130 audit(1707505475.246:124): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:35.263765 systemd[1]: Starting lvm2-activation.service...
Feb 9 19:04:35.270989 lvm[1299]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 9 19:04:35.289950 systemd[1]: Finished lvm2-activation.service.
Feb 9 19:04:35.292643 systemd[1]: Reached target local-fs-pre.target.
Feb 9 19:04:35.303780 kernel: audit: type=1130 audit(1707505475.291:125): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:35.291000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:35.304369 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 9 19:04:35.304404 systemd[1]: Reached target local-fs.target.
Feb 9 19:04:35.306550 systemd[1]: Reached target machines.target.
Feb 9 19:04:35.309748 systemd[1]: Starting ldconfig.service...
Feb 9 19:04:35.312032 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 9 19:04:35.312152 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:04:35.313417 systemd[1]: Starting systemd-boot-update.service...
Feb 9 19:04:35.316822 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 9 19:04:35.320616 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 9 19:04:35.323571 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:04:35.323660 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 9 19:04:35.325167 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 9 19:04:35.336357 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1302 (bootctl)
Feb 9 19:04:35.337917 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 9 19:04:35.516025 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 9 19:04:35.530545 kernel: audit: type=1130 audit(1707505475.516:126): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:35.516000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:35.912990 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 9 19:04:36.125101 systemd-networkd[1232]: eth0: Gained IPv6LL
Feb 9 19:04:36.130983 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 19:04:36.131000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:36.142798 kernel: audit: type=1130 audit(1707505476.131:127): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:36.607172 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 9 19:04:36.612667 systemd-tmpfiles[1305]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 9 19:04:37.035000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.034319 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 9 19:04:37.035336 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 9 19:04:37.048826 kernel: audit: type=1130 audit(1707505477.035:128): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.185664 systemd-fsck[1312]: fsck.fat 4.2 (2021-01-31)
Feb 9 19:04:37.185664 systemd-fsck[1312]: /dev/sda1: 789 files, 115339/258078 clusters
Feb 9 19:04:37.188160 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 9 19:04:37.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.194178 systemd[1]: Mounting boot.mount...
Feb 9 19:04:37.206423 kernel: audit: type=1130 audit(1707505477.190:129): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.211327 systemd[1]: Mounted boot.mount.
Feb 9 19:04:37.228000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.227482 systemd[1]: Finished systemd-boot-update.service.
Feb 9 19:04:37.241801 kernel: audit: type=1130 audit(1707505477.228:130): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.327539 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 9 19:04:37.329000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.331724 systemd[1]: Starting audit-rules.service...
Feb 9 19:04:37.344312 kernel: audit: type=1130 audit(1707505477.329:131): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.345618 systemd[1]: Starting clean-ca-certificates.service...
Feb 9 19:04:37.350344 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 9 19:04:37.354530 systemd[1]: Starting systemd-resolved.service...
Feb 9 19:04:37.360046 systemd[1]: Starting systemd-timesyncd.service...
Feb 9 19:04:37.364944 systemd[1]: Starting systemd-update-utmp.service...
Feb 9 19:04:37.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.369215 systemd[1]: Finished clean-ca-certificates.service.
Feb 9 19:04:37.372668 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 9 19:04:37.385877 kernel: audit: type=1130 audit(1707505477.370:132): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.392000 audit[1335]: SYSTEM_BOOT pid=1335 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.399200 systemd[1]: Finished systemd-update-utmp.service.
Feb 9 19:04:37.400000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.468683 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 9 19:04:37.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 9 19:04:37.472000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 9 19:04:37.472000 audit[1345]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffd1d4301c0 a2=420 a3=0 items=0 ppid=1323 pid=1345 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 9 19:04:37.472000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 9 19:04:37.474966 systemd[1]: Finished audit-rules.service.
Feb 9 19:04:37.476122 augenrules[1345]: No rules
Feb 9 19:04:37.506289 systemd-resolved[1328]: Positive Trust Anchors:
Feb 9 19:04:37.506707 systemd-resolved[1328]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 9 19:04:37.506880 systemd-resolved[1328]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 9 19:04:37.510767 systemd[1]: Started systemd-timesyncd.service.
Feb 9 19:04:37.513040 systemd[1]: Reached target time-set.target.
Feb 9 19:04:37.559953 systemd-resolved[1328]: Using system hostname 'ci-3510.3.2-a-92fe98b439'.
Feb 9 19:04:37.561646 systemd[1]: Started systemd-resolved.service.
Feb 9 19:04:37.563902 systemd[1]: Reached target network.target.
Feb 9 19:04:37.565832 systemd[1]: Reached target network-online.target.
Feb 9 19:04:37.568024 systemd[1]: Reached target nss-lookup.target.
Feb 9 19:04:37.695077 systemd-timesyncd[1334]: Contacted time server 162.159.200.1:123 (0.flatcar.pool.ntp.org).
Feb 9 19:04:37.695586 systemd-timesyncd[1334]: Initial clock synchronization to Fri 2024-02-09 19:04:37.696142 UTC.
Feb 9 19:04:38.729251 ldconfig[1301]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 9 19:04:38.740916 systemd[1]: Finished ldconfig.service.
Feb 9 19:04:38.745468 systemd[1]: Starting systemd-update-done.service...
Feb 9 19:04:38.752323 systemd[1]: Finished systemd-update-done.service.
Feb 9 19:04:38.754520 systemd[1]: Reached target sysinit.target.
Feb 9 19:04:38.756898 systemd[1]: Started motdgen.path.
Feb 9 19:04:38.758518 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 9 19:04:38.761303 systemd[1]: Started logrotate.timer.
Feb 9 19:04:38.763086 systemd[1]: Started mdadm.timer.
Feb 9 19:04:38.764843 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 9 19:04:38.766697 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 9 19:04:38.766743 systemd[1]: Reached target paths.target.
Feb 9 19:04:38.768364 systemd[1]: Reached target timers.target.
Feb 9 19:04:38.773228 systemd[1]: Listening on dbus.socket.
Feb 9 19:04:38.776181 systemd[1]: Starting docker.socket...
Feb 9 19:04:38.779711 systemd[1]: Listening on sshd.socket.
Feb 9 19:04:38.781760 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:04:38.782299 systemd[1]: Listening on docker.socket.
Feb 9 19:04:38.784242 systemd[1]: Reached target sockets.target.
Feb 9 19:04:38.786081 systemd[1]: Reached target basic.target.
Feb 9 19:04:38.788397 systemd[1]: System is tainted: cgroupsv1
Feb 9 19:04:38.788459 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:04:38.788490 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 9 19:04:38.789647 systemd[1]: Starting containerd.service...
Feb 9 19:04:38.793119 systemd[1]: Starting dbus.service...
Feb 9 19:04:38.796010 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 9 19:04:38.799213 systemd[1]: Starting extend-filesystems.service...
Feb 9 19:04:38.801271 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 9 19:04:38.802618 systemd[1]: Starting motdgen.service...
Feb 9 19:04:38.806186 systemd[1]: Started nvidia.service.
Feb 9 19:04:38.809853 systemd[1]: Starting prepare-cni-plugins.service...
Feb 9 19:04:38.818720 systemd[1]: Starting prepare-critools.service...
Feb 9 19:04:38.824371 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 9 19:04:38.828653 systemd[1]: Starting sshd-keygen.service...
Feb 9 19:04:38.837529 systemd[1]: Starting systemd-logind.service...
Feb 9 19:04:38.840137 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 9 19:04:38.840235 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 9 19:04:38.841832 systemd[1]: Starting update-engine.service...
Feb 9 19:04:38.846017 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 9 19:04:38.863814 jq[1361]: false
Feb 9 19:04:38.857659 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 9 19:04:38.858030 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 9 19:04:38.871591 jq[1379]: true
Feb 9 19:04:38.873170 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 9 19:04:38.873483 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 9 19:04:38.903072 jq[1397]: true
Feb 9 19:04:38.910283 tar[1382]: ./
Feb 9 19:04:38.910283 tar[1382]: ./macvlan
Feb 9 19:04:38.915837 tar[1383]: crictl
Feb 9 19:04:38.920278 extend-filesystems[1362]: Found sda
Feb 9 19:04:38.923223 extend-filesystems[1362]: Found sda1
Feb 9 19:04:38.923223 extend-filesystems[1362]: Found sda2
Feb 9 19:04:38.923223 extend-filesystems[1362]: Found sda3
Feb 9 19:04:38.923223 extend-filesystems[1362]: Found usr
Feb 9 19:04:38.923223 extend-filesystems[1362]: Found sda4
Feb 9 19:04:38.923223 extend-filesystems[1362]: Found sda6
Feb 9 19:04:38.923223 extend-filesystems[1362]: Found sda7
Feb 9 19:04:38.923223 extend-filesystems[1362]: Found sda9
Feb 9 19:04:38.923223 extend-filesystems[1362]: Checking size of /dev/sda9
Feb 9 19:04:38.940127 systemd[1]: motdgen.service: Deactivated successfully.
Feb 9 19:04:38.954836 dbus-daemon[1360]: [system] SELinux support is enabled
Feb 9 19:04:39.004700 extend-filesystems[1362]: Old size kept for /dev/sda9
Feb 9 19:04:39.004700 extend-filesystems[1362]: Found sr0
Feb 9 19:04:38.940421 systemd[1]: Finished motdgen.service.
Feb 9 19:04:38.955035 systemd[1]: Started dbus.service.
Feb 9 19:04:38.959588 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Feb 9 19:04:38.959619 systemd[1]: Reached target system-config.target.
Feb 9 19:04:38.964034 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Feb 9 19:04:38.964063 systemd[1]: Reached target user-config.target.
Feb 9 19:04:38.990524 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 9 19:04:38.990835 systemd[1]: Finished extend-filesystems.service.
Feb 9 19:04:39.081759 bash[1428]: Updated "/home/core/.ssh/authorized_keys"
Feb 9 19:04:39.082203 systemd[1]: Finished update-ssh-keys-after-ignition.service.
Feb 9 19:04:39.099561 env[1392]: time="2024-02-09T19:04:39.099499407Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 9 19:04:39.107078 systemd[1]: nvidia.service: Deactivated successfully.
Feb 9 19:04:39.113019 systemd-logind[1375]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 9 19:04:39.116982 systemd-logind[1375]: New seat seat0.
Feb 9 19:04:39.132652 systemd[1]: Started systemd-logind.service.
Feb 9 19:04:39.146088 env[1392]: time="2024-02-09T19:04:39.146037760Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 9 19:04:39.146244 env[1392]: time="2024-02-09T19:04:39.146221273Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:04:39.148056 env[1392]: time="2024-02-09T19:04:39.148011298Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:04:39.148056 env[1392]: time="2024-02-09T19:04:39.148052901Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:04:39.148431 env[1392]: time="2024-02-09T19:04:39.148398225Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:04:39.148504 env[1392]: time="2024-02-09T19:04:39.148432028Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 9 19:04:39.148504 env[1392]: time="2024-02-09T19:04:39.148450929Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 9 19:04:39.148504 env[1392]: time="2024-02-09T19:04:39.148464630Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 9 19:04:39.148676 env[1392]: time="2024-02-09T19:04:39.148574438Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:04:39.148920 env[1392]: time="2024-02-09T19:04:39.148892160Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 9 19:04:39.149183 env[1392]: time="2024-02-09T19:04:39.149152478Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 9 19:04:39.149251 env[1392]: time="2024-02-09T19:04:39.149185380Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 9 19:04:39.149296 env[1392]: time="2024-02-09T19:04:39.149256885Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 9 19:04:39.149296 env[1392]: time="2024-02-09T19:04:39.149275087Z" level=info msg="metadata content store policy set" policy=shared
Feb 9 19:04:39.164209 tar[1382]: ./static
Feb 9 19:04:39.168344 env[1392]: time="2024-02-09T19:04:39.168293416Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 9 19:04:39.168464 env[1392]: time="2024-02-09T19:04:39.168366921Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 9 19:04:39.168464 env[1392]: time="2024-02-09T19:04:39.168384923Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 9 19:04:39.168546 env[1392]: time="2024-02-09T19:04:39.168492030Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 9 19:04:39.168585 env[1392]: time="2024-02-09T19:04:39.168548734Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 9 19:04:39.168623 env[1392]: time="2024-02-09T19:04:39.168570236Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 9 19:04:39.168623 env[1392]: time="2024-02-09T19:04:39.168604238Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 9 19:04:39.168701 env[1392]: time="2024-02-09T19:04:39.168629440Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 9 19:04:39.168701 env[1392]: time="2024-02-09T19:04:39.168650841Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 9 19:04:39.168701 env[1392]: time="2024-02-09T19:04:39.168681943Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 9 19:04:39.168816 env[1392]: time="2024-02-09T19:04:39.168702645Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 9 19:04:39.168816 env[1392]: time="2024-02-09T19:04:39.168724646Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 9 19:04:39.168937 env[1392]: time="2024-02-09T19:04:39.168913060Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 9 19:04:39.169089 env[1392]: time="2024-02-09T19:04:39.169067570Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 9 19:04:39.169710 env[1392]: time="2024-02-09T19:04:39.169683513Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 9 19:04:39.169789 env[1392]: time="2024-02-09T19:04:39.169741317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.169834 env[1392]: time="2024-02-09T19:04:39.169766519Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 9 19:04:39.169957 env[1392]: time="2024-02-09T19:04:39.169937131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.170004 env[1392]: time="2024-02-09T19:04:39.169964633Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.170004 env[1392]: time="2024-02-09T19:04:39.169984534Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.170086 env[1392]: time="2024-02-09T19:04:39.170015237Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.170086 env[1392]: time="2024-02-09T19:04:39.170049339Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.170159 env[1392]: time="2024-02-09T19:04:39.170070941Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.170159 env[1392]: time="2024-02-09T19:04:39.170105143Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.170159 env[1392]: time="2024-02-09T19:04:39.170124144Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.170269 env[1392]: time="2024-02-09T19:04:39.170149046Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 9 19:04:39.170372 env[1392]: time="2024-02-09T19:04:39.170352760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.170427 env[1392]: time="2024-02-09T19:04:39.170380262Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.170427 env[1392]: time="2024-02-09T19:04:39.170413664Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.170500 env[1392]: time="2024-02-09T19:04:39.170442767Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
Feb 9 19:04:39.170500 env[1392]: time="2024-02-09T19:04:39.170466768Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
Feb 9 19:04:39.170575 env[1392]: time="2024-02-09T19:04:39.170496870Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
Feb 9 19:04:39.170575 env[1392]: time="2024-02-09T19:04:39.170522772Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin"
Feb 9 19:04:39.170645 env[1392]: time="2024-02-09T19:04:39.170580676Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1
Feb 9 19:04:39.170976 env[1392]: time="2024-02-09T19:04:39.170898198Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}"
Feb 9 19:04:39.180601 env[1392]: time="2024-02-09T19:04:39.170991705Z" level=info msg="Connect containerd service"
Feb 9 19:04:39.180601 env[1392]: time="2024-02-09T19:04:39.171047309Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\""
Feb 9 19:04:39.180601 env[1392]: time="2024-02-09T19:04:39.171996075Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 9 19:04:39.180601 env[1392]: time="2024-02-09T19:04:39.172131385Z" level=info msg="Start subscribing containerd event"
Feb 9 19:04:39.180601 env[1392]: time="2024-02-09T19:04:39.172200989Z" level=info msg="Start recovering state"
Feb 9 19:04:39.180601 env[1392]: time="2024-02-09T19:04:39.172282095Z" level=info msg="Start event monitor"
Feb 9 19:04:39.180601 env[1392]: time="2024-02-09T19:04:39.172296296Z" level=info msg="Start snapshots syncer"
Feb 9 19:04:39.180601 env[1392]: time="2024-02-09T19:04:39.172307597Z" level=info msg="Start cni network conf syncer for default"
Feb 9 19:04:39.180601 env[1392]: time="2024-02-09T19:04:39.172318598Z" level=info msg="Start streaming server"
Feb 9 19:04:39.180601 env[1392]: time="2024-02-09T19:04:39.172839934Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc
Feb 9 19:04:39.180601 env[1392]: time="2024-02-09T19:04:39.172975744Z" level=info msg=serving... address=/run/containerd/containerd.sock
Feb 9 19:04:39.173174 systemd[1]: Started containerd.service.
Feb 9 19:04:39.181124 update_engine[1377]: I0209 19:04:39.172223 1377 main.cc:92] Flatcar Update Engine starting
Feb 9 19:04:39.189949 systemd[1]: Started update-engine.service.
Feb 9 19:04:39.190245 update_engine[1377]: I0209 19:04:39.190018 1377 update_check_scheduler.cc:74] Next update check in 2m33s
Feb 9 19:04:39.194738 systemd[1]: Started locksmithd.service.
Feb 9 19:04:39.200706 env[1392]: time="2024-02-09T19:04:39.200651279Z" level=info msg="containerd successfully booted in 0.102027s"
Feb 9 19:04:39.261854 tar[1382]: ./vlan
Feb 9 19:04:39.370307 tar[1382]: ./portmap
Feb 9 19:04:39.453880 tar[1382]: ./host-local
Feb 9 19:04:39.517806 tar[1382]: ./vrf
Feb 9 19:04:39.601009 tar[1382]: ./bridge
Feb 9 19:04:39.683902 tar[1382]: ./tuning
Feb 9 19:04:39.715914 systemd[1]: Finished prepare-critools.service.
Feb 9 19:04:39.750453 tar[1382]: ./firewall
Feb 9 19:04:39.795673 tar[1382]: ./host-device
Feb 9 19:04:39.835134 tar[1382]: ./sbr
Feb 9 19:04:39.871792 tar[1382]: ./loopback
Feb 9 19:04:39.906139 tar[1382]: ./dhcp
Feb 9 19:04:39.908442 sshd_keygen[1393]: ssh-keygen: generating new host keys: RSA ECDSA ED25519
Feb 9 19:04:39.943258 systemd[1]: Finished sshd-keygen.service.
Feb 9 19:04:39.948170 systemd[1]: Starting issuegen.service...
Feb 9 19:04:39.951825 systemd[1]: Started waagent.service.
Feb 9 19:04:39.957378 systemd[1]: issuegen.service: Deactivated successfully.
Feb 9 19:04:39.957678 systemd[1]: Finished issuegen.service.
Feb 9 19:04:39.965010 systemd[1]: Starting systemd-user-sessions.service...
Feb 9 19:04:39.977372 systemd[1]: Finished systemd-user-sessions.service.
Feb 9 19:04:39.981557 systemd[1]: Started getty@tty1.service.
Feb 9 19:04:39.985475 systemd[1]: Started serial-getty@ttyS0.service.
Feb 9 19:04:39.987993 systemd[1]: Reached target getty.target.
Feb 9 19:04:40.047499 tar[1382]: ./ptp
Feb 9 19:04:40.081616 tar[1382]: ./ipvlan
Feb 9 19:04:40.116295 tar[1382]: ./bandwidth
Feb 9 19:04:40.168376 systemd[1]: Finished prepare-cni-plugins.service.
Feb 9 19:04:40.171234 systemd[1]: Reached target multi-user.target.
Feb 9 19:04:40.175117 systemd[1]: Starting systemd-update-utmp-runlevel.service...
Feb 9 19:04:40.183224 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Feb 9 19:04:40.183491 systemd[1]: Finished systemd-update-utmp-runlevel.service.
Feb 9 19:04:40.188634 systemd[1]: Startup finished in 648ms (firmware) + 6.692s (loader) + 10.067s (kernel) + 10.739s (userspace) = 28.147s.
Feb 9 19:04:40.261354 locksmithd[1445]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot"
Feb 9 19:04:40.279571 login[1486]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:04:40.279746 login[1485]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0)
Feb 9 19:04:40.294420 systemd[1]: Created slice user-500.slice.
Feb 9 19:04:40.295935 systemd[1]: Starting user-runtime-dir@500.service...
Feb 9 19:04:40.299564 systemd-logind[1375]: New session 2 of user core.
Feb 9 19:04:40.303278 systemd-logind[1375]: New session 1 of user core.
Feb 9 19:04:40.316677 systemd[1]: Finished user-runtime-dir@500.service.
Feb 9 19:04:40.318682 systemd[1]: Starting user@500.service...
Feb 9 19:04:40.326593 (systemd)[1501]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:04:40.441580 systemd[1501]: Queued start job for default target default.target.
Feb 9 19:04:40.441948 systemd[1501]: Reached target paths.target.
Feb 9 19:04:40.441971 systemd[1501]: Reached target sockets.target.
Feb 9 19:04:40.441989 systemd[1501]: Reached target timers.target.
Feb 9 19:04:40.442004 systemd[1501]: Reached target basic.target.
Feb 9 19:04:40.442181 systemd[1]: Started user@500.service.
Feb 9 19:04:40.443459 systemd[1]: Started session-1.scope.
Feb 9 19:04:40.444279 systemd[1]: Started session-2.scope.
Feb 9 19:04:40.444745 systemd[1501]: Reached target default.target.
Feb 9 19:04:40.444999 systemd[1501]: Startup finished in 108ms.
Feb 9 19:04:43.004812 waagent[1477]: 2024-02-09T19:04:43.004656Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2
Feb 9 19:04:43.016337 waagent[1477]: 2024-02-09T19:04:43.006869Z INFO Daemon Daemon OS: flatcar 3510.3.2
Feb 9 19:04:43.016337 waagent[1477]: 2024-02-09T19:04:43.007600Z INFO Daemon Daemon Python: 3.9.16
Feb 9 19:04:43.016337 waagent[1477]: 2024-02-09T19:04:43.008614Z INFO Daemon Daemon Run daemon
Feb 9 19:04:43.016337 waagent[1477]: 2024-02-09T19:04:43.009738Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2'
Feb 9 19:04:43.021218 waagent[1477]: 2024-02-09T19:04:43.021096Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 19:04:43.027605 waagent[1477]: 2024-02-09T19:04:43.027496Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 19:04:43.054017 waagent[1477]: 2024-02-09T19:04:43.028874Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 19:04:43.054017 waagent[1477]: 2024-02-09T19:04:43.029559Z INFO Daemon Daemon Using waagent for provisioning
Feb 9 19:04:43.054017 waagent[1477]: 2024-02-09T19:04:43.030870Z INFO Daemon Daemon Activate resource disk
Feb 9 19:04:43.054017 waagent[1477]: 2024-02-09T19:04:43.031724Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb
Feb 9 19:04:43.054017 waagent[1477]: 2024-02-09T19:04:43.039417Z INFO Daemon Daemon Found device: None
Feb 9 19:04:43.054017 waagent[1477]: 2024-02-09T19:04:43.040255Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology
Feb 9 19:04:43.054017 waagent[1477]: 2024-02-09T19:04:43.041015Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0
Feb 9 19:04:43.054017 waagent[1477]: 2024-02-09T19:04:43.042637Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 19:04:43.054017 waagent[1477]: 2024-02-09T19:04:43.043521Z INFO Daemon Daemon Running default provisioning handler
Feb 9 19:04:43.056644 waagent[1477]: 2024-02-09T19:04:43.056533Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1.
Feb 9 19:04:43.062752 waagent[1477]: 2024-02-09T19:04:43.062643Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service'
Feb 9 19:04:43.070452 waagent[1477]: 2024-02-09T19:04:43.063927Z INFO Daemon Daemon cloud-init is enabled: False
Feb 9 19:04:43.070452 waagent[1477]: 2024-02-09T19:04:43.064742Z INFO Daemon Daemon Copying ovf-env.xml
Feb 9 19:04:43.122744 waagent[1477]: 2024-02-09T19:04:43.121511Z INFO Daemon Daemon Successfully mounted dvd
Feb 9 19:04:43.156303 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully.
Feb 9 19:04:43.164431 waagent[1477]: 2024-02-09T19:04:43.164308Z INFO Daemon Daemon Detect protocol endpoint
Feb 9 19:04:43.167419 waagent[1477]: 2024-02-09T19:04:43.167343Z INFO Daemon Daemon Clean protocol and wireserver endpoint
Feb 9 19:04:43.170313 waagent[1477]: 2024-02-09T19:04:43.170244Z INFO Daemon Daemon WireServer endpoint is not found. Rerun dhcp handler
Feb 9 19:04:43.173291 waagent[1477]: 2024-02-09T19:04:43.173223Z INFO Daemon Daemon Test for route to 168.63.129.16
Feb 9 19:04:43.180334 waagent[1477]: 2024-02-09T19:04:43.174897Z INFO Daemon Daemon Route to 168.63.129.16 exists
Feb 9 19:04:43.180334 waagent[1477]: 2024-02-09T19:04:43.175641Z INFO Daemon Daemon Wire server endpoint:168.63.129.16
Feb 9 19:04:43.201185 waagent[1477]: 2024-02-09T19:04:43.201123Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05
Feb 9 19:04:43.204710 waagent[1477]: 2024-02-09T19:04:43.204665Z INFO Daemon Daemon Wire protocol version:2012-11-30
Feb 9 19:04:43.207352 waagent[1477]: 2024-02-09T19:04:43.207295Z INFO Daemon Daemon Server preferred version:2015-04-05
Feb 9 19:04:43.681515 waagent[1477]: 2024-02-09T19:04:43.681349Z INFO Daemon Daemon Initializing goal state during protocol detection
Feb 9 19:04:43.695448 waagent[1477]: 2024-02-09T19:04:43.695351Z INFO Daemon Daemon Forcing an update of the goal state..
Feb 9 19:04:43.698520 waagent[1477]: 2024-02-09T19:04:43.698438Z INFO Daemon Daemon Fetching goal state [incarnation 1]
Feb 9 19:04:43.770559 waagent[1477]: 2024-02-09T19:04:43.770421Z INFO Daemon Daemon Found private key matching thumbprint DBF3613946EB96367A4DB686CD9D5075624D0D5C
Feb 9 19:04:43.779584 waagent[1477]: 2024-02-09T19:04:43.771679Z INFO Daemon Daemon Certificate with thumbprint 44AD5A8DD81F4F0ABFD7C89CEB0D5AD44443D6BB has no matching private key.
Feb 9 19:04:43.779584 waagent[1477]: 2024-02-09T19:04:43.772513Z INFO Daemon Daemon Fetch goal state completed
Feb 9 19:04:43.795532 waagent[1477]: 2024-02-09T19:04:43.795458Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: f0600dd2-5077-4c49-a4b2-772f7a9d11e3 New eTag: 2093877912559131175]
Feb 9 19:04:43.802489 waagent[1477]: 2024-02-09T19:04:43.797118Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 19:04:43.808143 waagent[1477]: 2024-02-09T19:04:43.808081Z INFO Daemon Daemon Starting provisioning
Feb 9 19:04:43.814286 waagent[1477]: 2024-02-09T19:04:43.809236Z INFO Daemon Daemon Handle ovf-env.xml.
Feb 9 19:04:43.814286 waagent[1477]: 2024-02-09T19:04:43.810090Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-92fe98b439]
Feb 9 19:04:43.817728 waagent[1477]: 2024-02-09T19:04:43.817630Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-92fe98b439]
Feb 9 19:04:43.824865 waagent[1477]: 2024-02-09T19:04:43.819053Z INFO Daemon Daemon Examine /proc/net/route for primary interface
Feb 9 19:04:43.824865 waagent[1477]: 2024-02-09T19:04:43.820397Z INFO Daemon Daemon Primary interface is [eth0]
Feb 9 19:04:43.834002 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully.
Feb 9 19:04:43.834318 systemd[1]: Stopped systemd-networkd-wait-online.service.
Feb 9 19:04:43.834398 systemd[1]: Stopping systemd-networkd-wait-online.service...
Feb 9 19:04:43.834703 systemd[1]: Stopping systemd-networkd.service...
Feb 9 19:04:43.840827 systemd-networkd[1232]: eth0: DHCPv6 lease lost
Feb 9 19:04:43.842339 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 9 19:04:43.842673 systemd[1]: Stopped systemd-networkd.service.
Feb 9 19:04:43.845621 systemd[1]: Starting systemd-networkd.service...
Feb 9 19:04:43.882199 systemd-networkd[1548]: enP48596s1: Link UP
Feb 9 19:04:43.882209 systemd-networkd[1548]: enP48596s1: Gained carrier
Feb 9 19:04:43.883583 systemd-networkd[1548]: eth0: Link UP
Feb 9 19:04:43.883591 systemd-networkd[1548]: eth0: Gained carrier
Feb 9 19:04:43.884201 systemd-networkd[1548]: lo: Link UP
Feb 9 19:04:43.884211 systemd-networkd[1548]: lo: Gained carrier
Feb 9 19:04:43.884540 systemd-networkd[1548]: eth0: Gained IPv6LL
Feb 9 19:04:43.884859 systemd-networkd[1548]: Enumeration completed
Feb 9 19:04:43.884989 systemd[1]: Started systemd-networkd.service.
Feb 9 19:04:43.887830 waagent[1477]: 2024-02-09T19:04:43.886640Z INFO Daemon Daemon Create user account if not exists
Feb 9 19:04:43.888448 waagent[1477]: 2024-02-09T19:04:43.888378Z INFO Daemon Daemon User core already exists, skip useradd
Feb 9 19:04:43.889232 waagent[1477]: 2024-02-09T19:04:43.889181Z INFO Daemon Daemon Configure sudoer
Feb 9 19:04:43.890609 waagent[1477]: 2024-02-09T19:04:43.890553Z INFO Daemon Daemon Configure sshd
Feb 9 19:04:43.891555 waagent[1477]: 2024-02-09T19:04:43.891504Z INFO Daemon Daemon Deploy ssh public key.
Feb 9 19:04:43.899027 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 9 19:04:43.905027 systemd-networkd[1548]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 9 19:04:43.935868 systemd-networkd[1548]: eth0: DHCPv4 address 10.200.8.19/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 9 19:04:43.938547 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 9 19:05:14.200494 waagent[1477]: 2024-02-09T19:05:14.200374Z INFO Daemon Daemon Provisioning complete
Feb 9 19:05:14.219212 waagent[1477]: 2024-02-09T19:05:14.219122Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping
Feb 9 19:05:14.223616 waagent[1477]: 2024-02-09T19:05:14.223529Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions.
Feb 9 19:05:14.228758 waagent[1477]: 2024-02-09T19:05:14.228675Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent
Feb 9 19:05:14.497403 waagent[1558]: 2024-02-09T19:05:14.497206Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent
Feb 9 19:05:14.498201 waagent[1558]: 2024-02-09T19:05:14.498127Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:05:14.498354 waagent[1558]: 2024-02-09T19:05:14.498300Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:05:14.509928 waagent[1558]: 2024-02-09T19:05:14.509857Z INFO ExtHandler ExtHandler Forcing an update of the goal state..
Feb 9 19:05:14.510094 waagent[1558]: 2024-02-09T19:05:14.510041Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1]
Feb 9 19:05:14.574118 waagent[1558]: 2024-02-09T19:05:14.573981Z INFO ExtHandler ExtHandler Found private key matching thumbprint DBF3613946EB96367A4DB686CD9D5075624D0D5C
Feb 9 19:05:14.574349 waagent[1558]: 2024-02-09T19:05:14.574284Z INFO ExtHandler ExtHandler Certificate with thumbprint 44AD5A8DD81F4F0ABFD7C89CEB0D5AD44443D6BB has no matching private key.
Feb 9 19:05:14.574590 waagent[1558]: 2024-02-09T19:05:14.574539Z INFO ExtHandler ExtHandler Fetch goal state completed
Feb 9 19:05:14.596183 waagent[1558]: 2024-02-09T19:05:14.596109Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 5ce1f4fa-9023-40cc-a8f5-c12c0a5d1db5 New eTag: 2093877912559131175]
Feb 9 19:05:14.596826 waagent[1558]: 2024-02-09T19:05:14.596747Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob
Feb 9 19:05:14.640816 waagent[1558]: 2024-02-09T19:05:14.640658Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 9 19:05:14.652587 waagent[1558]: 2024-02-09T19:05:14.652493Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1558
Feb 9 19:05:14.656091 waagent[1558]: 2024-02-09T19:05:14.656020Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 9 19:05:14.657339 waagent[1558]: 2024-02-09T19:05:14.657277Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 9 19:05:14.681388 waagent[1558]: 2024-02-09T19:05:14.681326Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 9 19:05:14.681818 waagent[1558]: 2024-02-09T19:05:14.681729Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 9 19:05:14.690020 waagent[1558]: 2024-02-09T19:05:14.689966Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 9 19:05:14.690509 waagent[1558]: 2024-02-09T19:05:14.690448Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 9 19:05:14.691605 waagent[1558]: 2024-02-09T19:05:14.691539Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True]
Feb 9 19:05:14.692964 waagent[1558]: 2024-02-09T19:05:14.692905Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 9 19:05:14.693570 waagent[1558]: 2024-02-09T19:05:14.693512Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 9 19:05:14.694408 waagent[1558]: 2024-02-09T19:05:14.694354Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:05:14.694551 waagent[1558]: 2024-02-09T19:05:14.694492Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 9 19:05:14.694814 waagent[1558]: 2024-02-09T19:05:14.694744Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:05:14.695004 waagent[1558]: 2024-02-09T19:05:14.694952Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 9 19:05:14.695656 waagent[1558]: 2024-02-09T19:05:14.695596Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 9 19:05:14.695752 waagent[1558]: 2024-02-09T19:05:14.695691Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:05:14.696293 waagent[1558]: 2024-02-09T19:05:14.696241Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:05:14.696919 waagent[1558]: 2024-02-09T19:05:14.696859Z INFO EnvHandler ExtHandler Configure routes
Feb 9 19:05:14.697502 waagent[1558]: 2024-02-09T19:05:14.697443Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 9 19:05:14.697502 waagent[1558]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 9 19:05:14.697502 waagent[1558]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 9 19:05:14.697502 waagent[1558]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 9 19:05:14.697502 waagent[1558]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:05:14.697502 waagent[1558]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:05:14.697502 waagent[1558]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:05:14.698022 waagent[1558]: 2024-02-09T19:05:14.697962Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 9 19:05:14.698137 waagent[1558]: 2024-02-09T19:05:14.698066Z INFO EnvHandler ExtHandler Gateway:None
Feb 9 19:05:14.700407 waagent[1558]: 2024-02-09T19:05:14.700179Z INFO EnvHandler ExtHandler Routes:None
Feb 9 19:05:14.702022 waagent[1558]: 2024-02-09T19:05:14.701955Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
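The routing table the MonitorHandler dumps above is the raw contents of /proc/net/route, where addresses and masks are little-endian hexadecimal. A minimal sketch of decoding those fields into dotted-quad form (the helper name `decode_hex_ip` is ours, not part of waagent):

```python
# Decode little-endian hex addresses as found in /proc/net/route
# (and in the MonitorHandler dump above) into dotted-quad strings.
import socket
import struct

def decode_hex_ip(hex_addr: str) -> str:
    """Convert a little-endian hex field from /proc/net/route to an IPv4 string."""
    return socket.inet_ntoa(struct.pack("<L", int(hex_addr, 16)))

# Values taken from the logged table:
print(decode_hex_ip("0108C80A"))  # default-route gateway -> 10.200.8.1
print(decode_hex_ip("0008C80A"))  # on-link destination   -> 10.200.8.0
print(decode_hex_ip("00FFFFFF"))  # its netmask           -> 255.255.255.0
```

The decoded values match the DHCPv4 lease logged earlier (address 10.200.8.19/24, gateway 10.200.8.1).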
Feb 9 19:05:14.705731 waagent[1558]: 2024-02-09T19:05:14.703968Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 9 19:05:14.718892 waagent[1558]: 2024-02-09T19:05:14.718240Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod)
Feb 9 19:05:14.719125 waagent[1558]: 2024-02-09T19:05:14.719059Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 9 19:05:14.719990 waagent[1558]: 2024-02-09T19:05:14.719933Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. Error: 'NoneType' object has no attribute 'getheaders'
Feb 9 19:05:14.722642 waagent[1558]: 2024-02-09T19:05:14.722577Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1548'
Feb 9 19:05:14.742013 waagent[1558]: 2024-02-09T19:05:14.741887Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 9 19:05:14.742013 waagent[1558]: Executing ['ip', '-a', '-o', 'link']:
Feb 9 19:05:14.742013 waagent[1558]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 9 19:05:14.742013 waagent[1558]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a1:87:d9 brd ff:ff:ff:ff:ff:ff
Feb 9 19:05:14.742013 waagent[1558]: 3: enP48596s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a1:87:d9 brd ff:ff:ff:ff:ff:ff\ altname enP48596p0s2
Feb 9 19:05:14.742013 waagent[1558]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 9 19:05:14.742013 waagent[1558]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 9 19:05:14.742013 waagent[1558]: 2: eth0 inet 10.200.8.19/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 9 19:05:14.742013 waagent[1558]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 9 19:05:14.742013 waagent[1558]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 9 19:05:14.742013 waagent[1558]: 2: eth0 inet6 fe80::222:48ff:fea1:87d9/64 scope link \ valid_lft forever preferred_lft forever
Feb 9 19:05:14.774734 waagent[1558]: 2024-02-09T19:05:14.774593Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel.
Feb 9 19:05:14.857487 waagent[1558]: 2024-02-09T19:05:14.857363Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules
Feb 9 19:05:14.860621 waagent[1558]: 2024-02-09T19:05:14.860516Z INFO EnvHandler ExtHandler Firewall rules:
Feb 9 19:05:14.860621 waagent[1558]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:05:14.860621 waagent[1558]: pkts bytes target prot opt in out source destination
Feb 9 19:05:14.860621 waagent[1558]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:05:14.860621 waagent[1558]: pkts bytes target prot opt in out source destination
Feb 9 19:05:14.860621 waagent[1558]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:05:14.860621 waagent[1558]: pkts bytes target prot opt in out source destination
Feb 9 19:05:14.860621 waagent[1558]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 9 19:05:14.860621 waagent[1558]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 9 19:05:14.862445 waagent[1558]: 2024-02-09T19:05:14.862388Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 9 19:05:15.100826 waagent[1558]: 2024-02-09T19:05:15.100670Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting
Feb 9 19:05:15.232493 waagent[1477]: 2024-02-09T19:05:15.232290Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running
Feb 9 19:05:15.238003 waagent[1477]: 2024-02-09T19:05:15.237939Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent
Feb 9 19:05:16.214450 waagent[1596]: 2024-02-09T19:05:16.214332Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1)
Feb 9 19:05:16.215200 waagent[1596]: 2024-02-09T19:05:16.215131Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2
Feb 9 19:05:16.215346 waagent[1596]: 2024-02-09T19:05:16.215294Z INFO ExtHandler ExtHandler Python: 3.9.16
Feb 9 19:05:16.225082 waagent[1596]: 2024-02-09T19:05:16.224983Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1;
Feb 9 19:05:16.225467 waagent[1596]: 2024-02-09T19:05:16.225409Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:05:16.225632 waagent[1596]: 2024-02-09T19:05:16.225579Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:05:16.237209 waagent[1596]: 2024-02-09T19:05:16.237137Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1]
Feb 9 19:05:16.245680 waagent[1596]: 2024-02-09T19:05:16.245618Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143
Feb 9 19:05:16.246591 waagent[1596]: 2024-02-09T19:05:16.246531Z INFO ExtHandler
Feb 9 19:05:16.246740 waagent[1596]: 2024-02-09T19:05:16.246691Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 607c1fab-866c-4415-be13-6ed91bcb8711 eTag: 2093877912559131175 source: Fabric]
Feb 9 19:05:16.247438 waagent[1596]: 2024-02-09T19:05:16.247381Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them.
Feb 9 19:05:16.248518 waagent[1596]: 2024-02-09T19:05:16.248456Z INFO ExtHandler
Feb 9 19:05:16.248651 waagent[1596]: 2024-02-09T19:05:16.248599Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Feb 9 19:05:16.255387 waagent[1596]: 2024-02-09T19:05:16.255335Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Feb 9 19:05:16.255830 waagent[1596]: 2024-02-09T19:05:16.255766Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 9 19:05:16.278189 waagent[1596]: 2024-02-09T19:05:16.278119Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Feb 9 19:05:16.345027 waagent[1596]: 2024-02-09T19:05:16.344888Z INFO ExtHandler Downloaded certificate {'thumbprint': 'DBF3613946EB96367A4DB686CD9D5075624D0D5C', 'hasPrivateKey': True}
Feb 9 19:05:16.346078 waagent[1596]: 2024-02-09T19:05:16.346008Z INFO ExtHandler Downloaded certificate {'thumbprint': '44AD5A8DD81F4F0ABFD7C89CEB0D5AD44443D6BB', 'hasPrivateKey': False}
Feb 9 19:05:16.347068 waagent[1596]: 2024-02-09T19:05:16.347009Z INFO ExtHandler Fetch goal state completed
Feb 9 19:05:16.370750 waagent[1596]: 2024-02-09T19:05:16.370662Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1596
Feb 9 19:05:16.374117 waagent[1596]: 2024-02-09T19:05:16.374046Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 9 19:05:16.375551 waagent[1596]: 2024-02-09T19:05:16.375495Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 9 19:05:16.380961 waagent[1596]: 2024-02-09T19:05:16.380905Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 9 19:05:16.381345 waagent[1596]: 2024-02-09T19:05:16.381288Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 9 19:05:16.389746 waagent[1596]: 2024-02-09T19:05:16.389690Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 9 19:05:16.390234 waagent[1596]: 2024-02-09T19:05:16.390175Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 9 19:05:16.402868 waagent[1596]: 2024-02-09T19:05:16.402750Z INFO ExtHandler ExtHandler Firewall rule to allow DNS TCP request to wireserver for a non root user unavailable. Setting it now.
Feb 9 19:05:16.405635 waagent[1596]: 2024-02-09T19:05:16.405534Z INFO ExtHandler ExtHandler Succesfully added firewall rule to allow non root users to do a DNS TCP request to wireserver
Feb 9 19:05:16.410381 waagent[1596]: 2024-02-09T19:05:16.410319Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb 9 19:05:16.411837 waagent[1596]: 2024-02-09T19:05:16.411763Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 9 19:05:16.412646 waagent[1596]: 2024-02-09T19:05:16.412590Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:05:16.412828 waagent[1596]: 2024-02-09T19:05:16.412754Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:05:16.413264 waagent[1596]: 2024-02-09T19:05:16.413210Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 9 19:05:16.413837 waagent[1596]: 2024-02-09T19:05:16.413758Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 9 19:05:16.414146 waagent[1596]: 2024-02-09T19:05:16.414093Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 9 19:05:16.414827 waagent[1596]: 2024-02-09T19:05:16.414749Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 9 19:05:16.415107 waagent[1596]: 2024-02-09T19:05:16.415055Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 9 19:05:16.415192 waagent[1596]: 2024-02-09T19:05:16.415136Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 9 19:05:16.415300 waagent[1596]: 2024-02-09T19:05:16.415253Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 9 19:05:16.415300 waagent[1596]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 9 19:05:16.415300 waagent[1596]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 9 19:05:16.415300 waagent[1596]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 9 19:05:16.415300 waagent[1596]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:05:16.415300 waagent[1596]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:05:16.415300 waagent[1596]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 9 19:05:16.416029 waagent[1596]: 2024-02-09T19:05:16.415972Z INFO EnvHandler ExtHandler Configure routes
Feb 9 19:05:16.418993 waagent[1596]: 2024-02-09T19:05:16.418880Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 9 19:05:16.419125 waagent[1596]: 2024-02-09T19:05:16.419047Z INFO EnvHandler ExtHandler Gateway:None
Feb 9 19:05:16.419238 waagent[1596]: 2024-02-09T19:05:16.419176Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 9 19:05:16.421011 waagent[1596]: 2024-02-09T19:05:16.420953Z INFO EnvHandler ExtHandler Routes:None
Feb 9 19:05:16.422611 waagent[1596]: 2024-02-09T19:05:16.422558Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 9 19:05:16.447868 waagent[1596]: 2024-02-09T19:05:16.447727Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod)
Feb 9 19:05:16.449713 waagent[1596]: 2024-02-09T19:05:16.449652Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 9 19:05:16.449713 waagent[1596]: Executing ['ip', '-a', '-o', 'link']:
Feb 9 19:05:16.449713 waagent[1596]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 9 19:05:16.449713 waagent[1596]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a1:87:d9 brd ff:ff:ff:ff:ff:ff
Feb 9 19:05:16.449713 waagent[1596]: 3: enP48596s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:a1:87:d9 brd ff:ff:ff:ff:ff:ff\ altname enP48596p0s2
Feb 9 19:05:16.449713 waagent[1596]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 9 19:05:16.449713 waagent[1596]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 9 19:05:16.449713 waagent[1596]: 2: eth0 inet 10.200.8.19/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 9 19:05:16.449713 waagent[1596]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 9 19:05:16.449713 waagent[1596]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 9 19:05:16.449713 waagent[1596]: 2: eth0 inet6 fe80::222:48ff:fea1:87d9/64 scope link \ valid_lft forever preferred_lft forever
Feb 9 19:05:16.451148 waagent[1596]: 2024-02-09T19:05:16.451082Z INFO ExtHandler ExtHandler Downloading manifest
Feb 9 19:05:16.505885 waagent[1596]: 2024-02-09T19:05:16.505733Z INFO ExtHandler ExtHandler
Feb 9 19:05:16.506050 waagent[1596]: 2024-02-09T19:05:16.505957Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 5cfaea2a-c2fd-46f5-b4e4-7f98f4ba1554 correlation a3cc4702-3ef6-48ca-85a3-5925f480769e created: 2024-02-09T19:04:00.718094Z]
Feb 9 19:05:16.507140 waagent[1596]: 2024-02-09T19:05:16.507064Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb 9 19:05:16.515924 waagent[1596]: 2024-02-09T19:05:16.515853Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 10 ms]
Feb 9 19:05:16.548356 waagent[1596]: 2024-02-09T19:05:16.548272Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb 9 19:05:16.548356 waagent[1596]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:05:16.548356 waagent[1596]: pkts bytes target prot opt in out source destination
Feb 9 19:05:16.548356 waagent[1596]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:05:16.548356 waagent[1596]: pkts bytes target prot opt in out source destination
Feb 9 19:05:16.548356 waagent[1596]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 9 19:05:16.548356 waagent[1596]: pkts bytes target prot opt in out source destination
Feb 9 19:05:16.548356 waagent[1596]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 9 19:05:16.548356 waagent[1596]: 175 22283 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 9 19:05:16.548356 waagent[1596]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 9 19:05:16.551061 waagent[1596]: 2024-02-09T19:05:16.550997Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Feb 9 19:05:16.562060 waagent[1596]: 2024-02-09T19:05:16.561985Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 03A03CEF-4B51-4CB5-A730-3EA66389EFDF;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1]
Feb 9 19:05:22.893653 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Feb 9 19:05:24.422850 update_engine[1377]: I0209 19:05:24.422745 1377 update_attempter.cc:509] Updating boot flags...
Feb 9 19:05:37.785727 systemd[1]: Created slice system-sshd.slice.
Feb 9 19:05:37.787767 systemd[1]: Started sshd@0-10.200.8.19:22-10.200.12.6:59096.service.
Feb 9 19:05:38.463680 sshd[1678]: Accepted publickey for core from 10.200.12.6 port 59096 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:05:38.465389 sshd[1678]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:05:38.469245 systemd-logind[1375]: New session 3 of user core.
Feb 9 19:05:38.470500 systemd[1]: Started session-3.scope.
Feb 9 19:05:39.002956 systemd[1]: Started sshd@1-10.200.8.19:22-10.200.12.6:59098.service.
Feb 9 19:05:39.629159 sshd[1683]: Accepted publickey for core from 10.200.12.6 port 59098 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:05:39.630913 sshd[1683]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:05:39.636956 systemd[1]: Started session-4.scope.
Feb 9 19:05:39.637267 systemd-logind[1375]: New session 4 of user core.
Feb 9 19:05:40.072431 sshd[1683]: pam_unix(sshd:session): session closed for user core
Feb 9 19:05:40.076104 systemd[1]: sshd@1-10.200.8.19:22-10.200.12.6:59098.service: Deactivated successfully.
Feb 9 19:05:40.077610 systemd-logind[1375]: Session 4 logged out. Waiting for processes to exit.
Feb 9 19:05:40.077737 systemd[1]: session-4.scope: Deactivated successfully.
Feb 9 19:05:40.079270 systemd-logind[1375]: Removed session 4.
Feb 9 19:05:40.176489 systemd[1]: Started sshd@2-10.200.8.19:22-10.200.12.6:59106.service.
Feb 9 19:05:40.792909 sshd[1690]: Accepted publickey for core from 10.200.12.6 port 59106 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:05:40.794568 sshd[1690]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:05:40.799629 systemd[1]: Started session-5.scope.
Feb 9 19:05:40.799930 systemd-logind[1375]: New session 5 of user core.
Feb 9 19:05:41.226019 sshd[1690]: pam_unix(sshd:session): session closed for user core
Feb 9 19:05:41.229514 systemd[1]: sshd@2-10.200.8.19:22-10.200.12.6:59106.service: Deactivated successfully.
Feb 9 19:05:41.231018 systemd-logind[1375]: Session 5 logged out. Waiting for processes to exit.
Feb 9 19:05:41.231150 systemd[1]: session-5.scope: Deactivated successfully.
Feb 9 19:05:41.233142 systemd-logind[1375]: Removed session 5.
Feb 9 19:05:41.330160 systemd[1]: Started sshd@3-10.200.8.19:22-10.200.12.6:59114.service.
Feb 9 19:05:41.952946 sshd[1697]: Accepted publickey for core from 10.200.12.6 port 59114 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:05:41.954650 sshd[1697]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:05:41.960695 systemd[1]: Started session-6.scope.
Feb 9 19:05:41.960971 systemd-logind[1375]: New session 6 of user core.
Feb 9 19:05:42.392818 sshd[1697]: pam_unix(sshd:session): session closed for user core
Feb 9 19:05:42.396358 systemd[1]: sshd@3-10.200.8.19:22-10.200.12.6:59114.service: Deactivated successfully.
Feb 9 19:05:42.398899 systemd[1]: session-6.scope: Deactivated successfully.
Feb 9 19:05:42.399992 systemd-logind[1375]: Session 6 logged out. Waiting for processes to exit.
Feb 9 19:05:42.401240 systemd-logind[1375]: Removed session 6.
Feb 9 19:05:42.495916 systemd[1]: Started sshd@4-10.200.8.19:22-10.200.12.6:59120.service.
Feb 9 19:05:43.118627 sshd[1704]: Accepted publickey for core from 10.200.12.6 port 59120 ssh2: RSA SHA256:YgPjskTM3AVkF+tJg78MDL6wtNgbOF8Unod2GMujbXg
Feb 9 19:05:43.120334 sshd[1704]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 9 19:05:43.125867 systemd[1]: Started session-7.scope.
Feb 9 19:05:43.126110 systemd-logind[1375]: New session 7 of user core.
Feb 9 19:05:43.513009 sudo[1708]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 9 19:05:43.513276 sudo[1708]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 9 19:05:44.085576 systemd[1]: Reloading.
Feb 9 19:05:44.167551 /usr/lib/systemd/system-generators/torcx-generator[1738]: time="2024-02-09T19:05:44Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:05:44.167586 /usr/lib/systemd/system-generators/torcx-generator[1738]: time="2024-02-09T19:05:44Z" level=info msg="torcx already run"
Feb 9 19:05:44.261462 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:05:44.261484 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:05:44.279757 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:05:44.359334 systemd[1]: Started kubelet.service.
Feb 9 19:05:44.379629 systemd[1]: Starting coreos-metadata.service...
Feb 9 19:05:44.443972 coreos-metadata[1818]: Feb 09 19:05:44.443 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 9 19:05:44.448972 kubelet[1805]: E0209 19:05:44.448845 1805 run.go:74] "command failed" err="failed to validate kubelet flags: the container runtime endpoint address was not specified or empty, use --container-runtime-endpoint to set"
Feb 9 19:05:44.449931 coreos-metadata[1818]: Feb 09 19:05:44.449 INFO Fetch successful
Feb 9 19:05:44.450165 coreos-metadata[1818]: Feb 09 19:05:44.450 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Feb 9 19:05:44.451039 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 9 19:05:44.451204 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 9 19:05:44.452718 coreos-metadata[1818]: Feb 09 19:05:44.452 INFO Fetch successful
Feb 9 19:05:44.452805 coreos-metadata[1818]: Feb 09 19:05:44.452 INFO Fetching http://168.63.129.16/machine/a0507b52-88cd-4a59-96c8-e68c8b98d02a/4b63d362%2D1394%2D4072%2Db19f%2D11686f123d8a.%5Fci%2D3510.3.2%2Da%2D92fe98b439?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Feb 9 19:05:44.454185 coreos-metadata[1818]: Feb 09 19:05:44.454 INFO Fetch successful
Feb 9 19:05:44.488649 coreos-metadata[1818]: Feb 09 19:05:44.488 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Feb 9 19:05:44.503598 coreos-metadata[1818]: Feb 09 19:05:44.503 INFO Fetch successful
Feb 9 19:05:44.515646 systemd[1]: Finished coreos-metadata.service.
Feb 9 19:05:45.511215 systemd[1]: Stopped kubelet.service.
Feb 9 19:05:45.524762 systemd[1]: Reloading.
Feb 9 19:05:45.602473 /usr/lib/systemd/system-generators/torcx-generator[1879]: time="2024-02-09T19:05:45Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 9 19:05:45.604618 /usr/lib/systemd/system-generators/torcx-generator[1879]: time="2024-02-09T19:05:45Z" level=info msg="torcx already run"
Feb 9 19:05:45.697257 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 9 19:05:45.697276 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 9 19:05:45.715150 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 9 19:05:45.801316 systemd[1]: Started kubelet.service.
Feb 9 19:05:45.853747 kubelet[1947]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:05:45.853747 kubelet[1947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:05:45.854278 kubelet[1947]: I0209 19:05:45.853809 1947 server.go:198] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 9 19:05:45.855252 kubelet[1947]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.27. Image garbage collector will get sandbox image information from CRI.
Feb 9 19:05:45.855252 kubelet[1947]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 9 19:05:46.390940 kubelet[1947]: I0209 19:05:46.390904 1947 server.go:412] "Kubelet version" kubeletVersion="v1.26.5"
Feb 9 19:05:46.390940 kubelet[1947]: I0209 19:05:46.390930 1947 server.go:414] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 9 19:05:46.391248 kubelet[1947]: I0209 19:05:46.391227 1947 server.go:836] "Client rotation is on, will bootstrap in background"
Feb 9 19:05:46.393404 kubelet[1947]: I0209 19:05:46.393379 1947 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 9 19:05:46.396479 kubelet[1947]: I0209 19:05:46.396453 1947 server.go:659] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 9 19:05:46.396860 kubelet[1947]: I0209 19:05:46.396841 1947 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 9 19:05:46.396939 kubelet[1947]: I0209 19:05:46.396931 1947 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: KubeletOOMScoreAdj:-999 ContainerRuntime: CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:imagefs.available Operator:LessThan Value:{Quantity: Percentage:0.15} GracePeriod:0s MinReclaim:} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:} {Signal:nodefs.available Operator:LessThan Value:{Quantity: Percentage:0.1} GracePeriod:0s MinReclaim:} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity: Percentage:0.05} GracePeriod:0s MinReclaim:}]} QOSReserved:map[] CPUManagerPolicy:none CPUManagerPolicyOptions:map[] ExperimentalTopologyManagerScope:container CPUManagerReconcilePeriod:10s ExperimentalMemoryManagerPolicy:None ExperimentalMemoryManagerReservedMemory:[] ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none ExperimentalTopologyManagerPolicyOptions:map[]}
Feb 9 19:05:46.397074 kubelet[1947]: I0209 19:05:46.396958 1947 topology_manager.go:134] "Creating topology manager with policy per scope" topologyPolicyName="none" topologyScopeName="container"
Feb 9 19:05:46.397074 kubelet[1947]: I0209 19:05:46.396973 1947 container_manager_linux.go:308] "Creating device plugin manager"
Feb 9 19:05:46.397157 kubelet[1947]: I0209 19:05:46.397076 1947 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:05:46.400154 kubelet[1947]: I0209 19:05:46.400137 1947 kubelet.go:398] "Attempting to sync node with API server"
Feb 9 19:05:46.400274 kubelet[1947]: I0209 19:05:46.400264 1947 kubelet.go:286] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 9 19:05:46.400366 kubelet[1947]: I0209 19:05:46.400356 1947 kubelet.go:297] "Adding apiserver pod source"
Feb 9 19:05:46.400504 kubelet[1947]: I0209 19:05:46.400492 1947 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 9 19:05:46.400890 kubelet[1947]: E0209 19:05:46.400873 1947 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:46.400969 kubelet[1947]: E0209 19:05:46.400930 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:46.401298 kubelet[1947]: I0209 19:05:46.401269 1947 kuberuntime_manager.go:244] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 9 19:05:46.401615 kubelet[1947]: W0209 19:05:46.401600 1947 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 9 19:05:46.402076 kubelet[1947]: I0209 19:05:46.402054 1947 server.go:1186] "Started kubelet"
Feb 9 19:05:46.402334 kubelet[1947]: I0209 19:05:46.402321 1947 server.go:161] "Starting to listen" address="0.0.0.0" port=10250
Feb 9 19:05:46.403578 kubelet[1947]: I0209 19:05:46.403564 1947 server.go:451] "Adding debug handlers to kubelet server"
Feb 9 19:05:46.405140 kubelet[1947]: E0209 19:05:46.405125 1947 cri_stats_provider.go:455] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 9 19:05:46.405237 kubelet[1947]: E0209 19:05:46.405229 1947 kubelet.go:1386] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 9 19:05:46.409139 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 9 19:05:46.409526 kubelet[1947]: I0209 19:05:46.409509 1947 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 9 19:05:46.414010 kubelet[1947]: E0209 19:05:46.413995 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:05:46.414141 kubelet[1947]: I0209 19:05:46.414130 1947 volume_manager.go:293] "Starting Kubelet Volume Manager"
Feb 9 19:05:46.414283 kubelet[1947]: I0209 19:05:46.414270 1947 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 9 19:05:46.414869 kubelet[1947]: W0209 19:05:46.414851 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:05:46.415007 kubelet[1947]: E0209 19:05:46.414994 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:05:46.415239 kubelet[1947]: E0209 19:05:46.415116 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f694d5814", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 402027540, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 402027540, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:46.416301 kubelet[1947]: W0209 19:05:46.416275 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:05:46.419257 kubelet[1947]: E0209 19:05:46.419239 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:05:46.419362 kubelet[1947]: W0209 19:05:46.417428 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:05:46.419442 kubelet[1947]: E0209 19:05:46.419431 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:05:46.419519 kubelet[1947]: E0209 19:05:46.418080 1947 controller.go:146] failed to ensure lease exists, will retry in 200ms, error: leases.coordination.k8s.io "10.200.8.19" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:05:46.420925 kubelet[1947]: E0209 19:05:46.420841 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f697e0ab8", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 405219000, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 405219000, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:46.460892 kubelet[1947]: I0209 19:05:46.460856 1947 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 9 19:05:46.460892 kubelet[1947]: I0209 19:05:46.460885 1947 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 9 19:05:46.461076 kubelet[1947]: I0209 19:05:46.460903 1947 state_mem.go:36] "Initialized new in-memory state store"
Feb 9 19:05:46.465097 kubelet[1947]: I0209 19:05:46.465072 1947 policy_none.go:49] "None policy: Start"
Feb 9 19:05:46.465449 kubelet[1947]: E0209 19:05:46.465205 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc487f7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460170231, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460170231, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:46.465988 kubelet[1947]: I0209 19:05:46.465969 1947 memory_manager.go:169] "Starting memorymanager" policy="None"
Feb 9 19:05:46.466060 kubelet[1947]: I0209 19:05:46.466002 1947 state_mem.go:35] "Initializing new in-memory state store"
Feb 9 19:05:46.472858 kubelet[1947]: E0209 19:05:46.472805 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc49dd7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460175831, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460175831, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:46.475918 kubelet[1947]: I0209 19:05:46.475902 1947 manager.go:455] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Feb 9 19:05:46.476216 kubelet[1947]: I0209 19:05:46.476203 1947 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Feb 9 19:05:46.481557 kubelet[1947]: E0209 19:05:46.481359 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc4abe7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460179431, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460179431, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:46.481715 kubelet[1947]: E0209 19:05:46.481699 1947 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.19\" not found"
Feb 9 19:05:46.482204 kubelet[1947]: E0209 19:05:46.482141 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6dd3ffcd", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 477961165, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 477961165, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:46.515006 kubelet[1947]: I0209 19:05:46.514984 1947 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.19"
Feb 9 19:05:46.516681 kubelet[1947]: E0209 19:05:46.516664 1947 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.19"
Feb 9 19:05:46.516926 kubelet[1947]: E0209 19:05:46.516874 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc487f7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460170231, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 514938358, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc487f7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:46.517996 kubelet[1947]: E0209 19:05:46.517931 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc49dd7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460175831, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 514949458, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc49dd7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:05:46.519177 kubelet[1947]: E0209 19:05:46.519119 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc4abe7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460179431, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 514953058, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc4abe7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:46.527278 kubelet[1947]: I0209 19:05:46.527250 1947 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv4
Feb 9 19:05:46.551501 kubelet[1947]: I0209 19:05:46.551472 1947 kubelet_network_linux.go:63] "Initialized iptables rules." protocol=IPv6
Feb 9 19:05:46.551501 kubelet[1947]: I0209 19:05:46.551496 1947 status_manager.go:176] "Starting to sync pod status with apiserver"
Feb 9 19:05:46.551501 kubelet[1947]: I0209 19:05:46.551521 1947 kubelet.go:2113] "Starting kubelet main sync loop"
Feb 9 19:05:46.551758 kubelet[1947]: E0209 19:05:46.551582 1947 kubelet.go:2137] "Skipping pod synchronization" err="PLEG is not healthy: pleg has yet to be successful"
Feb 9 19:05:46.553797 kubelet[1947]: W0209 19:05:46.553756 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:05:46.553964 kubelet[1947]: E0209 19:05:46.553951 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:05:46.621509 kubelet[1947]: E0209 19:05:46.621463 1947 controller.go:146] failed to ensure lease exists, will retry in 400ms, error: leases.coordination.k8s.io "10.200.8.19" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:05:46.718158 kubelet[1947]: I0209 19:05:46.718022 1947 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.19"
Feb 9 19:05:46.719846 kubelet[1947]: E0209 19:05:46.719818 1947 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.19"
Feb 9 19:05:46.720137 kubelet[1947]: E0209 19:05:46.720033 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc487f7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460170231, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 717970866, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc487f7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:46.721309 kubelet[1947]: E0209 19:05:46.721236 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc49dd7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460175831, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 717984466, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc49dd7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:05:46.804227 kubelet[1947]: E0209 19:05:46.804036 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc4abe7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460179431, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 717990066, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc4abe7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:05:47.023679 kubelet[1947]: E0209 19:05:47.023551 1947 controller.go:146] failed to ensure lease exists, will retry in 800ms, error: leases.coordination.k8s.io "10.200.8.19" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:05:47.121184 kubelet[1947]: I0209 19:05:47.121137 1947 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.19"
Feb 9 19:05:47.122673 kubelet[1947]: E0209 19:05:47.122637 1947 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.19"
Feb 9 19:05:47.122823 kubelet[1947]: E0209 19:05:47.122649 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc487f7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460170231, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 47, 121083467, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc487f7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:47.204632 kubelet[1947]: E0209 19:05:47.204522 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc49dd7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460175831, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 47, 121096867, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc49dd7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:47.276590 kubelet[1947]: W0209 19:05:47.276462 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:05:47.276590 kubelet[1947]: E0209 19:05:47.276503 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:05:47.335135 kubelet[1947]: W0209 19:05:47.335093 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:05:47.335135 kubelet[1947]: E0209 19:05:47.335137 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:05:47.401706 kubelet[1947]: E0209 19:05:47.401641 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:47.404102 kubelet[1947]: E0209 19:05:47.404000 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc4abe7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460179431, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 47, 121103367, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc4abe7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:47.825149 kubelet[1947]: E0209 19:05:47.825104 1947 controller.go:146] failed to ensure lease exists, will retry in 1.6s, error: leases.coordination.k8s.io "10.200.8.19" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:05:47.923629 kubelet[1947]: I0209 19:05:47.923579 1947 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.19"
Feb 9 19:05:47.925140 kubelet[1947]: E0209 19:05:47.925110 1947 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.19"
Feb 9 19:05:47.925301 kubelet[1947]: E0209 19:05:47.925101 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc487f7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460170231, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 47, 923514205, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc487f7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:47.926235 kubelet[1947]: E0209 19:05:47.926161 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc49dd7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460175831, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 47, 923520905, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc49dd7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:05:47.968756 kubelet[1947]: W0209 19:05:47.968715 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:05:47.968756 kubelet[1947]: E0209 19:05:47.968756 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:05:47.989036 kubelet[1947]: W0209 19:05:47.988999 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:05:47.989036 kubelet[1947]: E0209 19:05:47.989038 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:05:48.004077 kubelet[1947]: E0209 19:05:48.003989 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc4abe7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460179431, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 47, 923548306, time.Local), Count:5, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc4abe7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:48.402584 kubelet[1947]: E0209 19:05:48.402527 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:49.297080 kubelet[1947]: W0209 19:05:49.297035 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:05:49.297080 kubelet[1947]: E0209 19:05:49.297080 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:05:49.403437 kubelet[1947]: E0209 19:05:49.403360 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:49.427060 kubelet[1947]: E0209 19:05:49.427018 1947 controller.go:146] failed to ensure lease exists, will retry in 3.2s, error: leases.coordination.k8s.io "10.200.8.19" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:05:49.526843 kubelet[1947]: I0209 19:05:49.526809 1947 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.19"
Feb 9 19:05:49.528125 kubelet[1947]: E0209 19:05:49.528091 1947 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.19"
Feb 9 19:05:49.528336 kubelet[1947]: E0209 19:05:49.528084 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc487f7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460170231, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 49, 526728446, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc487f7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:49.529242 kubelet[1947]: E0209 19:05:49.529175 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc49dd7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460175831, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 49, 526750146, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc49dd7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:05:49.530247 kubelet[1947]: E0209 19:05:49.530097 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc4abe7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460179431, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 49, 526755446, time.Local), Count:6, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc4abe7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 9 19:05:50.101523 kubelet[1947]: W0209 19:05:50.101477 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:05:50.101523 kubelet[1947]: E0209 19:05:50.101526 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:05:50.230791 kubelet[1947]: W0209 19:05:50.230730 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:05:50.230791 kubelet[1947]: E0209 19:05:50.230787 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:05:50.263224 kubelet[1947]: W0209 19:05:50.263178 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:05:50.263224 kubelet[1947]: E0209 19:05:50.263225 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:05:50.404340 kubelet[1947]: E0209 19:05:50.404276 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:51.404855 kubelet[1947]: E0209 19:05:51.404790 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:52.405106 kubelet[1947]: E0209 19:05:52.405042 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:52.628794 kubelet[1947]: E0209 19:05:52.628747 1947 controller.go:146] failed to ensure lease exists, will retry in 6.4s, error: leases.coordination.k8s.io "10.200.8.19" is forbidden: User "system:anonymous" cannot get resource "leases" in API group "coordination.k8s.io" in the namespace "kube-node-lease"
Feb 9 19:05:52.730532 kubelet[1947]: I0209 19:05:52.730209 1947 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.19"
Feb 9 19:05:52.731384 kubelet[1947]: E0209 19:05:52.731358 1947 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.19"
Feb 9 19:05:52.731750 kubelet[1947]: E0209 19:05:52.731660 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc487f7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.19 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460170231, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 52, 730127279, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc487f7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:52.732566 kubelet[1947]: E0209 19:05:52.732491 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc49dd7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.19 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460175831, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 52, 730139680, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc49dd7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:52.733382 kubelet[1947]: E0209 19:05:52.733311 1947 event.go:267] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.19.17b2473f6cc4abe7", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.19", UID:"10.200.8.19", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.19 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.19"}, FirstTimestamp:time.Date(2024, time.February, 9, 19, 5, 46, 460179431, time.Local), LastTimestamp:time.Date(2024, time.February, 9, 19, 5, 52, 730171680, time.Local), Count:7, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"", ReportingInstance:""}': 'events "10.200.8.19.17b2473f6cc4abe7" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 9 19:05:53.035716 kubelet[1947]: W0209 19:05:53.035583 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:05:53.035716 kubelet[1947]: E0209 19:05:53.035628 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 9 19:05:53.405951 kubelet[1947]: E0209 19:05:53.405880 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:54.379220 kubelet[1947]: W0209 19:05:54.379178 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:05:54.379220 kubelet[1947]: E0209 19:05:54.379223 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 9 19:05:54.406641 kubelet[1947]: E0209 19:05:54.406585 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:54.985470 kubelet[1947]: W0209 19:05:54.985420 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:05:54.985470 kubelet[1947]: E0209 19:05:54.985468 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 9 19:05:55.269376 kubelet[1947]: W0209 19:05:55.269249 1947 reflector.go:424] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:05:55.269376 kubelet[1947]: E0209 19:05:55.269290 1947 reflector.go:140] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.19" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 9 19:05:55.407309 kubelet[1947]: E0209 19:05:55.407246 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:56.393516 kubelet[1947]: I0209 19:05:56.393452 1947 transport.go:135] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 9 19:05:56.407941 kubelet[1947]: E0209 19:05:56.407821 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:56.482054 kubelet[1947]: E0209 19:05:56.482020 1947 eviction_manager.go:261] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.19\" not found"
Feb 9 19:05:56.757725 kubelet[1947]: E0209 19:05:56.757594 1947 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.19" not found
Feb 9 19:05:57.408060 kubelet[1947]: E0209 19:05:57.407995 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:57.819255 kubelet[1947]: E0209 19:05:57.819122 1947 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.19" not found
Feb 9 19:05:58.408333 kubelet[1947]: E0209 19:05:58.408240 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:59.033212 kubelet[1947]: E0209 19:05:59.033167 1947 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"10.200.8.19\" not found" node="10.200.8.19"
Feb 9 19:05:59.132850 kubelet[1947]: I0209 19:05:59.132813 1947 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.19"
Feb 9 19:05:59.220174 kubelet[1947]: I0209 19:05:59.220132 1947 kubelet_node_status.go:73] "Successfully registered node" node="10.200.8.19"
Feb 9 19:05:59.238565 kubelet[1947]: E0209 19:05:59.238535 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:05:59.319712 sudo[1708]: pam_unix(sudo:session): session closed for user root
Feb 9 19:05:59.338813 kubelet[1947]: E0209 19:05:59.338725 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:05:59.409466 kubelet[1947]: E0209 19:05:59.409414 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:05:59.423216 sshd[1704]: pam_unix(sshd:session): session closed for user core
Feb 9 19:05:59.426380 systemd[1]: sshd@4-10.200.8.19:22-10.200.12.6:59120.service: Deactivated successfully.
Feb 9 19:05:59.427562 systemd[1]: session-7.scope: Deactivated successfully.
Feb 9 19:05:59.429480 systemd-logind[1375]: Session 7 logged out. Waiting for processes to exit.
Feb 9 19:05:59.431074 systemd-logind[1375]: Removed session 7.
Feb 9 19:05:59.439458 kubelet[1947]: E0209 19:05:59.439390 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:05:59.540592 kubelet[1947]: E0209 19:05:59.540528 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:05:59.641548 kubelet[1947]: E0209 19:05:59.641500 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:05:59.742203 kubelet[1947]: E0209 19:05:59.742139 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:05:59.842931 kubelet[1947]: E0209 19:05:59.842882 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:05:59.944148 kubelet[1947]: E0209 19:05:59.944003 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:00.044712 kubelet[1947]: E0209 19:06:00.044655 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:00.145499 kubelet[1947]: E0209 19:06:00.145440 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:00.246362 kubelet[1947]: E0209 19:06:00.246215 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:00.347124 kubelet[1947]: E0209 19:06:00.347030 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:00.410230 kubelet[1947]: E0209 19:06:00.410159 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:00.447547 kubelet[1947]: E0209 19:06:00.447484 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:00.548806 kubelet[1947]: E0209 19:06:00.548557 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:00.649581 kubelet[1947]: E0209 19:06:00.649525 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:00.750340 kubelet[1947]: E0209 19:06:00.750282 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:00.851331 kubelet[1947]: E0209 19:06:00.851173 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:00.952193 kubelet[1947]: E0209 19:06:00.952134 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:01.053067 kubelet[1947]: E0209 19:06:01.053008 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:01.153744 kubelet[1947]: E0209 19:06:01.153684 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:01.254560 kubelet[1947]: E0209 19:06:01.254504 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:01.355572 kubelet[1947]: E0209 19:06:01.355511 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:01.411347 kubelet[1947]: E0209 19:06:01.411201 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:01.456693 kubelet[1947]: E0209 19:06:01.456633 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:01.556839 kubelet[1947]: E0209 19:06:01.556767 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:01.657940 kubelet[1947]: E0209 19:06:01.657879 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:01.758836 kubelet[1947]: E0209 19:06:01.758683 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:01.859414 kubelet[1947]: E0209 19:06:01.859360 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:01.960382 kubelet[1947]: E0209 19:06:01.960323 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:02.061127 kubelet[1947]: E0209 19:06:02.060982 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:02.161769 kubelet[1947]: E0209 19:06:02.161715 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:02.262459 kubelet[1947]: E0209 19:06:02.262404 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:02.363018 kubelet[1947]: E0209 19:06:02.362976 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:02.411768 kubelet[1947]: E0209 19:06:02.411701 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:02.463190 kubelet[1947]: E0209 19:06:02.463133 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:02.564331 kubelet[1947]: E0209 19:06:02.564278 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:02.665231 kubelet[1947]: E0209 19:06:02.665097 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:02.765661 kubelet[1947]: E0209 19:06:02.765601 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:02.866761 kubelet[1947]: E0209 19:06:02.866705 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:02.967732 kubelet[1947]: E0209 19:06:02.967586 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:03.068292 kubelet[1947]: E0209 19:06:03.068236 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:03.169007 kubelet[1947]: E0209 19:06:03.168945 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:03.269899 kubelet[1947]: E0209 19:06:03.269746 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:03.370502 kubelet[1947]: E0209 19:06:03.370444 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:03.412218 kubelet[1947]: E0209 19:06:03.412160 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:03.470648 kubelet[1947]: E0209 19:06:03.470589 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:03.571078 kubelet[1947]: E0209 19:06:03.570939 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:03.671402 kubelet[1947]: E0209 19:06:03.671314 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:03.772233 kubelet[1947]: E0209 19:06:03.772120 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:03.873086 kubelet[1947]: E0209 19:06:03.873004 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:03.974281 kubelet[1947]: E0209 19:06:03.974193 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:04.075009 kubelet[1947]: E0209 19:06:04.074953 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:04.175840 kubelet[1947]: E0209 19:06:04.175678 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:04.276422 kubelet[1947]: E0209 19:06:04.276362 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:04.377050 kubelet[1947]: E0209 19:06:04.376993 1947 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.19\" not found"
Feb 9 19:06:04.412436 kubelet[1947]: E0209 19:06:04.412368 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:04.478174 kubelet[1947]: I0209 19:06:04.477842 1947 kuberuntime_manager.go:1114] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 9 19:06:04.479130 env[1392]: time="2024-02-09T19:06:04.479077176Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 9 19:06:04.479679 kubelet[1947]: I0209 19:06:04.479344 1947 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 9 19:06:05.411501 kubelet[1947]: I0209 19:06:05.411445 1947 apiserver.go:52] "Watching apiserver"
Feb 9 19:06:05.412568 kubelet[1947]: E0209 19:06:05.412540 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:05.414120 kubelet[1947]: I0209 19:06:05.414083 1947 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:06:05.414264 kubelet[1947]: I0209 19:06:05.414195 1947 topology_manager.go:210] "Topology Admit Handler"
Feb 9 19:06:05.415193 kubelet[1947]: I0209 19:06:05.415165 1947 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 9 19:06:05.433854 kubelet[1947]: I0209 19:06:05.433824 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cilium-run\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.434603 kubelet[1947]: I0209 19:06:05.434578 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-hostproc\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.434790 kubelet[1947]: I0209 19:06:05.434758 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cilium-cgroup\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.434931 kubelet[1947]: I0209 19:06:05.434918 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-hubble-tls\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.435029 kubelet[1947]: I0209 19:06:05.435020 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/a70287ea-0717-487d-b6b4-c27fbdf33720-kube-proxy\") pod \"kube-proxy-w2q6p\" (UID: \"a70287ea-0717-487d-b6b4-c27fbdf33720\") " pod="kube-system/kube-proxy-w2q6p"
Feb 9 19:06:05.435120 kubelet[1947]: I0209 19:06:05.435113 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a70287ea-0717-487d-b6b4-c27fbdf33720-lib-modules\") pod \"kube-proxy-w2q6p\" (UID: \"a70287ea-0717-487d-b6b4-c27fbdf33720\") " pod="kube-system/kube-proxy-w2q6p"
Feb 9 19:06:05.435213 kubelet[1947]: I0209 19:06:05.435203 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-etc-cni-netd\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.435303 kubelet[1947]: I0209 19:06:05.435295 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-clustermesh-secrets\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.435400 kubelet[1947]: I0209 19:06:05.435391 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-host-proc-sys-net\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.435489 kubelet[1947]: I0209 19:06:05.435478 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cni-path\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.435578 kubelet[1947]: I0209 19:06:05.435570 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-lib-modules\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.435662 kubelet[1947]: I0209 19:06:05.435654 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cilium-config-path\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.435746 kubelet[1947]: I0209 19:06:05.435738 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hr8pq\" (UniqueName: \"kubernetes.io/projected/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-kube-api-access-hr8pq\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.435867 kubelet[1947]: I0209 19:06:05.435857 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/a70287ea-0717-487d-b6b4-c27fbdf33720-xtables-lock\") pod \"kube-proxy-w2q6p\" (UID: \"a70287ea-0717-487d-b6b4-c27fbdf33720\") " pod="kube-system/kube-proxy-w2q6p"
Feb 9 19:06:05.435966 kubelet[1947]: I0209 19:06:05.435957 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-bpf-maps\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.436051 kubelet[1947]: I0209 19:06:05.436044 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-xtables-lock\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.436138 kubelet[1947]: I0209 19:06:05.436131 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-host-proc-sys-kernel\") pod \"cilium-fg625\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " pod="kube-system/cilium-fg625"
Feb 9 19:06:05.436246 kubelet[1947]: I0209 19:06:05.436233 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q269t\" (UniqueName: \"kubernetes.io/projected/a70287ea-0717-487d-b6b4-c27fbdf33720-kube-api-access-q269t\") pod \"kube-proxy-w2q6p\" (UID: \"a70287ea-0717-487d-b6b4-c27fbdf33720\") " pod="kube-system/kube-proxy-w2q6p"
Feb 9 19:06:05.436333 kubelet[1947]: I0209 19:06:05.436324 1947 reconciler.go:41] "Reconciler: start to sync state"
Feb 9 19:06:05.721519 env[1392]: time="2024-02-09T19:06:05.721364487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w2q6p,Uid:a70287ea-0717-487d-b6b4-c27fbdf33720,Namespace:kube-system,Attempt:0,}"
Feb 9 19:06:06.022527 env[1392]: time="2024-02-09T19:06:06.022304686Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fg625,Uid:dc723f4c-6ab0-4f7a-a736-256c6ddc662a,Namespace:kube-system,Attempt:0,}"
Feb 9 19:06:06.400883 kubelet[1947]: E0209 19:06:06.400756 1947 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:06.413056 kubelet[1947]: E0209 19:06:06.412999 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:06.457515 env[1392]: time="2024-02-09T19:06:06.457474188Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:06.465840 env[1392]: time="2024-02-09T19:06:06.465806280Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:06.469296 env[1392]: time="2024-02-09T19:06:06.469260918Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:06.476570 env[1392]: time="2024-02-09T19:06:06.476534399Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:06.480226 env[1392]: time="2024-02-09T19:06:06.480191939Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:06.483099 env[1392]: time="2024-02-09T19:06:06.483069971Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:06.488851 env[1392]: time="2024-02-09T19:06:06.488826434Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:06.502299 env[1392]: time="2024-02-09T19:06:06.502264682Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:06.547193 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3633186927.mount: Deactivated successfully.
Feb 9 19:06:06.564253 env[1392]: time="2024-02-09T19:06:06.564187166Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:06:06.564253 env[1392]: time="2024-02-09T19:06:06.564227366Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:06:06.564557 env[1392]: time="2024-02-09T19:06:06.564241466Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:06:06.564557 env[1392]: time="2024-02-09T19:06:06.564387768Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/cfbd03040f48f36bf7d9f8fbfb19b46fd1a631304012a9a787fd275eabe92349 pid=2034 runtime=io.containerd.runc.v2
Feb 9 19:06:06.600152 env[1392]: time="2024-02-09T19:06:06.598007739Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 9 19:06:06.600152 env[1392]: time="2024-02-09T19:06:06.598327743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 9 19:06:06.600152 env[1392]: time="2024-02-09T19:06:06.598398643Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 9 19:06:06.600152 env[1392]: time="2024-02-09T19:06:06.598650846Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072 pid=2060 runtime=io.containerd.runc.v2
Feb 9 19:06:06.624259 env[1392]: time="2024-02-09T19:06:06.624212428Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-w2q6p,Uid:a70287ea-0717-487d-b6b4-c27fbdf33720,Namespace:kube-system,Attempt:0,} returns sandbox id \"cfbd03040f48f36bf7d9f8fbfb19b46fd1a631304012a9a787fd275eabe92349\""
Feb 9 19:06:06.626530 env[1392]: time="2024-02-09T19:06:06.626489753Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\""
Feb 9 19:06:06.657985 env[1392]: time="2024-02-09T19:06:06.657880400Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-fg625,Uid:dc723f4c-6ab0-4f7a-a736-256c6ddc662a,Namespace:kube-system,Attempt:0,} returns sandbox id \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\""
Feb 9 19:06:07.413847 kubelet[1947]: E0209 19:06:07.413801 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:07.717267 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount356326136.mount: Deactivated successfully.
Feb 9 19:06:08.260276 env[1392]: time="2024-02-09T19:06:08.260222671Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:08.267428 env[1392]: time="2024-02-09T19:06:08.267381146Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:08.275942 env[1392]: time="2024-02-09T19:06:08.275900836Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.26.13,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:08.280443 env[1392]: time="2024-02-09T19:06:08.280396983Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:f6e0de32a002b910b9b2e0e8d769e2d7b05208240559c745ce4781082ab15f22,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:08.281897 env[1392]: time="2024-02-09T19:06:08.281854198Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.26.13\" returns image reference \"sha256:5a7325fa2b6e8d712e4a770abb4a5a5852e87b6de8df34552d67853e9bfb9f9f\""
Feb 9 19:06:08.284144 env[1392]: time="2024-02-09T19:06:08.284107522Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 9 19:06:08.285957 env[1392]: time="2024-02-09T19:06:08.285924241Z" level=info msg="CreateContainer within sandbox \"cfbd03040f48f36bf7d9f8fbfb19b46fd1a631304012a9a787fd275eabe92349\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 9 19:06:08.374691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1093943794.mount: Deactivated successfully.
Feb 9 19:06:08.397081 env[1392]: time="2024-02-09T19:06:08.397030207Z" level=info msg="CreateContainer within sandbox \"cfbd03040f48f36bf7d9f8fbfb19b46fd1a631304012a9a787fd275eabe92349\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d9e8644ce33fd999409208664982b2c27bceaa88e00c39fc14c9eb8b3790dd60\""
Feb 9 19:06:08.397845 env[1392]: time="2024-02-09T19:06:08.397798816Z" level=info msg="StartContainer for \"d9e8644ce33fd999409208664982b2c27bceaa88e00c39fc14c9eb8b3790dd60\""
Feb 9 19:06:08.414019 kubelet[1947]: E0209 19:06:08.413962 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:08.456744 env[1392]: time="2024-02-09T19:06:08.456608233Z" level=info msg="StartContainer for \"d9e8644ce33fd999409208664982b2c27bceaa88e00c39fc14c9eb8b3790dd60\" returns successfully"
Feb 9 19:06:08.548850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2037910137.mount: Deactivated successfully.
Feb 9 19:06:09.414913 kubelet[1947]: E0209 19:06:09.414848 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:10.415807 kubelet[1947]: E0209 19:06:10.415704 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:11.416303 kubelet[1947]: E0209 19:06:11.416238 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:12.417391 kubelet[1947]: E0209 19:06:12.417326 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:13.417990 kubelet[1947]: E0209 19:06:13.417922 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:13.953006 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3512884743.mount: Deactivated successfully.
Feb 9 19:06:14.418761 kubelet[1947]: E0209 19:06:14.418718 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:15.419648 kubelet[1947]: E0209 19:06:15.419581 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:16.420189 kubelet[1947]: E0209 19:06:16.420117 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:16.667443 env[1392]: time="2024-02-09T19:06:16.667379416Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:16.676286 env[1392]: time="2024-02-09T19:06:16.676186293Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:16.684075 env[1392]: time="2024-02-09T19:06:16.684032161Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 9 19:06:16.684674 env[1392]: time="2024-02-09T19:06:16.684640766Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 9 19:06:16.686586 env[1392]: time="2024-02-09T19:06:16.686549882Z" level=info msg="CreateContainer within sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 9 19:06:16.710438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2938894452.mount: Deactivated successfully.
Feb 9 19:06:16.743537 env[1392]: time="2024-02-09T19:06:16.743480576Z" level=info msg="CreateContainer within sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0\""
Feb 9 19:06:16.744152 env[1392]: time="2024-02-09T19:06:16.744121581Z" level=info msg="StartContainer for \"4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0\""
Feb 9 19:06:16.776671 systemd[1]: run-containerd-runc-k8s.io-4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0-runc.dWfuVF.mount: Deactivated successfully.
Feb 9 19:06:16.814246 env[1392]: time="2024-02-09T19:06:16.814193489Z" level=info msg="StartContainer for \"4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0\" returns successfully"
Feb 9 19:06:17.420391 kubelet[1947]: E0209 19:06:17.420341 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:17.618451 kubelet[1947]: I0209 19:06:17.618411 1947 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-w2q6p" podStartSLOduration=-9.223372018236404e+09 pod.CreationTimestamp="2024-02-09 19:05:59 +0000 UTC" firstStartedPulling="2024-02-09 19:06:06.625880047 +0000 UTC m=+20.819261478" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:08.600918448 +0000 UTC m=+22.794299879" watchObservedRunningTime="2024-02-09 19:06:17.618370937 +0000 UTC m=+31.811752268"
Feb 9 19:06:17.708112 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0-rootfs.mount: Deactivated successfully.
Feb 9 19:06:18.421367 kubelet[1947]: E0209 19:06:18.421313 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:19.421820 kubelet[1947]: E0209 19:06:19.421688 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:20.422778 kubelet[1947]: E0209 19:06:20.422713 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:21.412891 env[1392]: time="2024-02-09T19:06:21.412809150Z" level=info msg="shim disconnected" id=4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0
Feb 9 19:06:21.412891 env[1392]: time="2024-02-09T19:06:21.412893450Z" level=warning msg="cleaning up after shim disconnected" id=4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0 namespace=k8s.io
Feb 9 19:06:21.413570 env[1392]: time="2024-02-09T19:06:21.412909450Z" level=info msg="cleaning up dead shim"
Feb 9 19:06:21.421678 env[1392]: time="2024-02-09T19:06:21.421621418Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2304 runtime=io.containerd.runc.v2\n"
Feb 9 19:06:21.423269 kubelet[1947]: E0209 19:06:21.423237 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:21.619570 env[1392]: time="2024-02-09T19:06:21.619522349Z" level=info msg="CreateContainer within sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 9 19:06:21.654456 env[1392]: time="2024-02-09T19:06:21.654402819Z" level=info msg="CreateContainer within sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1\""
Feb 9 19:06:21.655042 env[1392]: time="2024-02-09T19:06:21.655008424Z" level=info msg="StartContainer for \"3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1\""
Feb 9 19:06:21.720564 env[1392]: time="2024-02-09T19:06:21.719803825Z" level=info msg="StartContainer for \"3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1\" returns successfully"
Feb 9 19:06:21.726135 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 9 19:06:21.726506 systemd[1]: Stopped systemd-sysctl.service.
Feb 9 19:06:21.726682 systemd[1]: Stopping systemd-sysctl.service...
Feb 9 19:06:21.729461 systemd[1]: Starting systemd-sysctl.service...
Feb 9 19:06:21.748433 systemd[1]: Finished systemd-sysctl.service.
Feb 9 19:06:21.770614 env[1392]: time="2024-02-09T19:06:21.770558818Z" level=info msg="shim disconnected" id=3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1
Feb 9 19:06:21.770614 env[1392]: time="2024-02-09T19:06:21.770614818Z" level=warning msg="cleaning up after shim disconnected" id=3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1 namespace=k8s.io
Feb 9 19:06:21.770941 env[1392]: time="2024-02-09T19:06:21.770627318Z" level=info msg="cleaning up dead shim"
Feb 9 19:06:21.778381 env[1392]: time="2024-02-09T19:06:21.778339778Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:21Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2368 runtime=io.containerd.runc.v2\n"
Feb 9 19:06:22.423839 kubelet[1947]: E0209 19:06:22.423762 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:06:22.622387 env[1392]: time="2024-02-09T19:06:22.622335405Z" level=info msg="CreateContainer within sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 9 19:06:22.640358 systemd[1]: run-containerd-runc-k8s.io-3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1-runc.J2xkBi.mount: Deactivated successfully.
Feb 9 19:06:22.640558 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1-rootfs.mount: Deactivated successfully.
Feb 9 19:06:22.647849 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount478709321.mount: Deactivated successfully.
Feb 9 19:06:22.664936 env[1392]: time="2024-02-09T19:06:22.664894827Z" level=info msg="CreateContainer within sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f\""
Feb 9 19:06:22.665501 env[1392]: time="2024-02-09T19:06:22.665474432Z" level=info msg="StartContainer for \"f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f\""
Feb 9 19:06:22.718567 env[1392]: time="2024-02-09T19:06:22.718450333Z" level=info msg="StartContainer for \"f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f\" returns successfully"
Feb 9 19:06:22.749838 env[1392]: time="2024-02-09T19:06:22.749754870Z" level=info msg="shim disconnected" id=f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f
Feb 9 19:06:22.749838 env[1392]: time="2024-02-09T19:06:22.749828070Z" level=warning msg="cleaning up after shim disconnected" id=f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f namespace=k8s.io
Feb 9 19:06:22.749838 env[1392]: time="2024-02-09T19:06:22.749841970Z" level=info msg="cleaning up dead shim"
Feb 9 19:06:22.758090 env[1392]: time="2024-02-09T19:06:22.758044932Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:22Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2426 runtime=io.containerd.runc.v2\n"
Feb 9 19:06:23.424692 kubelet[1947]: E0209 19:06:23.424654 1947 file_linux.go:61] "Unable to read config path"
err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:23.627805 env[1392]: time="2024-02-09T19:06:23.627744014Z" level=info msg="CreateContainer within sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:06:23.656593 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3268738139.mount: Deactivated successfully. Feb 9 19:06:23.674972 env[1392]: time="2024-02-09T19:06:23.674873863Z" level=info msg="CreateContainer within sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65\"" Feb 9 19:06:23.675638 env[1392]: time="2024-02-09T19:06:23.675594168Z" level=info msg="StartContainer for \"479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65\"" Feb 9 19:06:23.736992 env[1392]: time="2024-02-09T19:06:23.736943622Z" level=info msg="StartContainer for \"479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65\" returns successfully" Feb 9 19:06:23.768573 env[1392]: time="2024-02-09T19:06:23.768516756Z" level=info msg="shim disconnected" id=479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65 Feb 9 19:06:23.768573 env[1392]: time="2024-02-09T19:06:23.768568056Z" level=warning msg="cleaning up after shim disconnected" id=479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65 namespace=k8s.io Feb 9 19:06:23.768573 env[1392]: time="2024-02-09T19:06:23.768580256Z" level=info msg="cleaning up dead shim" Feb 9 19:06:23.776292 env[1392]: time="2024-02-09T19:06:23.776248413Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:06:23Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2482 runtime=io.containerd.runc.v2\n" Feb 9 19:06:24.425727 kubelet[1947]: E0209 19:06:24.425669 1947 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:24.632981 env[1392]: time="2024-02-09T19:06:24.632927960Z" level=info msg="CreateContainer within sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:06:24.640429 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65-rootfs.mount: Deactivated successfully. Feb 9 19:06:24.686489 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3788821231.mount: Deactivated successfully. Feb 9 19:06:24.757051 env[1392]: time="2024-02-09T19:06:24.756993459Z" level=info msg="CreateContainer within sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\"" Feb 9 19:06:24.757631 env[1392]: time="2024-02-09T19:06:24.757584563Z" level=info msg="StartContainer for \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\"" Feb 9 19:06:24.812718 env[1392]: time="2024-02-09T19:06:24.810701848Z" level=info msg="StartContainer for \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\" returns successfully" Feb 9 19:06:24.955740 kubelet[1947]: I0209 19:06:24.954551 1947 kubelet_node_status.go:493] "Fast updating node status as it just became ready" Feb 9 19:06:25.208882 kernel: Initializing XFRM netlink socket Feb 9 19:06:25.426524 kubelet[1947]: E0209 19:06:25.426458 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:25.646679 kubelet[1947]: I0209 19:06:25.646641 1947 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-fg625" podStartSLOduration=-9.223372010208172e+09 pod.CreationTimestamp="2024-02-09 19:05:59 +0000 UTC" 
firstStartedPulling="2024-02-09 19:06:06.659247415 +0000 UTC m=+20.852628746" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:25.646430005 +0000 UTC m=+39.839811336" watchObservedRunningTime="2024-02-09 19:06:25.646603906 +0000 UTC m=+39.839985337" Feb 9 19:06:26.400910 kubelet[1947]: E0209 19:06:26.400858 1947 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:26.427266 kubelet[1947]: E0209 19:06:26.427202 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:26.839931 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready Feb 9 19:06:26.840065 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready Feb 9 19:06:26.840220 systemd-networkd[1548]: cilium_host: Link UP Feb 9 19:06:26.840906 systemd-networkd[1548]: cilium_net: Link UP Feb 9 19:06:26.841114 systemd-networkd[1548]: cilium_net: Gained carrier Feb 9 19:06:26.842412 systemd-networkd[1548]: cilium_host: Gained carrier Feb 9 19:06:26.954949 systemd-networkd[1548]: cilium_vxlan: Link UP Feb 9 19:06:26.954960 systemd-networkd[1548]: cilium_vxlan: Gained carrier Feb 9 19:06:26.989018 systemd-networkd[1548]: cilium_net: Gained IPv6LL Feb 9 19:06:27.140985 systemd-networkd[1548]: cilium_host: Gained IPv6LL Feb 9 19:06:27.165798 kernel: NET: Registered PF_ALG protocol family Feb 9 19:06:27.427987 kubelet[1947]: E0209 19:06:27.427865 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:27.512048 kubelet[1947]: I0209 19:06:27.511992 1947 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:06:27.596830 kubelet[1947]: I0209 19:06:27.595743 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bcw96\" (UniqueName: 
\"kubernetes.io/projected/81f2d2ed-c03d-4cbf-921c-48a8c9e0ec3a-kube-api-access-bcw96\") pod \"nginx-deployment-8ffc5cf85-89ql5\" (UID: \"81f2d2ed-c03d-4cbf-921c-48a8c9e0ec3a\") " pod="default/nginx-deployment-8ffc5cf85-89ql5" Feb 9 19:06:27.796562 systemd-networkd[1548]: lxc_health: Link UP Feb 9 19:06:27.808380 systemd-networkd[1548]: lxc_health: Gained carrier Feb 9 19:06:27.808811 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:06:27.822523 env[1392]: time="2024-02-09T19:06:27.822081349Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-89ql5,Uid:81f2d2ed-c03d-4cbf-921c-48a8c9e0ec3a,Namespace:default,Attempt:0,}" Feb 9 19:06:28.252992 systemd-networkd[1548]: cilium_vxlan: Gained IPv6LL Feb 9 19:06:28.404355 systemd-networkd[1548]: lxcb845095d27f0: Link UP Feb 9 19:06:28.430801 kernel: eth0: renamed from tmp81da7 Feb 9 19:06:28.430941 kubelet[1947]: E0209 19:06:28.429804 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:28.444219 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:06:28.444350 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcb845095d27f0: link becomes ready Feb 9 19:06:28.444718 systemd-networkd[1548]: lxcb845095d27f0: Gained carrier Feb 9 19:06:29.148975 systemd-networkd[1548]: lxc_health: Gained IPv6LL Feb 9 19:06:29.430155 kubelet[1947]: E0209 19:06:29.429997 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:30.109093 systemd-networkd[1548]: lxcb845095d27f0: Gained IPv6LL Feb 9 19:06:30.430906 kubelet[1947]: E0209 19:06:30.430766 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:31.431942 kubelet[1947]: E0209 19:06:31.431894 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:06:31.943558 env[1392]: time="2024-02-09T19:06:31.943472050Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:06:31.944190 env[1392]: time="2024-02-09T19:06:31.944145354Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:06:31.944347 env[1392]: time="2024-02-09T19:06:31.944319055Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:06:31.944643 env[1392]: time="2024-02-09T19:06:31.944607257Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/81da7f935f8a70a69a40a36ae776bed9e8ecb3c5217637efeee325e3bbbe477d pid=3010 runtime=io.containerd.runc.v2 Feb 9 19:06:31.986812 systemd[1]: run-containerd-runc-k8s.io-81da7f935f8a70a69a40a36ae776bed9e8ecb3c5217637efeee325e3bbbe477d-runc.As2nIh.mount: Deactivated successfully. 
Feb 9 19:06:32.047605 env[1392]: time="2024-02-09T19:06:32.047562397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-8ffc5cf85-89ql5,Uid:81f2d2ed-c03d-4cbf-921c-48a8c9e0ec3a,Namespace:default,Attempt:0,} returns sandbox id \"81da7f935f8a70a69a40a36ae776bed9e8ecb3c5217637efeee325e3bbbe477d\"" Feb 9 19:06:32.049357 env[1392]: time="2024-02-09T19:06:32.049274007Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:06:32.433398 kubelet[1947]: E0209 19:06:32.433340 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:33.433621 kubelet[1947]: E0209 19:06:33.433558 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:34.434213 kubelet[1947]: E0209 19:06:34.434149 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:35.434660 kubelet[1947]: E0209 19:06:35.434518 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:35.521414 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1128247656.mount: Deactivated successfully. 
Feb 9 19:06:36.435719 kubelet[1947]: E0209 19:06:36.435665 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:36.503700 env[1392]: time="2024-02-09T19:06:36.503639946Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:36.510893 env[1392]: time="2024-02-09T19:06:36.510850087Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:36.516389 env[1392]: time="2024-02-09T19:06:36.516355418Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:36.521246 env[1392]: time="2024-02-09T19:06:36.521204446Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:36.521909 env[1392]: time="2024-02-09T19:06:36.521877149Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:06:36.523616 env[1392]: time="2024-02-09T19:06:36.523584459Z" level=info msg="CreateContainer within sandbox \"81da7f935f8a70a69a40a36ae776bed9e8ecb3c5217637efeee325e3bbbe477d\" for container &ContainerMetadata{Name:nginx,Attempt:0,}" Feb 9 19:06:36.566930 env[1392]: time="2024-02-09T19:06:36.566870005Z" level=info msg="CreateContainer within sandbox \"81da7f935f8a70a69a40a36ae776bed9e8ecb3c5217637efeee325e3bbbe477d\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id 
\"53678f2b835ff9ad89bf54eaa61edc6f0153b7c864e7f9fd6398cb681453b675\"" Feb 9 19:06:36.567675 env[1392]: time="2024-02-09T19:06:36.567509109Z" level=info msg="StartContainer for \"53678f2b835ff9ad89bf54eaa61edc6f0153b7c864e7f9fd6398cb681453b675\"" Feb 9 19:06:36.597299 systemd[1]: run-containerd-runc-k8s.io-53678f2b835ff9ad89bf54eaa61edc6f0153b7c864e7f9fd6398cb681453b675-runc.q8TQ9M.mount: Deactivated successfully. Feb 9 19:06:36.629370 env[1392]: time="2024-02-09T19:06:36.629318761Z" level=info msg="StartContainer for \"53678f2b835ff9ad89bf54eaa61edc6f0153b7c864e7f9fd6398cb681453b675\" returns successfully" Feb 9 19:06:36.663199 kubelet[1947]: I0209 19:06:36.663163 1947 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-8ffc5cf85-89ql5" podStartSLOduration=-9.223372027191643e+09 pod.CreationTimestamp="2024-02-09 19:06:27 +0000 UTC" firstStartedPulling="2024-02-09 19:06:32.048944305 +0000 UTC m=+46.242325636" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:36.663004552 +0000 UTC m=+50.856385983" watchObservedRunningTime="2024-02-09 19:06:36.663132653 +0000 UTC m=+50.856514084" Feb 9 19:06:37.436489 kubelet[1947]: E0209 19:06:37.436425 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:38.437038 kubelet[1947]: E0209 19:06:38.436948 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:39.437882 kubelet[1947]: E0209 19:06:39.437813 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:40.438843 kubelet[1947]: E0209 19:06:40.438791 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:41.439066 kubelet[1947]: E0209 19:06:41.439000 1947 file_linux.go:61] "Unable to read 
config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:42.439735 kubelet[1947]: E0209 19:06:42.439671 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:43.440385 kubelet[1947]: E0209 19:06:43.440314 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:44.441032 kubelet[1947]: E0209 19:06:44.440963 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:44.737308 kubelet[1947]: I0209 19:06:44.737189 1947 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:06:44.804267 kubelet[1947]: I0209 19:06:44.804231 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/475a6477-e446-453b-9b23-5d99dcf77b9b-data\") pod \"nfs-server-provisioner-0\" (UID: \"475a6477-e446-453b-9b23-5d99dcf77b9b\") " pod="default/nfs-server-provisioner-0" Feb 9 19:06:44.804472 kubelet[1947]: I0209 19:06:44.804310 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mnxx8\" (UniqueName: \"kubernetes.io/projected/475a6477-e446-453b-9b23-5d99dcf77b9b-kube-api-access-mnxx8\") pod \"nfs-server-provisioner-0\" (UID: \"475a6477-e446-453b-9b23-5d99dcf77b9b\") " pod="default/nfs-server-provisioner-0" Feb 9 19:06:45.043322 env[1392]: time="2024-02-09T19:06:45.043169985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:475a6477-e446-453b-9b23-5d99dcf77b9b,Namespace:default,Attempt:0,}" Feb 9 19:06:45.116849 systemd-networkd[1548]: lxc310fc9a92bc3: Link UP Feb 9 19:06:45.125876 kernel: eth0: renamed from tmpf1334 Feb 9 19:06:45.140058 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:06:45.140157 kernel: IPv6: 
ADDRCONF(NETDEV_CHANGE): lxc310fc9a92bc3: link becomes ready Feb 9 19:06:45.141372 systemd-networkd[1548]: lxc310fc9a92bc3: Gained carrier Feb 9 19:06:45.366926 env[1392]: time="2024-02-09T19:06:45.366837757Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:06:45.367155 env[1392]: time="2024-02-09T19:06:45.366903357Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:06:45.367155 env[1392]: time="2024-02-09T19:06:45.366917757Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:06:45.367270 env[1392]: time="2024-02-09T19:06:45.367182858Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f13341087fac53d7856293d3685500f2519a4f4cc700720bd0f69b3aca24f5a8 pid=3184 runtime=io.containerd.runc.v2 Feb 9 19:06:45.427586 env[1392]: time="2024-02-09T19:06:45.427534851Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:475a6477-e446-453b-9b23-5d99dcf77b9b,Namespace:default,Attempt:0,} returns sandbox id \"f13341087fac53d7856293d3685500f2519a4f4cc700720bd0f69b3aca24f5a8\"" Feb 9 19:06:45.429252 env[1392]: time="2024-02-09T19:06:45.429226359Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\"" Feb 9 19:06:45.442184 kubelet[1947]: E0209 19:06:45.442103 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:46.401343 kubelet[1947]: E0209 19:06:46.401274 1947 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:46.443196 kubelet[1947]: E0209 19:06:46.443086 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" 
path="/etc/kubernetes/manifests" Feb 9 19:06:47.133174 systemd-networkd[1548]: lxc310fc9a92bc3: Gained IPv6LL Feb 9 19:06:47.444480 kubelet[1947]: E0209 19:06:47.444037 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:48.444381 kubelet[1947]: E0209 19:06:48.444326 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:48.550320 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1887088100.mount: Deactivated successfully. Feb 9 19:06:49.444679 kubelet[1947]: E0209 19:06:49.444635 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:50.444956 kubelet[1947]: E0209 19:06:50.444905 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:50.571691 env[1392]: time="2024-02-09T19:06:50.571638647Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:50.578458 env[1392]: time="2024-02-09T19:06:50.578416078Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:50.582781 env[1392]: time="2024-02-09T19:06:50.582736197Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:50.588363 env[1392]: time="2024-02-09T19:06:50.588330722Z" level=info msg="ImageCreate event 
&ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:06:50.588980 env[1392]: time="2024-02-09T19:06:50.588948225Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\"" Feb 9 19:06:50.591186 env[1392]: time="2024-02-09T19:06:50.591156235Z" level=info msg="CreateContainer within sandbox \"f13341087fac53d7856293d3685500f2519a4f4cc700720bd0f69b3aca24f5a8\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}" Feb 9 19:06:50.638968 env[1392]: time="2024-02-09T19:06:50.638907349Z" level=info msg="CreateContainer within sandbox \"f13341087fac53d7856293d3685500f2519a4f4cc700720bd0f69b3aca24f5a8\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"017804f3dd9e003b5895d8df5bb15f6628a90df3ba98d4bfbcec34baf146cb46\"" Feb 9 19:06:50.639598 env[1392]: time="2024-02-09T19:06:50.639567952Z" level=info msg="StartContainer for \"017804f3dd9e003b5895d8df5bb15f6628a90df3ba98d4bfbcec34baf146cb46\"" Feb 9 19:06:50.706367 env[1392]: time="2024-02-09T19:06:50.703697440Z" level=info msg="StartContainer for \"017804f3dd9e003b5895d8df5bb15f6628a90df3ba98d4bfbcec34baf146cb46\" returns successfully" Feb 9 19:06:51.446501 kubelet[1947]: E0209 19:06:51.446435 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:51.704239 kubelet[1947]: I0209 19:06:51.704090 1947 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=-9.223372029150717e+09 pod.CreationTimestamp="2024-02-09 19:06:44 +0000 UTC" firstStartedPulling="2024-02-09 19:06:45.428745657 +0000 UTC m=+59.622126988" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:06:51.703581976 +0000 UTC m=+65.896963307" watchObservedRunningTime="2024-02-09 19:06:51.704059878 +0000 UTC m=+65.897441209" Feb 9 19:06:52.446738 kubelet[1947]: E0209 19:06:52.446667 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:53.447750 kubelet[1947]: E0209 19:06:53.447686 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:54.448227 kubelet[1947]: E0209 19:06:54.448164 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:55.449373 kubelet[1947]: E0209 19:06:55.449302 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:56.450527 kubelet[1947]: E0209 19:06:56.450460 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:57.451498 kubelet[1947]: E0209 19:06:57.451435 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:58.451719 kubelet[1947]: E0209 19:06:58.451653 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:06:59.452500 kubelet[1947]: E0209 19:06:59.452436 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:00.453492 kubelet[1947]: E0209 19:07:00.453426 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:00.850444 kubelet[1947]: I0209 19:07:00.849995 1947 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:07:00.908614 kubelet[1947]: I0209 
19:07:00.908574 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bgscb\" (UniqueName: \"kubernetes.io/projected/ccab479f-3279-466d-a7cc-73ffd671d6b9-kube-api-access-bgscb\") pod \"test-pod-1\" (UID: \"ccab479f-3279-466d-a7cc-73ffd671d6b9\") " pod="default/test-pod-1" Feb 9 19:07:00.908886 kubelet[1947]: I0209 19:07:00.908717 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-370823cf-3213-444f-8726-2159b6cfb3d6\" (UniqueName: \"kubernetes.io/nfs/ccab479f-3279-466d-a7cc-73ffd671d6b9-pvc-370823cf-3213-444f-8726-2159b6cfb3d6\") pod \"test-pod-1\" (UID: \"ccab479f-3279-466d-a7cc-73ffd671d6b9\") " pod="default/test-pod-1" Feb 9 19:07:01.060807 kernel: FS-Cache: Loaded Feb 9 19:07:01.108450 kernel: RPC: Registered named UNIX socket transport module. Feb 9 19:07:01.108581 kernel: RPC: Registered udp transport module. Feb 9 19:07:01.108609 kernel: RPC: Registered tcp transport module. Feb 9 19:07:01.115253 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module. 
Feb 9 19:07:01.173796 kernel: FS-Cache: Netfs 'nfs' registered for caching Feb 9 19:07:01.358892 kernel: NFS: Registering the id_resolver key type Feb 9 19:07:01.359048 kernel: Key type id_resolver registered Feb 9 19:07:01.359076 kernel: Key type id_legacy registered Feb 9 19:07:01.444408 nfsidmap[3328]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-92fe98b439' Feb 9 19:07:01.454182 kubelet[1947]: E0209 19:07:01.454144 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:01.470731 nfsidmap[3329]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-92fe98b439' Feb 9 19:07:01.756722 env[1392]: time="2024-02-09T19:07:01.756561153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ccab479f-3279-466d-a7cc-73ffd671d6b9,Namespace:default,Attempt:0,}" Feb 9 19:07:01.810004 systemd-networkd[1548]: lxc42158e3e6b37: Link UP Feb 9 19:07:01.819896 kernel: eth0: renamed from tmpf8209 Feb 9 19:07:01.834245 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 9 19:07:01.834367 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc42158e3e6b37: link becomes ready Feb 9 19:07:01.834550 systemd-networkd[1548]: lxc42158e3e6b37: Gained carrier Feb 9 19:07:02.021991 env[1392]: time="2024-02-09T19:07:02.021859273Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:07:02.021991 env[1392]: time="2024-02-09T19:07:02.021902473Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:07:02.022404 env[1392]: time="2024-02-09T19:07:02.021916373Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:07:02.022717 env[1392]: time="2024-02-09T19:07:02.022670676Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/f8209d69d07af7ea2717458a49c2dabcbbdefccceadd04a6baad6ed397336a68 pid=3359 runtime=io.containerd.runc.v2 Feb 9 19:07:02.081412 env[1392]: time="2024-02-09T19:07:02.081368199Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:ccab479f-3279-466d-a7cc-73ffd671d6b9,Namespace:default,Attempt:0,} returns sandbox id \"f8209d69d07af7ea2717458a49c2dabcbbdefccceadd04a6baad6ed397336a68\"" Feb 9 19:07:02.083036 env[1392]: time="2024-02-09T19:07:02.083000806Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\"" Feb 9 19:07:02.454819 kubelet[1947]: E0209 19:07:02.454713 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:02.637255 env[1392]: time="2024-02-09T19:07:02.637206313Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:02.646376 env[1392]: time="2024-02-09T19:07:02.646328847Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:02.652644 env[1392]: time="2024-02-09T19:07:02.652606071Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:02.659435 env[1392]: time="2024-02-09T19:07:02.659403697Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: 
managed,},XXX_unrecognized:[],}" Feb 9 19:07:02.659981 env[1392]: time="2024-02-09T19:07:02.659947199Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\"" Feb 9 19:07:02.662347 env[1392]: time="2024-02-09T19:07:02.662315108Z" level=info msg="CreateContainer within sandbox \"f8209d69d07af7ea2717458a49c2dabcbbdefccceadd04a6baad6ed397336a68\" for container &ContainerMetadata{Name:test,Attempt:0,}" Feb 9 19:07:02.708504 env[1392]: time="2024-02-09T19:07:02.708399983Z" level=info msg="CreateContainer within sandbox \"f8209d69d07af7ea2717458a49c2dabcbbdefccceadd04a6baad6ed397336a68\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"27254e91ac855415e1d4c3b5b98c505c30fb08c2da1f4fadabec2502014dd7ea\"" Feb 9 19:07:02.709331 env[1392]: time="2024-02-09T19:07:02.709301387Z" level=info msg="StartContainer for \"27254e91ac855415e1d4c3b5b98c505c30fb08c2da1f4fadabec2502014dd7ea\"" Feb 9 19:07:02.764305 env[1392]: time="2024-02-09T19:07:02.764255396Z" level=info msg="StartContainer for \"27254e91ac855415e1d4c3b5b98c505c30fb08c2da1f4fadabec2502014dd7ea\" returns successfully" Feb 9 19:07:03.455697 kubelet[1947]: E0209 19:07:03.455638 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:03.516988 systemd-networkd[1548]: lxc42158e3e6b37: Gained IPv6LL Feb 9 19:07:03.734197 kubelet[1947]: I0209 19:07:03.734065 1947 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=-9.22337201912074e+09 pod.CreationTimestamp="2024-02-09 19:06:46 +0000 UTC" firstStartedPulling="2024-02-09 19:07:02.082706804 +0000 UTC m=+76.276088135" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:07:03.733749549 +0000 UTC m=+77.927130980" watchObservedRunningTime="2024-02-09 19:07:03.73403465 +0000 UTC 
m=+77.927416081" Feb 9 19:07:04.456300 kubelet[1947]: E0209 19:07:04.456230 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:05.456892 kubelet[1947]: E0209 19:07:05.456832 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:06.401444 kubelet[1947]: E0209 19:07:06.401404 1947 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:06.458028 kubelet[1947]: E0209 19:07:06.457962 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:07.459101 kubelet[1947]: E0209 19:07:07.459041 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:08.459644 kubelet[1947]: E0209 19:07:08.459576 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:08.643971 systemd[1]: run-containerd-runc-k8s.io-32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a-runc.km5Z2Z.mount: Deactivated successfully. 
Feb 9 19:07:08.659402 env[1392]: time="2024-02-09T19:07:08.659326584Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 9 19:07:08.664883 env[1392]: time="2024-02-09T19:07:08.664845703Z" level=info msg="StopContainer for \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\" with timeout 1 (s)" Feb 9 19:07:08.665170 env[1392]: time="2024-02-09T19:07:08.665133004Z" level=info msg="Stop container \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\" with signal terminated" Feb 9 19:07:08.673075 systemd-networkd[1548]: lxc_health: Link DOWN Feb 9 19:07:08.673082 systemd-networkd[1548]: lxc_health: Lost carrier Feb 9 19:07:08.713123 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a-rootfs.mount: Deactivated successfully. 
Feb 9 19:07:09.460742 kubelet[1947]: E0209 19:07:09.460675 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:09.675676 env[1392]: time="2024-02-09T19:07:09.675603463Z" level=info msg="Kill container \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\"" Feb 9 19:07:10.460879 kubelet[1947]: E0209 19:07:10.460827 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:11.461105 kubelet[1947]: E0209 19:07:11.461044 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:11.499310 kubelet[1947]: E0209 19:07:11.499270 1947 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 19:07:11.747309 env[1392]: time="2024-02-09T19:07:11.746808940Z" level=info msg="shim disconnected" id=32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a Feb 9 19:07:11.747309 env[1392]: time="2024-02-09T19:07:11.746879840Z" level=warning msg="cleaning up after shim disconnected" id=32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a namespace=k8s.io Feb 9 19:07:11.747309 env[1392]: time="2024-02-09T19:07:11.746891740Z" level=info msg="cleaning up dead shim" Feb 9 19:07:11.755256 env[1392]: time="2024-02-09T19:07:11.755218169Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3490 runtime=io.containerd.runc.v2\n" Feb 9 19:07:11.760613 env[1392]: time="2024-02-09T19:07:11.760578387Z" level=info msg="StopContainer for \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\" returns successfully" Feb 9 19:07:11.761238 env[1392]: time="2024-02-09T19:07:11.761207189Z" level=info msg="StopPodSandbox for 
\"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\"" Feb 9 19:07:11.761358 env[1392]: time="2024-02-09T19:07:11.761276590Z" level=info msg="Container to stop \"f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:07:11.761358 env[1392]: time="2024-02-09T19:07:11.761298790Z" level=info msg="Container to stop \"479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:07:11.761358 env[1392]: time="2024-02-09T19:07:11.761314090Z" level=info msg="Container to stop \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:07:11.761358 env[1392]: time="2024-02-09T19:07:11.761329790Z" level=info msg="Container to stop \"3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:07:11.761358 env[1392]: time="2024-02-09T19:07:11.761346590Z" level=info msg="Container to stop \"4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Feb 9 19:07:11.764429 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072-shm.mount: Deactivated successfully. Feb 9 19:07:11.789540 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072-rootfs.mount: Deactivated successfully. 
Feb 9 19:07:11.801944 env[1392]: time="2024-02-09T19:07:11.801878629Z" level=info msg="shim disconnected" id=24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072 Feb 9 19:07:11.801944 env[1392]: time="2024-02-09T19:07:11.801936829Z" level=warning msg="cleaning up after shim disconnected" id=24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072 namespace=k8s.io Feb 9 19:07:11.802157 env[1392]: time="2024-02-09T19:07:11.801949329Z" level=info msg="cleaning up dead shim" Feb 9 19:07:11.809975 env[1392]: time="2024-02-09T19:07:11.809940757Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:11Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3524 runtime=io.containerd.runc.v2\n" Feb 9 19:07:11.810282 env[1392]: time="2024-02-09T19:07:11.810247558Z" level=info msg="TearDown network for sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" successfully" Feb 9 19:07:11.810367 env[1392]: time="2024-02-09T19:07:11.810281058Z" level=info msg="StopPodSandbox for \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" returns successfully" Feb 9 19:07:11.974294 kubelet[1947]: I0209 19:07:11.974258 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-hostproc\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.974570 kubelet[1947]: I0209 19:07:11.974544 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-host-proc-sys-net\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.974663 kubelet[1947]: I0209 19:07:11.974578 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-lib-modules\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.974663 kubelet[1947]: I0209 19:07:11.974608 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hr8pq\" (UniqueName: \"kubernetes.io/projected/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-kube-api-access-hr8pq\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.974663 kubelet[1947]: I0209 19:07:11.974635 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-hubble-tls\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.974663 kubelet[1947]: I0209 19:07:11.974662 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-clustermesh-secrets\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.974871 kubelet[1947]: I0209 19:07:11.974690 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-xtables-lock\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.974871 kubelet[1947]: I0209 19:07:11.974715 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cilium-run\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.974871 kubelet[1947]: I0209 19:07:11.974744 1947 
reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-etc-cni-netd\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.974871 kubelet[1947]: I0209 19:07:11.974787 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cilium-config-path\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.974871 kubelet[1947]: I0209 19:07:11.974818 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-bpf-maps\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.974871 kubelet[1947]: I0209 19:07:11.974842 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cilium-cgroup\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.975105 kubelet[1947]: I0209 19:07:11.974873 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cni-path\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.975105 kubelet[1947]: I0209 19:07:11.974905 1947 reconciler_common.go:169] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-host-proc-sys-kernel\") pod \"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\" (UID: 
\"dc723f4c-6ab0-4f7a-a736-256c6ddc662a\") " Feb 9 19:07:11.975105 kubelet[1947]: I0209 19:07:11.974299 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-hostproc" (OuterVolumeSpecName: "hostproc") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:11.975105 kubelet[1947]: I0209 19:07:11.975026 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:11.975105 kubelet[1947]: I0209 19:07:11.975050 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:11.976595 kubelet[1947]: I0209 19:07:11.975337 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "etc-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:11.976595 kubelet[1947]: W0209 19:07:11.975869 1947 empty_dir.go:525] Warning: Failed to clear quota on /var/lib/kubelet/pods/dc723f4c-6ab0-4f7a-a736-256c6ddc662a/volumes/kubernetes.io~configmap/cilium-config-path: clearQuota called, but quotas disabled Feb 9 19:07:11.978304 kubelet[1947]: I0209 19:07:11.978270 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 9 19:07:11.978413 kubelet[1947]: I0209 19:07:11.978342 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:11.978413 kubelet[1947]: I0209 19:07:11.978366 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:11.978413 kubelet[1947]: I0209 19:07:11.978385 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cni-path" (OuterVolumeSpecName: "cni-path") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "cni-path". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:11.978548 kubelet[1947]: I0209 19:07:11.978428 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:11.978548 kubelet[1947]: I0209 19:07:11.978452 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:11.978740 kubelet[1947]: I0209 19:07:11.978707 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 9 19:07:11.980342 kubelet[1947]: I0209 19:07:11.980318 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:07:11.984992 systemd[1]: var-lib-kubelet-pods-dc723f4c\x2d6ab0\x2d4f7a\x2da736\x2d256c6ddc662a-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 9 19:07:11.987819 systemd[1]: var-lib-kubelet-pods-dc723f4c\x2d6ab0\x2d4f7a\x2da736\x2d256c6ddc662a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhr8pq.mount: Deactivated successfully. Feb 9 19:07:11.987985 systemd[1]: var-lib-kubelet-pods-dc723f4c\x2d6ab0\x2d4f7a\x2da736\x2d256c6ddc662a-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 9 19:07:11.989031 kubelet[1947]: I0209 19:07:11.988965 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 9 19:07:11.989113 kubelet[1947]: I0209 19:07:11.989074 1947 operation_generator.go:900] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-kube-api-access-hr8pq" (OuterVolumeSpecName: "kube-api-access-hr8pq") pod "dc723f4c-6ab0-4f7a-a736-256c6ddc662a" (UID: "dc723f4c-6ab0-4f7a-a736-256c6ddc662a"). InnerVolumeSpecName "kube-api-access-hr8pq". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 9 19:07:12.076398 kubelet[1947]: I0209 19:07:12.076134 1947 reconciler_common.go:295] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-hostproc\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.076398 kubelet[1947]: I0209 19:07:12.076246 1947 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-host-proc-sys-net\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.076398 kubelet[1947]: I0209 19:07:12.076289 1947 reconciler_common.go:295] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-lib-modules\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.076398 kubelet[1947]: I0209 19:07:12.076335 1947 reconciler_common.go:295] "Volume detached for volume \"kube-api-access-hr8pq\" (UniqueName: \"kubernetes.io/projected/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-kube-api-access-hr8pq\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.077134 kubelet[1947]: I0209 19:07:12.077112 1947 reconciler_common.go:295] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-hubble-tls\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.077271 kubelet[1947]: I0209 19:07:12.077259 1947 reconciler_common.go:295] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-clustermesh-secrets\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.077372 kubelet[1947]: I0209 19:07:12.077362 1947 reconciler_common.go:295] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-xtables-lock\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.077461 kubelet[1947]: I0209 
19:07:12.077452 1947 reconciler_common.go:295] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cilium-run\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.077548 kubelet[1947]: I0209 19:07:12.077539 1947 reconciler_common.go:295] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-etc-cni-netd\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.077636 kubelet[1947]: I0209 19:07:12.077627 1947 reconciler_common.go:295] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cilium-config-path\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.077722 kubelet[1947]: I0209 19:07:12.077713 1947 reconciler_common.go:295] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-bpf-maps\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.077839 kubelet[1947]: I0209 19:07:12.077828 1947 reconciler_common.go:295] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cilium-cgroup\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.077937 kubelet[1947]: I0209 19:07:12.077928 1947 reconciler_common.go:295] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-cni-path\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.078029 kubelet[1947]: I0209 19:07:12.078020 1947 reconciler_common.go:295] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/dc723f4c-6ab0-4f7a-a736-256c6ddc662a-host-proc-sys-kernel\") on node \"10.200.8.19\" DevicePath \"\"" Feb 9 19:07:12.300796 kubelet[1947]: I0209 19:07:12.300749 1947 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:07:12.301075 kubelet[1947]: E0209 
19:07:12.300839 1947 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc723f4c-6ab0-4f7a-a736-256c6ddc662a" containerName="mount-bpf-fs" Feb 9 19:07:12.301075 kubelet[1947]: E0209 19:07:12.300855 1947 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc723f4c-6ab0-4f7a-a736-256c6ddc662a" containerName="clean-cilium-state" Feb 9 19:07:12.301075 kubelet[1947]: E0209 19:07:12.300866 1947 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc723f4c-6ab0-4f7a-a736-256c6ddc662a" containerName="mount-cgroup" Feb 9 19:07:12.301075 kubelet[1947]: E0209 19:07:12.300876 1947 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc723f4c-6ab0-4f7a-a736-256c6ddc662a" containerName="apply-sysctl-overwrites" Feb 9 19:07:12.301075 kubelet[1947]: E0209 19:07:12.300886 1947 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="dc723f4c-6ab0-4f7a-a736-256c6ddc662a" containerName="cilium-agent" Feb 9 19:07:12.301075 kubelet[1947]: I0209 19:07:12.300916 1947 memory_manager.go:346] "RemoveStaleState removing state" podUID="dc723f4c-6ab0-4f7a-a736-256c6ddc662a" containerName="cilium-agent" Feb 9 19:07:12.379748 kubelet[1947]: I0209 19:07:12.379704 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnlwm\" (UniqueName: \"kubernetes.io/projected/aafa435c-e0b6-47b4-8fba-40cfecc5f957-kube-api-access-qnlwm\") pod \"cilium-operator-f59cbd8c6-dtmf8\" (UID: \"aafa435c-e0b6-47b4-8fba-40cfecc5f957\") " pod="kube-system/cilium-operator-f59cbd8c6-dtmf8" Feb 9 19:07:12.379999 kubelet[1947]: I0209 19:07:12.379890 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/aafa435c-e0b6-47b4-8fba-40cfecc5f957-cilium-config-path\") pod \"cilium-operator-f59cbd8c6-dtmf8\" (UID: \"aafa435c-e0b6-47b4-8fba-40cfecc5f957\") " 
pod="kube-system/cilium-operator-f59cbd8c6-dtmf8" Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.413753 1377 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.413820 1377 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.413961 1377 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.414451 1377 omaha_request_params.cc:62] Current group set to lts Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.414649 1377 update_attempter.cc:499] Already updated boot flags. Skipping. Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.414655 1377 update_attempter.cc:643] Scheduling an action processor start. Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.414674 1377 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.414703 1377 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.414767 1377 omaha_request_action.cc:270] Posting an Omaha request to disabled Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.414793 1377 omaha_request_action.cc:271] Request: Feb 9 19:07:12.416320 update_engine[1377]: Feb 9 19:07:12.416320 update_engine[1377]: Feb 9 19:07:12.416320 update_engine[1377]: Feb 9 19:07:12.416320 update_engine[1377]: Feb 9 19:07:12.416320 update_engine[1377]: Feb 9 19:07:12.416320 update_engine[1377]: Feb 9 19:07:12.416320 update_engine[1377]: Feb 9 19:07:12.416320 update_engine[1377]: Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.414799 1377 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.416103 1377 
libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:07:12.416320 update_engine[1377]: I0209 19:07:12.416282 1377 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 19:07:12.417387 locksmithd[1445]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Feb 9 19:07:12.462223 kubelet[1947]: E0209 19:07:12.462163 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:12.486183 update_engine[1377]: E0209 19:07:12.486140 1377 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:07:12.486347 update_engine[1377]: I0209 19:07:12.486279 1377 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Feb 9 19:07:12.605033 env[1392]: time="2024-02-09T19:07:12.604974070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-dtmf8,Uid:aafa435c-e0b6-47b4-8fba-40cfecc5f957,Namespace:kube-system,Attempt:0,}" Feb 9 19:07:12.696071 env[1392]: time="2024-02-09T19:07:12.695919579Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:07:12.696071 env[1392]: time="2024-02-09T19:07:12.695966779Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:07:12.696071 env[1392]: time="2024-02-09T19:07:12.695980679Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:07:12.696579 env[1392]: time="2024-02-09T19:07:12.696504681Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/5451667d503db6102caaf3ad1039bbad55fdf70f2f0d24910e8a6eaa3842ea8f pid=3550 runtime=io.containerd.runc.v2 Feb 9 19:07:12.743441 kubelet[1947]: I0209 19:07:12.743409 1947 scope.go:115] "RemoveContainer" containerID="32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a" Feb 9 19:07:12.746924 env[1392]: time="2024-02-09T19:07:12.746748952Z" level=info msg="RemoveContainer for \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\"" Feb 9 19:07:12.756733 env[1392]: time="2024-02-09T19:07:12.756681786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-f59cbd8c6-dtmf8,Uid:aafa435c-e0b6-47b4-8fba-40cfecc5f957,Namespace:kube-system,Attempt:0,} returns sandbox id \"5451667d503db6102caaf3ad1039bbad55fdf70f2f0d24910e8a6eaa3842ea8f\"" Feb 9 19:07:12.759997 env[1392]: time="2024-02-09T19:07:12.759970397Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\"" Feb 9 19:07:12.760465 env[1392]: time="2024-02-09T19:07:12.760444199Z" level=info msg="RemoveContainer for \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\" returns successfully" Feb 9 19:07:12.760765 kubelet[1947]: I0209 19:07:12.760746 1947 scope.go:115] "RemoveContainer" containerID="479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65" Feb 9 19:07:12.764806 env[1392]: time="2024-02-09T19:07:12.762919007Z" level=info msg="RemoveContainer for \"479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65\"" Feb 9 19:07:12.777927 env[1392]: time="2024-02-09T19:07:12.777896158Z" level=info msg="RemoveContainer for \"479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65\" returns successfully" Feb 9 
19:07:12.778072 kubelet[1947]: I0209 19:07:12.778045 1947 scope.go:115] "RemoveContainer" containerID="f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f" Feb 9 19:07:12.779051 env[1392]: time="2024-02-09T19:07:12.779018162Z" level=info msg="RemoveContainer for \"f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f\"" Feb 9 19:07:12.787606 env[1392]: time="2024-02-09T19:07:12.787576891Z" level=info msg="RemoveContainer for \"f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f\" returns successfully" Feb 9 19:07:12.787790 kubelet[1947]: I0209 19:07:12.787754 1947 scope.go:115] "RemoveContainer" containerID="3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1" Feb 9 19:07:12.788680 env[1392]: time="2024-02-09T19:07:12.788660995Z" level=info msg="RemoveContainer for \"3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1\"" Feb 9 19:07:12.794749 env[1392]: time="2024-02-09T19:07:12.794727415Z" level=info msg="RemoveContainer for \"3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1\" returns successfully" Feb 9 19:07:12.794991 kubelet[1947]: I0209 19:07:12.794932 1947 scope.go:115] "RemoveContainer" containerID="4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0" Feb 9 19:07:12.795864 env[1392]: time="2024-02-09T19:07:12.795828119Z" level=info msg="RemoveContainer for \"4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0\"" Feb 9 19:07:12.802311 env[1392]: time="2024-02-09T19:07:12.802278941Z" level=info msg="RemoveContainer for \"4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0\" returns successfully" Feb 9 19:07:12.803796 kubelet[1947]: I0209 19:07:12.802701 1947 scope.go:115] "RemoveContainer" containerID="32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a" Feb 9 19:07:12.808992 env[1392]: time="2024-02-09T19:07:12.808918063Z" level=error msg="ContainerStatus for \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\" 
failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\": not found" Feb 9 19:07:12.809230 kubelet[1947]: E0209 19:07:12.809209 1947 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\": not found" containerID="32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a" Feb 9 19:07:12.809314 kubelet[1947]: I0209 19:07:12.809249 1947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a} err="failed to get container status \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\": rpc error: code = NotFound desc = an error occurred when try to find container \"32fc94efb0c87ad57d2090d122cd31a0bf8c19047bdfdf4fe5f99ad9039e1f5a\": not found" Feb 9 19:07:12.809314 kubelet[1947]: I0209 19:07:12.809265 1947 scope.go:115] "RemoveContainer" containerID="479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65" Feb 9 19:07:12.810965 env[1392]: time="2024-02-09T19:07:12.810907670Z" level=error msg="ContainerStatus for \"479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65\": not found" Feb 9 19:07:12.811171 kubelet[1947]: E0209 19:07:12.811160 1947 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65\": not found" containerID="479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65" Feb 9 19:07:12.811276 kubelet[1947]: I0209 
19:07:12.811269 1947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65} err="failed to get container status \"479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65\": rpc error: code = NotFound desc = an error occurred when try to find container \"479e613285ab2e0ba65b78ae90da276c03ceeb534aaaca34b017e11bb18e5a65\": not found" Feb 9 19:07:12.811359 kubelet[1947]: I0209 19:07:12.811351 1947 scope.go:115] "RemoveContainer" containerID="f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f" Feb 9 19:07:12.811574 env[1392]: time="2024-02-09T19:07:12.811533372Z" level=error msg="ContainerStatus for \"f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f\": not found" Feb 9 19:07:12.811746 kubelet[1947]: E0209 19:07:12.811731 1947 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f\": not found" containerID="f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f" Feb 9 19:07:12.811847 kubelet[1947]: I0209 19:07:12.811760 1947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f} err="failed to get container status \"f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f\": rpc error: code = NotFound desc = an error occurred when try to find container \"f45160322cb47ae45098ef50848b8efe9b04575b68921ada3ef31c2696b99d7f\": not found" Feb 9 19:07:12.811847 kubelet[1947]: I0209 19:07:12.811790 1947 scope.go:115] "RemoveContainer" 
containerID="3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1" Feb 9 19:07:12.812001 env[1392]: time="2024-02-09T19:07:12.811949474Z" level=error msg="ContainerStatus for \"3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1\": not found" Feb 9 19:07:12.812117 kubelet[1947]: E0209 19:07:12.812100 1947 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1\": not found" containerID="3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1" Feb 9 19:07:12.812181 kubelet[1947]: I0209 19:07:12.812130 1947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1} err="failed to get container status \"3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1\": rpc error: code = NotFound desc = an error occurred when try to find container \"3cc0dcada8970de4c9b32c5482e8634b85132968bfe238f44f838dd5ae1f90c1\": not found" Feb 9 19:07:12.812181 kubelet[1947]: I0209 19:07:12.812143 1947 scope.go:115] "RemoveContainer" containerID="4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0" Feb 9 19:07:12.812348 env[1392]: time="2024-02-09T19:07:12.812301975Z" level=error msg="ContainerStatus for \"4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0\": not found" Feb 9 19:07:12.812475 kubelet[1947]: E0209 19:07:12.812465 1947 remote_runtime.go:415] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an 
error occurred when try to find container \"4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0\": not found" containerID="4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0" Feb 9 19:07:12.812569 kubelet[1947]: I0209 19:07:12.812561 1947 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={Type:containerd ID:4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0} err="failed to get container status \"4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0\": rpc error: code = NotFound desc = an error occurred when try to find container \"4c5726b79a070bc4b34481a690b2f5844bee1fabf8b81cf07529caba754278f0\": not found" Feb 9 19:07:13.463095 kubelet[1947]: E0209 19:07:13.463039 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:13.537065 kubelet[1947]: I0209 19:07:13.537018 1947 topology_manager.go:210] "Topology Admit Handler" Feb 9 19:07:13.587142 kubelet[1947]: I0209 19:07:13.587093 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/0f8908a6-8902-4599-b88e-37d7b2177881-host-proc-sys-kernel\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.587461 kubelet[1947]: I0209 19:07:13.587426 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/0f8908a6-8902-4599-b88e-37d7b2177881-bpf-maps\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.587582 kubelet[1947]: I0209 19:07:13.587473 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: 
\"kubernetes.io/host-path/0f8908a6-8902-4599-b88e-37d7b2177881-xtables-lock\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.587582 kubelet[1947]: I0209 19:07:13.587512 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/0f8908a6-8902-4599-b88e-37d7b2177881-etc-cni-netd\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.587582 kubelet[1947]: I0209 19:07:13.587547 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/0f8908a6-8902-4599-b88e-37d7b2177881-hubble-tls\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.587582 kubelet[1947]: I0209 19:07:13.587582 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-89b78\" (UniqueName: \"kubernetes.io/projected/0f8908a6-8902-4599-b88e-37d7b2177881-kube-api-access-89b78\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.587854 kubelet[1947]: I0209 19:07:13.587621 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/0f8908a6-8902-4599-b88e-37d7b2177881-host-proc-sys-net\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.587854 kubelet[1947]: I0209 19:07:13.587657 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/0f8908a6-8902-4599-b88e-37d7b2177881-cilium-run\") pod \"cilium-bds6v\" (UID: 
\"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.587854 kubelet[1947]: I0209 19:07:13.587701 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/0f8908a6-8902-4599-b88e-37d7b2177881-cni-path\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.587854 kubelet[1947]: I0209 19:07:13.587739 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0f8908a6-8902-4599-b88e-37d7b2177881-lib-modules\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.587854 kubelet[1947]: I0209 19:07:13.587823 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/0f8908a6-8902-4599-b88e-37d7b2177881-clustermesh-secrets\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.588126 kubelet[1947]: I0209 19:07:13.587865 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/0f8908a6-8902-4599-b88e-37d7b2177881-hostproc\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.588126 kubelet[1947]: I0209 19:07:13.587910 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/0f8908a6-8902-4599-b88e-37d7b2177881-cilium-cgroup\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.588126 kubelet[1947]: I0209 19:07:13.587946 1947 
reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/0f8908a6-8902-4599-b88e-37d7b2177881-cilium-config-path\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.588126 kubelet[1947]: I0209 19:07:13.587983 1947 reconciler_common.go:253] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/0f8908a6-8902-4599-b88e-37d7b2177881-cilium-ipsec-secrets\") pod \"cilium-bds6v\" (UID: \"0f8908a6-8902-4599-b88e-37d7b2177881\") " pod="kube-system/cilium-bds6v" Feb 9 19:07:13.841660 env[1392]: time="2024-02-09T19:07:13.841520049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bds6v,Uid:0f8908a6-8902-4599-b88e-37d7b2177881,Namespace:kube-system,Attempt:0,}" Feb 9 19:07:13.890639 env[1392]: time="2024-02-09T19:07:13.890564714Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 9 19:07:13.890912 env[1392]: time="2024-02-09T19:07:13.890610714Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 9 19:07:13.890912 env[1392]: time="2024-02-09T19:07:13.890624314Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 9 19:07:13.891115 env[1392]: time="2024-02-09T19:07:13.891047916Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/486767670482caddf08207ba21f5829d440da99e079b5386ea360171dc1e3c03 pid=3598 runtime=io.containerd.runc.v2 Feb 9 19:07:13.941513 env[1392]: time="2024-02-09T19:07:13.941464286Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-bds6v,Uid:0f8908a6-8902-4599-b88e-37d7b2177881,Namespace:kube-system,Attempt:0,} returns sandbox id \"486767670482caddf08207ba21f5829d440da99e079b5386ea360171dc1e3c03\"" Feb 9 19:07:13.944194 env[1392]: time="2024-02-09T19:07:13.944155495Z" level=info msg="CreateContainer within sandbox \"486767670482caddf08207ba21f5829d440da99e079b5386ea360171dc1e3c03\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 9 19:07:13.997013 env[1392]: time="2024-02-09T19:07:13.996953872Z" level=info msg="CreateContainer within sandbox \"486767670482caddf08207ba21f5829d440da99e079b5386ea360171dc1e3c03\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"9c737746efca09b0c35abc0ef01f99549ce37009e8d9b140abf56eaea9f4392a\"" Feb 9 19:07:13.997633 env[1392]: time="2024-02-09T19:07:13.997534974Z" level=info msg="StartContainer for \"9c737746efca09b0c35abc0ef01f99549ce37009e8d9b140abf56eaea9f4392a\"" Feb 9 19:07:14.052576 env[1392]: time="2024-02-09T19:07:14.051587755Z" level=info msg="StartContainer for \"9c737746efca09b0c35abc0ef01f99549ce37009e8d9b140abf56eaea9f4392a\" returns successfully" Feb 9 19:07:14.111028 env[1392]: time="2024-02-09T19:07:14.110967653Z" level=info msg="shim disconnected" id=9c737746efca09b0c35abc0ef01f99549ce37009e8d9b140abf56eaea9f4392a Feb 9 19:07:14.111028 env[1392]: time="2024-02-09T19:07:14.111033953Z" level=warning msg="cleaning up after shim disconnected" id=9c737746efca09b0c35abc0ef01f99549ce37009e8d9b140abf56eaea9f4392a namespace=k8s.io Feb 9 
19:07:14.111379 env[1392]: time="2024-02-09T19:07:14.111046853Z" level=info msg="cleaning up dead shim" Feb 9 19:07:14.119831 env[1392]: time="2024-02-09T19:07:14.119761482Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:14Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3680 runtime=io.containerd.runc.v2\n" Feb 9 19:07:14.463765 kubelet[1947]: E0209 19:07:14.463645 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:14.555752 kubelet[1947]: I0209 19:07:14.555716 1947 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=dc723f4c-6ab0-4f7a-a736-256c6ddc662a path="/var/lib/kubelet/pods/dc723f4c-6ab0-4f7a-a736-256c6ddc662a/volumes" Feb 9 19:07:14.752693 env[1392]: time="2024-02-09T19:07:14.752536893Z" level=info msg="CreateContainer within sandbox \"486767670482caddf08207ba21f5829d440da99e079b5386ea360171dc1e3c03\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 9 19:07:14.782966 env[1392]: time="2024-02-09T19:07:14.782906695Z" level=info msg="CreateContainer within sandbox \"486767670482caddf08207ba21f5829d440da99e079b5386ea360171dc1e3c03\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"fde804c8219b4fc109888f3938de8fbc915f846cdb312f4622a4c1bec33968da\"" Feb 9 19:07:14.783924 env[1392]: time="2024-02-09T19:07:14.783886198Z" level=info msg="StartContainer for \"fde804c8219b4fc109888f3938de8fbc915f846cdb312f4622a4c1bec33968da\"" Feb 9 19:07:14.866343 env[1392]: time="2024-02-09T19:07:14.866290873Z" level=info msg="StartContainer for \"fde804c8219b4fc109888f3938de8fbc915f846cdb312f4622a4c1bec33968da\" returns successfully" Feb 9 19:07:14.903130 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fde804c8219b4fc109888f3938de8fbc915f846cdb312f4622a4c1bec33968da-rootfs.mount: Deactivated successfully. 
Feb 9 19:07:15.172520 env[1392]: time="2024-02-09T19:07:15.172466589Z" level=info msg="shim disconnected" id=fde804c8219b4fc109888f3938de8fbc915f846cdb312f4622a4c1bec33968da Feb 9 19:07:15.172875 env[1392]: time="2024-02-09T19:07:15.172850990Z" level=warning msg="cleaning up after shim disconnected" id=fde804c8219b4fc109888f3938de8fbc915f846cdb312f4622a4c1bec33968da namespace=k8s.io Feb 9 19:07:15.172992 env[1392]: time="2024-02-09T19:07:15.172976691Z" level=info msg="cleaning up dead shim" Feb 9 19:07:15.195619 env[1392]: time="2024-02-09T19:07:15.195564465Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:15Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3738 runtime=io.containerd.runc.v2\n" Feb 9 19:07:15.464247 kubelet[1947]: E0209 19:07:15.464121 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:15.467729 env[1392]: time="2024-02-09T19:07:15.467679764Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:15.473229 env[1392]: time="2024-02-09T19:07:15.473175783Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:15.477404 env[1392]: time="2024-02-09T19:07:15.477358996Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}" Feb 9 19:07:15.477949 env[1392]: time="2024-02-09T19:07:15.477908998Z" level=info msg="PullImage 
\"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\"" Feb 9 19:07:15.480093 env[1392]: time="2024-02-09T19:07:15.480058005Z" level=info msg="CreateContainer within sandbox \"5451667d503db6102caaf3ad1039bbad55fdf70f2f0d24910e8a6eaa3842ea8f\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}" Feb 9 19:07:15.506378 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3890533213.mount: Deactivated successfully. Feb 9 19:07:15.519897 env[1392]: time="2024-02-09T19:07:15.519843937Z" level=info msg="CreateContainer within sandbox \"5451667d503db6102caaf3ad1039bbad55fdf70f2f0d24910e8a6eaa3842ea8f\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"cbfa8fa2b534e1c493463fa4f941d46838d2263ce3f56b644a943ae69b1b4394\"" Feb 9 19:07:15.520648 env[1392]: time="2024-02-09T19:07:15.520612639Z" level=info msg="StartContainer for \"cbfa8fa2b534e1c493463fa4f941d46838d2263ce3f56b644a943ae69b1b4394\"" Feb 9 19:07:15.571765 env[1392]: time="2024-02-09T19:07:15.571665308Z" level=info msg="StartContainer for \"cbfa8fa2b534e1c493463fa4f941d46838d2263ce3f56b644a943ae69b1b4394\" returns successfully" Feb 9 19:07:15.758569 env[1392]: time="2024-02-09T19:07:15.758457325Z" level=info msg="CreateContainer within sandbox \"486767670482caddf08207ba21f5829d440da99e079b5386ea360171dc1e3c03\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 9 19:07:15.798533 env[1392]: time="2024-02-09T19:07:15.798482758Z" level=info msg="CreateContainer within sandbox \"486767670482caddf08207ba21f5829d440da99e079b5386ea360171dc1e3c03\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"807735b452970763dfd52451fe4daae983f57e690b2890063b416e227dc7f682\"" Feb 9 19:07:15.799245 kubelet[1947]: I0209 19:07:15.799217 1947 pod_startup_latency_tracker.go:102] "Observed pod 
startup duration" pod="kube-system/cilium-operator-f59cbd8c6-dtmf8" podStartSLOduration=-9.223372033055595e+09 pod.CreationTimestamp="2024-02-09 19:07:12 +0000 UTC" firstStartedPulling="2024-02-09 19:07:12.759485695 +0000 UTC m=+86.952867026" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:07:15.771532769 +0000 UTC m=+89.964914200" watchObservedRunningTime="2024-02-09 19:07:15.79917986 +0000 UTC m=+89.992561691" Feb 9 19:07:15.800222 env[1392]: time="2024-02-09T19:07:15.800189363Z" level=info msg="StartContainer for \"807735b452970763dfd52451fe4daae983f57e690b2890063b416e227dc7f682\"" Feb 9 19:07:15.862273 env[1392]: time="2024-02-09T19:07:15.862223268Z" level=info msg="StartContainer for \"807735b452970763dfd52451fe4daae983f57e690b2890063b416e227dc7f682\" returns successfully" Feb 9 19:07:16.074718 env[1392]: time="2024-02-09T19:07:16.074551968Z" level=info msg="shim disconnected" id=807735b452970763dfd52451fe4daae983f57e690b2890063b416e227dc7f682 Feb 9 19:07:16.074718 env[1392]: time="2024-02-09T19:07:16.074613568Z" level=warning msg="cleaning up after shim disconnected" id=807735b452970763dfd52451fe4daae983f57e690b2890063b416e227dc7f682 namespace=k8s.io Feb 9 19:07:16.074718 env[1392]: time="2024-02-09T19:07:16.074626668Z" level=info msg="cleaning up dead shim" Feb 9 19:07:16.083544 env[1392]: time="2024-02-09T19:07:16.083483097Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3840 runtime=io.containerd.runc.v2\n" Feb 9 19:07:16.465332 kubelet[1947]: E0209 19:07:16.465276 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:16.500100 kubelet[1947]: E0209 19:07:16.500055 1947 kubelet.go:2475] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 9 
19:07:16.763801 env[1392]: time="2024-02-09T19:07:16.763481423Z" level=info msg="CreateContainer within sandbox \"486767670482caddf08207ba21f5829d440da99e079b5386ea360171dc1e3c03\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 9 19:07:16.823743 env[1392]: time="2024-02-09T19:07:16.823687220Z" level=info msg="CreateContainer within sandbox \"486767670482caddf08207ba21f5829d440da99e079b5386ea360171dc1e3c03\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"9ed0ba3c7fce5a76637e41b6afc7208a065fc25a94fa1fb60834ea1cc3ca1bff\"" Feb 9 19:07:16.824335 env[1392]: time="2024-02-09T19:07:16.824302822Z" level=info msg="StartContainer for \"9ed0ba3c7fce5a76637e41b6afc7208a065fc25a94fa1fb60834ea1cc3ca1bff\"" Feb 9 19:07:16.881346 env[1392]: time="2024-02-09T19:07:16.881294509Z" level=info msg="StartContainer for \"9ed0ba3c7fce5a76637e41b6afc7208a065fc25a94fa1fb60834ea1cc3ca1bff\" returns successfully" Feb 9 19:07:16.897430 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9ed0ba3c7fce5a76637e41b6afc7208a065fc25a94fa1fb60834ea1cc3ca1bff-rootfs.mount: Deactivated successfully. 
Feb 9 19:07:16.905686 env[1392]: time="2024-02-09T19:07:16.905631589Z" level=info msg="shim disconnected" id=9ed0ba3c7fce5a76637e41b6afc7208a065fc25a94fa1fb60834ea1cc3ca1bff Feb 9 19:07:16.905686 env[1392]: time="2024-02-09T19:07:16.905688489Z" level=warning msg="cleaning up after shim disconnected" id=9ed0ba3c7fce5a76637e41b6afc7208a065fc25a94fa1fb60834ea1cc3ca1bff namespace=k8s.io Feb 9 19:07:16.905967 env[1392]: time="2024-02-09T19:07:16.905700189Z" level=info msg="cleaning up dead shim" Feb 9 19:07:16.913643 env[1392]: time="2024-02-09T19:07:16.913604115Z" level=warning msg="cleanup warnings time=\"2024-02-09T19:07:16Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3899 runtime=io.containerd.runc.v2\n" Feb 9 19:07:17.465492 kubelet[1947]: E0209 19:07:17.465420 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:17.769484 env[1392]: time="2024-02-09T19:07:17.769109693Z" level=info msg="CreateContainer within sandbox \"486767670482caddf08207ba21f5829d440da99e079b5386ea360171dc1e3c03\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 9 19:07:17.804751 env[1392]: time="2024-02-09T19:07:17.804686809Z" level=info msg="CreateContainer within sandbox \"486767670482caddf08207ba21f5829d440da99e079b5386ea360171dc1e3c03\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"3ce6125b806e40bb960af2b3dc1f1abbcc500b3a19ee236ecbb2757b53d71e07\"" Feb 9 19:07:17.805612 env[1392]: time="2024-02-09T19:07:17.805529112Z" level=info msg="StartContainer for \"3ce6125b806e40bb960af2b3dc1f1abbcc500b3a19ee236ecbb2757b53d71e07\"" Feb 9 19:07:17.878913 env[1392]: time="2024-02-09T19:07:17.878851249Z" level=info msg="StartContainer for \"3ce6125b806e40bb960af2b3dc1f1abbcc500b3a19ee236ecbb2757b53d71e07\" returns successfully" Feb 9 19:07:17.897591 systemd[1]: 
run-containerd-runc-k8s.io-3ce6125b806e40bb960af2b3dc1f1abbcc500b3a19ee236ecbb2757b53d71e07-runc.DveFw6.mount: Deactivated successfully. Feb 9 19:07:18.182797 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 9 19:07:18.466638 kubelet[1947]: E0209 19:07:18.466485 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:18.791014 kubelet[1947]: I0209 19:07:18.790612 1947 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-bds6v" podStartSLOduration=5.790578985 pod.CreationTimestamp="2024-02-09 19:07:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-09 19:07:18.789544482 +0000 UTC m=+92.982925813" watchObservedRunningTime="2024-02-09 19:07:18.790578985 +0000 UTC m=+92.983960316" Feb 9 19:07:19.467386 kubelet[1947]: E0209 19:07:19.467316 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:20.468070 kubelet[1947]: E0209 19:07:20.468024 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:20.645829 systemd-networkd[1548]: lxc_health: Link UP Feb 9 19:07:20.679309 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 9 19:07:20.679343 systemd-networkd[1548]: lxc_health: Gained carrier Feb 9 19:07:20.892154 systemd[1]: run-containerd-runc-k8s.io-3ce6125b806e40bb960af2b3dc1f1abbcc500b3a19ee236ecbb2757b53d71e07-runc.7qVme0.mount: Deactivated successfully. 
Feb 9 19:07:21.056873 kubelet[1947]: I0209 19:07:21.056117 1947 setters.go:548] "Node became not ready" node="10.200.8.19" condition={Type:Ready Status:False LastHeartbeatTime:2024-02-09 19:07:21.056049485 +0000 UTC m=+95.249430816 LastTransitionTime:2024-02-09 19:07:21.056049485 +0000 UTC m=+95.249430816 Reason:KubeletNotReady Message:container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized} Feb 9 19:07:21.469170 kubelet[1947]: E0209 19:07:21.469106 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:22.205142 systemd-networkd[1548]: lxc_health: Gained IPv6LL Feb 9 19:07:22.392509 update_engine[1377]: I0209 19:07:22.391853 1377 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Feb 9 19:07:22.392509 update_engine[1377]: I0209 19:07:22.392174 1377 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Feb 9 19:07:22.392509 update_engine[1377]: I0209 19:07:22.392450 1377 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Feb 9 19:07:22.408174 update_engine[1377]: E0209 19:07:22.407997 1377 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Feb 9 19:07:22.408174 update_engine[1377]: I0209 19:07:22.408134 1377 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Feb 9 19:07:22.469933 kubelet[1947]: E0209 19:07:22.469793 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:07:23.067959 systemd[1]: run-containerd-runc-k8s.io-3ce6125b806e40bb960af2b3dc1f1abbcc500b3a19ee236ecbb2757b53d71e07-runc.bqwvtt.mount: Deactivated successfully. 
Feb 9 19:07:23.470063 kubelet[1947]: E0209 19:07:23.469951 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:24.471056 kubelet[1947]: E0209 19:07:24.471015 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:25.273473 systemd[1]: run-containerd-runc-k8s.io-3ce6125b806e40bb960af2b3dc1f1abbcc500b3a19ee236ecbb2757b53d71e07-runc.8ewAJC.mount: Deactivated successfully.
Feb 9 19:07:25.471873 kubelet[1947]: E0209 19:07:25.471815 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:26.400850 kubelet[1947]: E0209 19:07:26.400799 1947 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:26.472344 kubelet[1947]: E0209 19:07:26.472280 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:27.472711 kubelet[1947]: E0209 19:07:27.472654 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:28.473909 kubelet[1947]: E0209 19:07:28.473848 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:29.474230 kubelet[1947]: E0209 19:07:29.474159 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:30.475343 kubelet[1947]: E0209 19:07:30.475284 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:31.476217 kubelet[1947]: E0209 19:07:31.476156 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:32.392385 update_engine[1377]: I0209 19:07:32.392294 1377 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:07:32.393048 update_engine[1377]: I0209 19:07:32.392657 1377 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:07:32.393048 update_engine[1377]: I0209 19:07:32.393008 1377 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:07:32.414368 update_engine[1377]: E0209 19:07:32.414305 1377 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:07:32.414580 update_engine[1377]: I0209 19:07:32.414482 1377 libcurl_http_fetcher.cc:283] No HTTP response, retry 3
Feb 9 19:07:32.477120 kubelet[1947]: E0209 19:07:32.477054 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:33.477720 kubelet[1947]: E0209 19:07:33.477654 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:34.478122 kubelet[1947]: E0209 19:07:34.478050 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:35.478837 kubelet[1947]: E0209 19:07:35.478758 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:36.479430 kubelet[1947]: E0209 19:07:36.479357 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:37.480373 kubelet[1947]: E0209 19:07:37.480304 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:38.481466 kubelet[1947]: E0209 19:07:38.481396 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:39.482102 kubelet[1947]: E0209 19:07:39.482034 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:40.482243 kubelet[1947]: E0209 19:07:40.482182 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:41.482481 kubelet[1947]: E0209 19:07:41.482413 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:42.392732 update_engine[1377]: I0209 19:07:42.392627 1377 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:07:42.393388 update_engine[1377]: I0209 19:07:42.393051 1377 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:07:42.393388 update_engine[1377]: I0209 19:07:42.393379 1377 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:07:42.401212 update_engine[1377]: E0209 19:07:42.401168 1377 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:07:42.401354 update_engine[1377]: I0209 19:07:42.401294 1377 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 19:07:42.401354 update_engine[1377]: I0209 19:07:42.401305 1377 omaha_request_action.cc:621] Omaha request response:
Feb 9 19:07:42.401442 update_engine[1377]: E0209 19:07:42.401401 1377 omaha_request_action.cc:640] Omaha request network transfer failed.
Feb 9 19:07:42.401442 update_engine[1377]: I0209 19:07:42.401419 1377 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing.
Feb 9 19:07:42.401442 update_engine[1377]: I0209 19:07:42.401424 1377 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 19:07:42.401442 update_engine[1377]: I0209 19:07:42.401428 1377 update_attempter.cc:306] Processing Done.
Feb 9 19:07:42.401578 update_engine[1377]: E0209 19:07:42.401448 1377 update_attempter.cc:619] Update failed.
Feb 9 19:07:42.401578 update_engine[1377]: I0209 19:07:42.401453 1377 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse
Feb 9 19:07:42.401578 update_engine[1377]: I0209 19:07:42.401458 1377 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse)
Feb 9 19:07:42.401578 update_engine[1377]: I0209 19:07:42.401463 1377 payload_state.cc:103] Ignoring failures until we get a valid Omaha response.
Feb 9 19:07:42.401578 update_engine[1377]: I0209 19:07:42.401552 1377 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction
Feb 9 19:07:42.401578 update_engine[1377]: I0209 19:07:42.401575 1377 omaha_request_action.cc:270] Posting an Omaha request to disabled
Feb 9 19:07:42.402036 update_engine[1377]: I0209 19:07:42.401580 1377 omaha_request_action.cc:271] Request:
Feb 9 19:07:42.402036 update_engine[1377]:
Feb 9 19:07:42.402036 update_engine[1377]:
Feb 9 19:07:42.402036 update_engine[1377]:
Feb 9 19:07:42.402036 update_engine[1377]:
Feb 9 19:07:42.402036 update_engine[1377]:
Feb 9 19:07:42.402036 update_engine[1377]:
Feb 9 19:07:42.402036 update_engine[1377]: I0209 19:07:42.401587 1377 libcurl_http_fetcher.cc:47] Starting/Resuming transfer
Feb 9 19:07:42.402036 update_engine[1377]: I0209 19:07:42.401746 1377 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP
Feb 9 19:07:42.402036 update_engine[1377]: I0209 19:07:42.401942 1377 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds.
Feb 9 19:07:42.402341 locksmithd[1445]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0
Feb 9 19:07:42.407709 update_engine[1377]: E0209 19:07:42.407682 1377 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled
Feb 9 19:07:42.407834 update_engine[1377]: I0209 19:07:42.407791 1377 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded
Feb 9 19:07:42.407834 update_engine[1377]: I0209 19:07:42.407801 1377 omaha_request_action.cc:621] Omaha request response:
Feb 9 19:07:42.407834 update_engine[1377]: I0209 19:07:42.407807 1377 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 19:07:42.407834 update_engine[1377]: I0209 19:07:42.407811 1377 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction
Feb 9 19:07:42.407834 update_engine[1377]: I0209 19:07:42.407815 1377 update_attempter.cc:306] Processing Done.
Feb 9 19:07:42.407834 update_engine[1377]: I0209 19:07:42.407821 1377 update_attempter.cc:310] Error event sent.
Feb 9 19:07:42.407834 update_engine[1377]: I0209 19:07:42.407831 1377 update_check_scheduler.cc:74] Next update check in 49m3s
Feb 9 19:07:42.408208 locksmithd[1445]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0
Feb 9 19:07:42.483264 kubelet[1947]: E0209 19:07:42.483194 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:43.484239 kubelet[1947]: E0209 19:07:43.484170 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:44.484520 kubelet[1947]: E0209 19:07:44.484455 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:45.485303 kubelet[1947]: E0209 19:07:45.485224 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:46.401604 kubelet[1947]: E0209 19:07:46.401511 1947 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:46.418232 env[1392]: time="2024-02-09T19:07:46.418182225Z" level=info msg="StopPodSandbox for \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\""
Feb 9 19:07:46.418791 env[1392]: time="2024-02-09T19:07:46.418713826Z" level=info msg="TearDown network for sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" successfully"
Feb 9 19:07:46.418791 env[1392]: time="2024-02-09T19:07:46.418767527Z" level=info msg="StopPodSandbox for \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" returns successfully"
Feb 9 19:07:46.419301 env[1392]: time="2024-02-09T19:07:46.419268628Z" level=info msg="RemovePodSandbox for \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\""
Feb 9 19:07:46.419409 env[1392]: time="2024-02-09T19:07:46.419305428Z" level=info msg="Forcibly stopping sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\""
Feb 9 19:07:46.419409 env[1392]: time="2024-02-09T19:07:46.419398028Z" level=info msg="TearDown network for sandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" successfully"
Feb 9 19:07:46.429804 env[1392]: time="2024-02-09T19:07:46.429758256Z" level=info msg="RemovePodSandbox \"24c20764114de65233d6a8b8f9cbc5f14de83d5937cc622d0099b4a9dcf90072\" returns successfully"
Feb 9 19:07:46.485795 kubelet[1947]: E0209 19:07:46.485758 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:47.486225 kubelet[1947]: E0209 19:07:47.486118 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:48.486461 kubelet[1947]: E0209 19:07:48.486393 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:49.487311 kubelet[1947]: E0209 19:07:49.487243 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:50.488380 kubelet[1947]: E0209 19:07:50.488309 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:50.945107 kubelet[1947]: E0209 19:07:50.945053 1947 controller.go:189] failed to update lease, error: rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.38:59340->10.200.8.27:2379: read: connection timed out
Feb 9 19:07:51.488737 kubelet[1947]: E0209 19:07:51.488665 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:52.489255 kubelet[1947]: E0209 19:07:52.489184 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:53.489869 kubelet[1947]: E0209 19:07:53.489813 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:54.490226 kubelet[1947]: E0209 19:07:54.490158 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:55.490969 kubelet[1947]: E0209 19:07:55.490900 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:56.491506 kubelet[1947]: E0209 19:07:56.491435 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:57.492618 kubelet[1947]: E0209 19:07:57.492551 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:58.493569 kubelet[1947]: E0209 19:07:58.493501 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:07:59.494519 kubelet[1947]: E0209 19:07:59.494452 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:00.495681 kubelet[1947]: E0209 19:08:00.495612 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:00.945858 kubelet[1947]: E0209 19:08:00.945767 1947 controller.go:189] failed to update lease, error: Put "https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.19?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 19:08:01.496821 kubelet[1947]: E0209 19:08:01.496739 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:02.498018 kubelet[1947]: E0209 19:08:02.497958 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:03.498756 kubelet[1947]: E0209 19:08:03.498693 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:04.499746 kubelet[1947]: E0209 19:08:04.499697 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:05.500574 kubelet[1947]: E0209 19:08:05.500501 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:06.401106 kubelet[1947]: E0209 19:08:06.401053 1947 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:06.500811 kubelet[1947]: E0209 19:08:06.500712 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:07.501436 kubelet[1947]: E0209 19:08:07.501372 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:08.501995 kubelet[1947]: E0209 19:08:08.501950 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:09.502479 kubelet[1947]: E0209 19:08:09.502415 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:10.503167 kubelet[1947]: E0209 19:08:10.503096 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:10.947175 kubelet[1947]: E0209 19:08:10.947020 1947 controller.go:189] failed to update lease, error: Put "https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.19?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 19:08:11.504102 kubelet[1947]: E0209 19:08:11.504030 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:12.504814 kubelet[1947]: E0209 19:08:12.504749 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:13.505460 kubelet[1947]: E0209 19:08:13.505395 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:14.506163 kubelet[1947]: E0209 19:08:14.506093 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:15.506360 kubelet[1947]: E0209 19:08:15.506299 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:16.507358 kubelet[1947]: E0209 19:08:16.507295 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:17.508201 kubelet[1947]: E0209 19:08:17.508136 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:18.508975 kubelet[1947]: E0209 19:08:18.508909 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:19.509870 kubelet[1947]: E0209 19:08:19.509813 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:20.510714 kubelet[1947]: E0209 19:08:20.510643 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:20.947939 kubelet[1947]: E0209 19:08:20.947879 1947 controller.go:189] failed to update lease, error: Put "https://10.200.8.38:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.19?timeout=10s": net/http: request canceled (Client.Timeout exceeded while awaiting headers)
Feb 9 19:08:21.511628 kubelet[1947]: E0209 19:08:21.511555 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:21.975030 kubelet[1947]: E0209 19:08:21.974975 1947 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.19\": Get \"https://10.200.8.38:6443/api/v1/nodes/10.200.8.19?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)"
Feb 9 19:08:22.512701 kubelet[1947]: E0209 19:08:22.512640 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:23.513309 kubelet[1947]: E0209 19:08:23.513239 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:24.513527 kubelet[1947]: E0209 19:08:24.513461 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:25.514591 kubelet[1947]: E0209 19:08:25.514526 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:26.342375 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.354579 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.366526 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.379060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.391860 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.405931 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.406216 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.411719 kubelet[1947]: E0209 19:08:26.411677 1947 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:26.417680 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.417982 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.428347 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.450049 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.450205 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.450354 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.450491 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.450624 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.466572 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.482871 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.483028 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.483159 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.483280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.483411 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.493513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.509527 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.509678 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.509828 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.509961 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.515639 kubelet[1947]: E0209 19:08:26.515557 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:26.528653 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.529152 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.529337 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.539844 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.540221 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.550465 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.566744 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.566941 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.567078 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.567218 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.582492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.618877 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.619070 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.619210 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.619339 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.619471 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.619601 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.619728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.619871 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.620002 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.636887 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.653576 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.653738 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.653891 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.654025 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.654160 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.664882 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.665190 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.675953 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.676817 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.688699 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.688965 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.699788 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.705385 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.705640 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.715345 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.715583 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.725759 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.726029 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.737617 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.743291 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.791395 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.802583 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.802735 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.802894 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.803026 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.803162 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.803288 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.803418 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.803542 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.803683 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.803825 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.803945 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.813405 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.819129 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.819269 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.829369 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.851663 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.851832 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.857362 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.857517 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.857654 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.857801 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.873675 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.909492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.909689 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.909852 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.909992 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.910125 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.910258 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.910391 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.910518 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.910640 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.925096 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.925449 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.925588 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.935647 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.962731 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.962921 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.963058 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.963195 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.963332 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.963459 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.989802 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.990174 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.998936 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:26.999229 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.009094 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.009341 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.018696 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.018979 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.028800 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.034286 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.078494 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.084181 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.084343 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.084479 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.084604 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.084732 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.084887 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.085018 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.085165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.094705 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.135790 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.136032 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.136177 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.136311 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.136445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.136581 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.136715 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.136854 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.136989 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.151664 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.187659 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.187900 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.188043 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.188180 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.188355 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.188492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.188620 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.188747 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.188886 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:27.206136 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9
19:08:27.206488 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.206640 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.218208 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.239250 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.239387 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.239513 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.239644 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.239768 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.254617 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.290463 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.290716 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.290879 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.291008 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.291184 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.291322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 19:08:27.291446 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.291569 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.291696 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.301379 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.312523 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.312796 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.312938 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.334928 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.335239 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.335380 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.345507 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.356820 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.356971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.357107 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.366913 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.408305 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.408546 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.408700 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.408857 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.408986 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.409117 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.409247 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.409373 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.409497 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.425303 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.462384 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.462570 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.462710 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.462875 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.463015 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.463150 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.463280 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.463405 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.463529 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.472839 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.493194 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.513284 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.513432 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.513586 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.513765 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.513947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.514098 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.514355 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.514485 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
19:08:27.516432 kubelet[1947]: E0209 19:08:27.516372 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 9 19:08:27.529095 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.566047 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.566287 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.566435 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.566571 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.566701 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.566842 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.566976 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.567103 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.567231 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.571301 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.587048 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.624815 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.646795 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.646970 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.647116 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.647256 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.647396 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.647516 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.647642 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.647766 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.647909 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.648033 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.648158 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.648286 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.662756 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.663158 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.663299 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
19:08:27.673177 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.695031 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.695257 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.695399 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.695539 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.695671 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.705579 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.716081 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.740545 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.745766 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.745925 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.746038 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.746147 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.746252 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.746357 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 19:08:27.759539 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.759971 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.760273 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.771015 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.810525 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.810718 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.810866 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.810998 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.811128 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.811257 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.811374 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.828738 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.849708 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.849886 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.850020 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.850152 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.850294 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.850431 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.859827 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.870306 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.875449 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.875585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.875715 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.890849 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.906175 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.933643 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.933805 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.933946 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.934076 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.934205 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.934335 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.934466 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.934591 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.949650 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.978713 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.988942 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.989107 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.989246 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.989387 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.989516 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.989632 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.989737 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.989859 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:27.989966 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 
19:08:28.004622 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.042530 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.042744 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.042897 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.043030 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.043170 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.043298 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.043427 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.043555 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.043677 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.058624 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.100434 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.100606 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.100740 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.100885 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 9 19:08:28.101015 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.101134 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.101265 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.101392 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.101519 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.116520 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.143055 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.150516 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.150668 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.150820 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.150947 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.151079 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.151206 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.151337 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.161417 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.172620 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.198122 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.230196 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.230377 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.230520 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.230653 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.230812 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.230945 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.231072 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.231206 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.231337 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.231462 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.231580 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.231697 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 9 19:08:28.241105 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.251636 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.251852 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.251983 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.263466 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.289644 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.331273 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.331541 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.331682 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.331831 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.331968 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.332095 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.332225 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.332365 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.332504 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.332634 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.332785 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.332919 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.333047 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.347354 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.385071 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.385254 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.385389 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.385521 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.385650 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.385768 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.385897 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.386013 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.386118 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.403114 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.433597 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.433751 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.433891 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.434021 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.434146 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.434272 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.434395 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.434536 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.443146 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.443435 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.453688 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.487860 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.504875 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.505040 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.505178 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.505313 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.505441 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.505575 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.505698 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.505851 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.505973 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.506102 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.516791 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.517197 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.517371 kubelet[1947]: E0209 19:08:28.516842 1947 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 9 19:08:28.527630 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.563326 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.563514 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.563654 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.563816 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.563956 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.564084 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.564215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.564342 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.564474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.564605 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.564729 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.573409 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.617709 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.617926 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.618069 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.618206 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.618340 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.618474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.618605 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.618729 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.618869 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
Feb 9 19:08:28.618999 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#285 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001