Feb 8 23:37:56.020890 kernel: Linux version 5.15.148-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 11.3.1_p20221209 p3) 11.3.1 20221209, GNU ld (Gentoo 2.39 p5) 2.39.0) #1 SMP Thu Feb 8 21:14:17 -00 2024
Feb 8 23:37:56.020921 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:37:56.020933 kernel: BIOS-provided physical RAM map:
Feb 8 23:37:56.020942 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable
Feb 8 23:37:56.020951 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved
Feb 8 23:37:56.020961 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable
Feb 8 23:37:56.020974 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved
Feb 8 23:37:56.020984 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data
Feb 8 23:37:56.020994 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS
Feb 8 23:37:56.021003 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable
Feb 8 23:37:56.021013 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable
Feb 8 23:37:56.021023 kernel: printk: bootconsole [earlyser0] enabled
Feb 8 23:37:56.021032 kernel: NX (Execute Disable) protection: active
Feb 8 23:37:56.021042 kernel: efi: EFI v2.70 by Microsoft
Feb 8 23:37:56.021059 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 RNG=0x3ffd1018
Feb 8 23:37:56.021071 kernel: random: crng init done
Feb 8 23:37:56.021082 kernel: SMBIOS 3.1.0 present.
Feb 8 23:37:56.021094 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 07/12/2023
Feb 8 23:37:56.021105 kernel: Hypervisor detected: Microsoft Hyper-V
Feb 8 23:37:56.021117 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2
Feb 8 23:37:56.021129 kernel: Hyper-V Host Build:20348-10.0-1-0.1544
Feb 8 23:37:56.021140 kernel: Hyper-V: Nested features: 0x1e0101
Feb 8 23:37:56.021152 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40
Feb 8 23:37:56.021162 kernel: Hyper-V: Using hypercall for remote TLB flush
Feb 8 23:37:56.021171 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns
Feb 8 23:37:56.021181 kernel: tsc: Marking TSC unstable due to running on Hyper-V
Feb 8 23:37:56.021192 kernel: tsc: Detected 2593.906 MHz processor
Feb 8 23:37:56.021205 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Feb 8 23:37:56.021217 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Feb 8 23:37:56.021227 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000
Feb 8 23:37:56.021239 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Feb 8 23:37:56.021252 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved
Feb 8 23:37:56.021266 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000
Feb 8 23:37:56.021278 kernel: Using GB pages for direct mapping
Feb 8 23:37:56.021291 kernel: Secure boot disabled
Feb 8 23:37:56.021301 kernel: ACPI: Early table checksum verification disabled
Feb 8 23:37:56.021313 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL)
Feb 8 23:37:56.021326 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:37:56.021338 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:37:56.021351 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000)
Feb 8 23:37:56.021372 kernel: ACPI: FACS 0x000000003FFFE000 000040
Feb 8 23:37:56.021383 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:37:56.021396 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:37:56.021408 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:37:56.021421 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:37:56.021433 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:37:56.021446 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:37:56.021469 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001)
Feb 8 23:37:56.021480 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113]
Feb 8 23:37:56.021491 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183]
Feb 8 23:37:56.021502 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f]
Feb 8 23:37:56.021513 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063]
Feb 8 23:37:56.021525 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f]
Feb 8 23:37:56.021537 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027]
Feb 8 23:37:56.021553 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057]
Feb 8 23:37:56.021566 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf]
Feb 8 23:37:56.021578 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037]
Feb 8 23:37:56.021591 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033]
Feb 8 23:37:56.021604 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0
Feb 8 23:37:56.021616 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0
Feb 8 23:37:56.021629 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug
Feb 8 23:37:56.021642 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug
Feb 8 23:37:56.021654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug
Feb 8 23:37:56.021670 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug
Feb 8 23:37:56.021683 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug
Feb 8 23:37:56.021695 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug
Feb 8 23:37:56.021708 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug
Feb 8 23:37:56.021720 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug
Feb 8 23:37:56.021732 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug
Feb 8 23:37:56.021744 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug
Feb 8 23:37:56.021757 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug
Feb 8 23:37:56.021770 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug
Feb 8 23:37:56.021786 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug
Feb 8 23:37:56.021799 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug
Feb 8 23:37:56.021812 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug
Feb 8 23:37:56.021824 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug
Feb 8 23:37:56.021838 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff]
Feb 8 23:37:56.021850 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff]
Feb 8 23:37:56.021863 kernel: Zone ranges:
Feb 8 23:37:56.021876 kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Feb 8 23:37:56.021889 kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Feb 8 23:37:56.021904 kernel:   Normal   [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:37:56.021916 kernel: Movable zone start for each node
Feb 8 23:37:56.021929 kernel: Early memory node ranges
Feb 8 23:37:56.021942 kernel:   node   0: [mem 0x0000000000001000-0x000000000009ffff]
Feb 8 23:37:56.021955 kernel:   node   0: [mem 0x0000000000100000-0x000000003ff40fff]
Feb 8 23:37:56.021967 kernel:   node   0: [mem 0x000000003ffff000-0x000000003fffffff]
Feb 8 23:37:56.021980 kernel:   node   0: [mem 0x0000000100000000-0x00000002bfffffff]
Feb 8 23:37:56.021993 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff]
Feb 8 23:37:56.022006 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Feb 8 23:37:56.022021 kernel: On node 0, zone DMA: 96 pages in unavailable ranges
Feb 8 23:37:56.022033 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges
Feb 8 23:37:56.022046 kernel: ACPI: PM-Timer IO Port: 0x408
Feb 8 23:37:56.022058 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1])
Feb 8 23:37:56.022071 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23
Feb 8 23:37:56.022084 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Feb 8 23:37:56.022096 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Feb 8 23:37:56.022109 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200
Feb 8 23:37:56.022122 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs
Feb 8 23:37:56.022137 kernel: [mem 0x40000000-0xffffffff] available for PCI devices
Feb 8 23:37:56.022150 kernel: Booting paravirtualized kernel on Hyper-V
Feb 8 23:37:56.022163 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Feb 8 23:37:56.022176 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:512 nr_cpu_ids:2 nr_node_ids:1
Feb 8 23:37:56.022189 kernel: percpu: Embedded 55 pages/cpu s185624 r8192 d31464 u1048576
Feb 8 23:37:56.022202 kernel: pcpu-alloc: s185624 r8192 d31464 u1048576 alloc=1*2097152
Feb 8 23:37:56.022215 kernel: pcpu-alloc: [0] 0 1
Feb 8 23:37:56.022227 kernel: Hyper-V: PV spinlocks enabled
Feb 8 23:37:56.022239 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Feb 8 23:37:56.022255 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618
Feb 8 23:37:56.022268 kernel: Policy zone: Normal
Feb 8 23:37:56.022282 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:37:56.022296 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Feb 8 23:37:56.022308 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Feb 8 23:37:56.022321 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Feb 8 23:37:56.022334 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Feb 8 23:37:56.022347 kernel: Memory: 8081200K/8387460K available (12294K kernel code, 2275K rwdata, 13700K rodata, 45496K init, 4048K bss, 306000K reserved, 0K cma-reserved)
Feb 8 23:37:56.022363 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Feb 8 23:37:56.022376 kernel: ftrace: allocating 34475 entries in 135 pages
Feb 8 23:37:56.022398 kernel: ftrace: allocated 135 pages with 4 groups
Feb 8 23:37:56.022415 kernel: rcu: Hierarchical RCU implementation.
Feb 8 23:37:56.022429 kernel: rcu: RCU event tracing is enabled.
Feb 8 23:37:56.022442 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Feb 8 23:37:56.025496 kernel: Rude variant of Tasks RCU enabled.
Feb 8 23:37:56.025516 kernel: Tracing variant of Tasks RCU enabled.
Feb 8 23:37:56.025530 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Feb 8 23:37:56.025544 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Feb 8 23:37:56.025558 kernel: Using NULL legacy PIC
Feb 8 23:37:56.025575 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0
Feb 8 23:37:56.025589 kernel: Console: colour dummy device 80x25
Feb 8 23:37:56.025602 kernel: printk: console [tty1] enabled
Feb 8 23:37:56.025616 kernel: printk: console [ttyS0] enabled
Feb 8 23:37:56.025629 kernel: printk: bootconsole [earlyser0] disabled
Feb 8 23:37:56.025644 kernel: ACPI: Core revision 20210730
Feb 8 23:37:56.025658 kernel: Failed to register legacy timer interrupt
Feb 8 23:37:56.025671 kernel: APIC: Switch to symmetric I/O mode setup
Feb 8 23:37:56.025684 kernel: Hyper-V: Using IPI hypercalls
Feb 8 23:37:56.025697 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593906)
Feb 8 23:37:56.025711 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8
Feb 8 23:37:56.025724 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4
Feb 8 23:37:56.025738 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Feb 8 23:37:56.025751 kernel: Spectre V2 : Mitigation: Retpolines
Feb 8 23:37:56.025764 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch
Feb 8 23:37:56.025779 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT
Feb 8 23:37:56.025793 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
Feb 8 23:37:56.025806 kernel: RETBleed: Vulnerable
Feb 8 23:37:56.025819 kernel: Speculative Store Bypass: Vulnerable
Feb 8 23:37:56.025832 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:37:56.025844 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode
Feb 8 23:37:56.025857 kernel: GDS: Unknown: Dependent on hypervisor status
Feb 8 23:37:56.025870 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Feb 8 23:37:56.025883 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Feb 8 23:37:56.025896 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Feb 8 23:37:56.025912 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask'
Feb 8 23:37:56.025925 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256'
Feb 8 23:37:56.025938 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256'
Feb 8 23:37:56.025951 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Feb 8 23:37:56.025964 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64
Feb 8 23:37:56.025977 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512
Feb 8 23:37:56.025990 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024
Feb 8 23:37:56.026003 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format.
Feb 8 23:37:56.026016 kernel: Freeing SMP alternatives memory: 32K
Feb 8 23:37:56.026029 kernel: pid_max: default: 32768 minimum: 301
Feb 8 23:37:56.026042 kernel: LSM: Security Framework initializing
Feb 8 23:37:56.026055 kernel: SELinux: Initializing.
Feb 8 23:37:56.026070 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:37:56.026083 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Feb 8 23:37:56.026097 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7)
Feb 8 23:37:56.026110 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only.
Feb 8 23:37:56.026124 kernel: signal: max sigframe size: 3632
Feb 8 23:37:56.026137 kernel: rcu: Hierarchical SRCU implementation.
Feb 8 23:37:56.026150 kernel: NMI watchdog: Perf NMI watchdog permanently disabled
Feb 8 23:37:56.026163 kernel: smp: Bringing up secondary CPUs ...
Feb 8 23:37:56.026177 kernel: x86: Booting SMP configuration:
Feb 8 23:37:56.026190 kernel: .... node #0, CPUs: #1
Feb 8 23:37:56.026206 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details.
Feb 8 23:37:56.026220 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details.
Feb 8 23:37:56.026233 kernel: smp: Brought up 1 node, 2 CPUs
Feb 8 23:37:56.026246 kernel: smpboot: Max logical packages: 1
Feb 8 23:37:56.026260 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS)
Feb 8 23:37:56.026273 kernel: devtmpfs: initialized
Feb 8 23:37:56.026286 kernel: x86/mm: Memory block size: 128MB
Feb 8 23:37:56.026299 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes)
Feb 8 23:37:56.026315 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Feb 8 23:37:56.026329 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Feb 8 23:37:56.026342 kernel: pinctrl core: initialized pinctrl subsystem
Feb 8 23:37:56.026355 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Feb 8 23:37:56.026368 kernel: audit: initializing netlink subsys (disabled)
Feb 8 23:37:56.026381 kernel: audit: type=2000 audit(1707435474.023:1): state=initialized audit_enabled=0 res=1
Feb 8 23:37:56.026394 kernel: thermal_sys: Registered thermal governor 'step_wise'
Feb 8 23:37:56.026407 kernel: thermal_sys: Registered thermal governor 'user_space'
Feb 8 23:37:56.026421 kernel: cpuidle: using governor menu
Feb 8 23:37:56.026437 kernel: ACPI: bus type PCI registered
Feb 8 23:37:56.026450 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Feb 8 23:37:56.026477 kernel: dca service started, version 1.12.1
Feb 8 23:37:56.026490 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Feb 8 23:37:56.026503 kernel: HugeTLB registered 1.00 GiB page size, pre-allocated 0 pages
Feb 8 23:37:56.026517 kernel: HugeTLB registered 2.00 MiB page size, pre-allocated 0 pages
Feb 8 23:37:56.026530 kernel: ACPI: Added _OSI(Module Device)
Feb 8 23:37:56.026543 kernel: ACPI: Added _OSI(Processor Device)
Feb 8 23:37:56.026556 kernel: ACPI: Added _OSI(3.0 _SCP Extensions)
Feb 8 23:37:56.026572 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Feb 8 23:37:56.026586 kernel: ACPI: Added _OSI(Linux-Dell-Video)
Feb 8 23:37:56.026599 kernel: ACPI: Added _OSI(Linux-Lenovo-NV-HDMI-Audio)
Feb 8 23:37:56.026612 kernel: ACPI: Added _OSI(Linux-HPI-Hybrid-Graphics)
Feb 8 23:37:56.026625 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Feb 8 23:37:56.026638 kernel: ACPI: Interpreter enabled
Feb 8 23:37:56.026651 kernel: ACPI: PM: (supports S0 S5)
Feb 8 23:37:56.026664 kernel: ACPI: Using IOAPIC for interrupt routing
Feb 8 23:37:56.026677 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Feb 8 23:37:56.026693 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F
Feb 8 23:37:56.026706 kernel: iommu: Default domain type: Translated
Feb 8 23:37:56.026719 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Feb 8 23:37:56.026733 kernel: vgaarb: loaded
Feb 8 23:37:56.026746 kernel: pps_core: LinuxPPS API ver. 1 registered
Feb 8 23:37:56.026759 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti
Feb 8 23:37:56.026773 kernel: PTP clock support registered
Feb 8 23:37:56.026786 kernel: Registered efivars operations
Feb 8 23:37:56.026799 kernel: PCI: Using ACPI for IRQ routing
Feb 8 23:37:56.026812 kernel: PCI: System does not support PCI
Feb 8 23:37:56.026828 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page
Feb 8 23:37:56.026841 kernel: VFS: Disk quotas dquot_6.6.0
Feb 8 23:37:56.026854 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Feb 8 23:37:56.026868 kernel: pnp: PnP ACPI init
Feb 8 23:37:56.026881 kernel: pnp: PnP ACPI: found 3 devices
Feb 8 23:37:56.026894 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Feb 8 23:37:56.026907 kernel: NET: Registered PF_INET protocol family
Feb 8 23:37:56.026921 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Feb 8 23:37:56.026936 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Feb 8 23:37:56.026949 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Feb 8 23:37:56.026963 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Feb 8 23:37:56.026976 kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Feb 8 23:37:56.026989 kernel: TCP: Hash tables configured (established 65536 bind 65536)
Feb 8 23:37:56.027002 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:37:56.027016 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Feb 8 23:37:56.027029 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Feb 8 23:37:56.027042 kernel: NET: Registered PF_XDP protocol family
Feb 8 23:37:56.027058 kernel: PCI: CLS 0 bytes, default 64
Feb 8 23:37:56.027071 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Feb 8 23:37:56.027085 kernel: software IO TLB: mapped [mem 0x000000003a8ad000-0x000000003e8ad000] (64MB)
Feb 8 23:37:56.027098 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer
Feb 8 23:37:56.027111 kernel: Initialise system trusted keyrings
Feb 8 23:37:56.027124 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0
Feb 8 23:37:56.027137 kernel: Key type asymmetric registered
Feb 8 23:37:56.027150 kernel: Asymmetric key parser 'x509' registered
Feb 8 23:37:56.027163 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Feb 8 23:37:56.027178 kernel: io scheduler mq-deadline registered
Feb 8 23:37:56.027191 kernel: io scheduler kyber registered
Feb 8 23:37:56.027204 kernel: io scheduler bfq registered
Feb 8 23:37:56.027218 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Feb 8 23:37:56.027231 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Feb 8 23:37:56.027244 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Feb 8 23:37:56.027258 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A
Feb 8 23:37:56.027271 kernel: i8042: PNP: No PS/2 controller found.
Feb 8 23:37:56.027435 kernel: rtc_cmos 00:02: registered as rtc0
Feb 8 23:37:56.027560 kernel: rtc_cmos 00:02: setting system clock to 2024-02-08T23:37:55 UTC (1707435475)
Feb 8 23:37:56.027665 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram
Feb 8 23:37:56.027681 kernel: fail to initialize ptp_kvm
Feb 8 23:37:56.027695 kernel: intel_pstate: CPU model not supported
Feb 8 23:37:56.027708 kernel: efifb: probing for efifb
Feb 8 23:37:56.027722 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k
Feb 8 23:37:56.027735 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1
Feb 8 23:37:56.027748 kernel: efifb: scrolling: redraw
Feb 8 23:37:56.027765 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0
Feb 8 23:37:56.027778 kernel: Console: switching to colour frame buffer device 128x48
Feb 8 23:37:56.027791 kernel: fb0: EFI VGA frame buffer device
Feb 8 23:37:56.027804 kernel: pstore: Registered efi as persistent store backend
Feb 8 23:37:56.027821 kernel: NET: Registered PF_INET6 protocol family
Feb 8 23:37:56.027835 kernel: Segment Routing with IPv6
Feb 8 23:37:56.027848 kernel: In-situ OAM (IOAM) with IPv6
Feb 8 23:37:56.027861 kernel: NET: Registered PF_PACKET protocol family
Feb 8 23:37:56.027874 kernel: Key type dns_resolver registered
Feb 8 23:37:56.027890 kernel: IPI shorthand broadcast: enabled
Feb 8 23:37:56.027903 kernel: sched_clock: Marking stable (695445800, 19987700)->(877353700, -161920200)
Feb 8 23:37:56.027916 kernel: registered taskstats version 1
Feb 8 23:37:56.027929 kernel: Loading compiled-in X.509 certificates
Feb 8 23:37:56.027942 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 5.15.148-flatcar: e9d857ae0e8100c174221878afd1046acbb054a6'
Feb 8 23:37:56.027956 kernel: Key type .fscrypt registered
Feb 8 23:37:56.027969 kernel: Key type fscrypt-provisioning registered
Feb 8 23:37:56.027982 kernel: pstore: Using crash dump compression: deflate
Feb 8 23:37:56.027998 kernel: ima: No TPM chip found, activating TPM-bypass!
Feb 8 23:37:56.028011 kernel: ima: Allocated hash algorithm: sha1
Feb 8 23:37:56.028024 kernel: ima: No architecture policies found
Feb 8 23:37:56.028037 kernel: Freeing unused kernel image (initmem) memory: 45496K
Feb 8 23:37:56.028050 kernel: Write protecting the kernel read-only data: 28672k
Feb 8 23:37:56.028063 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K
Feb 8 23:37:56.028077 kernel: Freeing unused kernel image (rodata/data gap) memory: 636K
Feb 8 23:37:56.028090 kernel: Run /init as init process
Feb 8 23:37:56.028103 kernel: with arguments:
Feb 8 23:37:56.028116 kernel: /init
Feb 8 23:37:56.028132 kernel: with environment:
Feb 8 23:37:56.028145 kernel: HOME=/
Feb 8 23:37:56.028158 kernel: TERM=linux
Feb 8 23:37:56.028170 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Feb 8 23:37:56.028187 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Feb 8 23:37:56.028203 systemd[1]: Detected virtualization microsoft.
Feb 8 23:37:56.028218 systemd[1]: Detected architecture x86-64.
Feb 8 23:37:56.028233 systemd[1]: Running in initrd.
Feb 8 23:37:56.028247 systemd[1]: No hostname configured, using default hostname.
Feb 8 23:37:56.028260 systemd[1]: Hostname set to .
Feb 8 23:37:56.028275 systemd[1]: Initializing machine ID from random generator.
Feb 8 23:37:56.028289 systemd[1]: Queued start job for default target initrd.target.
Feb 8 23:37:56.028303 systemd[1]: Started systemd-ask-password-console.path.
Feb 8 23:37:56.028316 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:37:56.028330 systemd[1]: Reached target paths.target.
Feb 8 23:37:56.028344 systemd[1]: Reached target slices.target.
Feb 8 23:37:56.028360 systemd[1]: Reached target swap.target.
Feb 8 23:37:56.028373 systemd[1]: Reached target timers.target.
Feb 8 23:37:56.028388 systemd[1]: Listening on iscsid.socket.
Feb 8 23:37:56.028401 systemd[1]: Listening on iscsiuio.socket.
Feb 8 23:37:56.028416 systemd[1]: Listening on systemd-journald-audit.socket.
Feb 8 23:37:56.028430 systemd[1]: Listening on systemd-journald-dev-log.socket.
Feb 8 23:37:56.028444 systemd[1]: Listening on systemd-journald.socket.
Feb 8 23:37:56.029508 systemd[1]: Listening on systemd-networkd.socket.
Feb 8 23:37:56.029527 systemd[1]: Listening on systemd-udevd-control.socket.
Feb 8 23:37:56.029541 systemd[1]: Listening on systemd-udevd-kernel.socket.
Feb 8 23:37:56.029556 systemd[1]: Reached target sockets.target.
Feb 8 23:37:56.029570 systemd[1]: Starting kmod-static-nodes.service...
Feb 8 23:37:56.029584 systemd[1]: Finished network-cleanup.service.
Feb 8 23:37:56.029598 systemd[1]: Starting systemd-fsck-usr.service...
Feb 8 23:37:56.029612 systemd[1]: Starting systemd-journald.service...
Feb 8 23:37:56.029626 systemd[1]: Starting systemd-modules-load.service...
Feb 8 23:37:56.029644 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:37:56.029657 systemd[1]: Starting systemd-vconsole-setup.service...
Feb 8 23:37:56.029675 systemd-journald[183]: Journal started
Feb 8 23:37:56.029741 systemd-journald[183]: Runtime Journal (/run/log/journal/bdc783907de74b83bb0d2d0bcccff36c) is 8.0M, max 159.0M, 151.0M free.
Feb 8 23:37:56.018494 systemd-modules-load[184]: Inserted module 'overlay'
Feb 8 23:37:56.038479 systemd[1]: Finished kmod-static-nodes.service.
Feb 8 23:37:56.045000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:56.059148 kernel: audit: type=1130 audit(1707435476.045:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:56.059176 systemd[1]: Started systemd-journald.service.
Feb 8 23:37:56.059190 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Feb 8 23:37:56.069056 kernel: Bridge firewalling registered
Feb 8 23:37:56.069044 systemd[1]: Finished systemd-fsck-usr.service.
Feb 8 23:37:56.086583 kernel: audit: type=1130 audit(1707435476.068:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:56.068000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:56.082724 systemd-modules-load[184]: Inserted module 'br_netfilter'
Feb 8 23:37:56.089242 systemd[1]: Finished systemd-vconsole-setup.service.
Feb 8 23:37:56.094710 systemd[1]: Starting dracut-cmdline-ask.service...
Feb 8 23:37:56.099624 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:37:56.105657 kernel: SCSI subsystem initialized
Feb 8 23:37:56.115705 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:37:56.144638 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Feb 8 23:37:56.144697 kernel: audit: type=1130 audit(1707435476.088:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:56.144709 kernel: device-mapper: uevent: version 1.0.3
Feb 8 23:37:56.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:37:56.149414 systemd[1]: Finished dracut-cmdline-ask.service.
Feb 8 23:37:56.157987 kernel: device-mapper: ioctl: 4.45.0-ioctl (2021-03-22) initialised: dm-devel@redhat.com
Feb 8 23:37:56.151057 systemd-resolved[185]: Positive Trust Anchors:
Feb 8 23:37:56.151067 systemd-resolved[185]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:37:56.151102 systemd-resolved[185]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:37:56.158922 systemd[1]: Starting dracut-cmdline.service...
Feb 8 23:37:56.178990 dracut-cmdline[200]: dracut-dracut-053
Feb 8 23:37:56.180828 dracut-cmdline[200]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=ae7db544026ede4699ee2036449b75950d3fb7929b25a6731d0ad396f1aa37c9
Feb 8 23:37:56.193695 systemd-resolved[185]: Defaulting to hostname 'linux'.
Feb 8 23:37:56.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.197338 systemd[1]: Started systemd-resolved.service. Feb 8 23:37:56.224126 kernel: audit: type=1130 audit(1707435476.093:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.224155 kernel: audit: type=1130 audit(1707435476.117:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.117000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.210517 systemd[1]: Reached target nss-lookup.target. Feb 8 23:37:56.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.209000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.246235 kernel: audit: type=1130 audit(1707435476.157:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.246266 kernel: audit: type=1130 audit(1707435476.209:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:37:56.246547 systemd-modules-load[184]: Inserted module 'dm_multipath' Feb 8 23:37:56.247361 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:37:56.254016 systemd[1]: Starting systemd-sysctl.service... Feb 8 23:37:56.267826 kernel: audit: type=1130 audit(1707435476.252:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.275679 systemd[1]: Finished systemd-sysctl.service. Feb 8 23:37:56.277000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.288491 kernel: audit: type=1130 audit(1707435476.277:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.324475 kernel: Loading iSCSI transport class v2.0-870. Feb 8 23:37:56.337480 kernel: iscsi: registered transport (tcp) Feb 8 23:37:56.361600 kernel: iscsi: registered transport (qla4xxx) Feb 8 23:37:56.361663 kernel: QLogic iSCSI HBA Driver Feb 8 23:37:56.390523 systemd[1]: Finished dracut-cmdline.service. Feb 8 23:37:56.392000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.393482 systemd[1]: Starting dracut-pre-udev.service... 
Feb 8 23:37:56.444491 kernel: raid6: avx512x4 gen() 18648 MB/s Feb 8 23:37:56.463474 kernel: raid6: avx512x4 xor() 8206 MB/s Feb 8 23:37:56.482479 kernel: raid6: avx512x2 gen() 18807 MB/s Feb 8 23:37:56.502470 kernel: raid6: avx512x2 xor() 29588 MB/s Feb 8 23:37:56.522465 kernel: raid6: avx512x1 gen() 18818 MB/s Feb 8 23:37:56.542465 kernel: raid6: avx512x1 xor() 26975 MB/s Feb 8 23:37:56.562470 kernel: raid6: avx2x4 gen() 18744 MB/s Feb 8 23:37:56.582468 kernel: raid6: avx2x4 xor() 8074 MB/s Feb 8 23:37:56.602468 kernel: raid6: avx2x2 gen() 18796 MB/s Feb 8 23:37:56.622472 kernel: raid6: avx2x2 xor() 22275 MB/s Feb 8 23:37:56.642468 kernel: raid6: avx2x1 gen() 14187 MB/s Feb 8 23:37:56.661470 kernel: raid6: avx2x1 xor() 19338 MB/s Feb 8 23:37:56.681469 kernel: raid6: sse2x4 gen() 11752 MB/s Feb 8 23:37:56.701467 kernel: raid6: sse2x4 xor() 7287 MB/s Feb 8 23:37:56.720464 kernel: raid6: sse2x2 gen() 12955 MB/s Feb 8 23:37:56.740467 kernel: raid6: sse2x2 xor() 7546 MB/s Feb 8 23:37:56.759468 kernel: raid6: sse2x1 gen() 11632 MB/s Feb 8 23:37:56.781942 kernel: raid6: sse2x1 xor() 5932 MB/s Feb 8 23:37:56.781995 kernel: raid6: using algorithm avx512x1 gen() 18818 MB/s Feb 8 23:37:56.782013 kernel: raid6: .... xor() 26975 MB/s, rmw enabled Feb 8 23:37:56.785051 kernel: raid6: using avx512x2 recovery algorithm Feb 8 23:37:56.804473 kernel: xor: automatically using best checksumming function avx Feb 8 23:37:56.899482 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Feb 8 23:37:56.907650 systemd[1]: Finished dracut-pre-udev.service. Feb 8 23:37:56.911000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.911000 audit: BPF prog-id=7 op=LOAD Feb 8 23:37:56.911000 audit: BPF prog-id=8 op=LOAD Feb 8 23:37:56.912517 systemd[1]: Starting systemd-udevd.service... 
Feb 8 23:37:56.925927 systemd-udevd[383]: Using default interface naming scheme 'v252'. Feb 8 23:37:56.930554 systemd[1]: Started systemd-udevd.service. Feb 8 23:37:56.937000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.940323 systemd[1]: Starting dracut-pre-trigger.service... Feb 8 23:37:56.955201 dracut-pre-trigger[404]: rd.md=0: removing MD RAID activation Feb 8 23:37:56.985538 systemd[1]: Finished dracut-pre-trigger.service. Feb 8 23:37:56.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:56.988417 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:37:57.022118 systemd[1]: Finished systemd-udev-trigger.service. Feb 8 23:37:57.030000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:57.076476 kernel: cryptd: max_cpu_qlen set to 1000 Feb 8 23:37:57.093474 kernel: hv_vmbus: Vmbus version:5.2 Feb 8 23:37:57.117208 kernel: AVX2 version of gcm_enc/dec engaged. 
Feb 8 23:37:57.117258 kernel: AES CTR mode by8 optimization enabled Feb 8 23:37:57.132471 kernel: hv_vmbus: registering driver hv_storvsc Feb 8 23:37:57.139905 kernel: hv_vmbus: registering driver hyperv_keyboard Feb 8 23:37:57.139978 kernel: hid: raw HID events driver (C) Jiri Kosina Feb 8 23:37:57.140471 kernel: scsi host0: storvsc_host_t Feb 8 23:37:57.153444 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Feb 8 23:37:57.153485 kernel: hv_vmbus: registering driver hv_netvsc Feb 8 23:37:57.153498 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Feb 8 23:37:57.163467 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Feb 8 23:37:57.163522 kernel: scsi host1: storvsc_host_t Feb 8 23:37:57.180470 kernel: hv_vmbus: registering driver hid_hyperv Feb 8 23:37:57.190870 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Feb 8 23:37:57.190904 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Feb 8 23:37:57.191082 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Feb 8 23:37:57.191195 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Feb 8 23:37:57.198829 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Feb 8 23:37:57.217560 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Feb 8 23:37:57.217787 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Feb 8 23:37:57.221257 kernel: sd 0:0:0:0: [sda] Write Protect is off Feb 8 23:37:57.221485 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Feb 8 23:37:57.226473 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Feb 8 23:37:57.231470 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:37:57.236472 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Feb 8 23:37:57.334851 kernel: hv_netvsc 0022489b-2d15-0022-489b-2d150022489b eth0: 
VF slot 1 added Feb 8 23:37:57.345534 kernel: hv_vmbus: registering driver hv_pci Feb 8 23:37:57.345582 kernel: hv_pci bcbd9afb-49ac-42a0-9689-3e13a067ac27: PCI VMBus probing: Using version 0x10004 Feb 8 23:37:57.361013 kernel: hv_pci bcbd9afb-49ac-42a0-9689-3e13a067ac27: PCI host bridge to bus 49ac:00 Feb 8 23:37:57.361191 kernel: pci_bus 49ac:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Feb 8 23:37:57.361306 kernel: pci_bus 49ac:00: No busn resource found for root bus, will use [bus 00-ff] Feb 8 23:37:57.372507 kernel: pci 49ac:00:02.0: [15b3:1016] type 00 class 0x020000 Feb 8 23:37:57.381401 kernel: pci 49ac:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:37:57.396474 kernel: pci 49ac:00:02.0: enabling Extended Tags Feb 8 23:37:57.408487 kernel: pci 49ac:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 49ac:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Feb 8 23:37:57.416667 kernel: pci_bus 49ac:00: busn_res: [bus 00-ff] end is updated to 00 Feb 8 23:37:57.416836 kernel: pci 49ac:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Feb 8 23:37:57.506475 kernel: mlx5_core 49ac:00:02.0: firmware version: 14.30.1350 Feb 8 23:37:57.664477 kernel: mlx5_core 49ac:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Feb 8 23:37:57.695246 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device. Feb 8 23:37:57.782475 kernel: BTRFS: device label OEM devid 1 transid 12 /dev/sda6 scanned by (udev-worker) (441) Feb 8 23:37:57.795829 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device. 
Feb 8 23:37:57.822836 kernel: mlx5_core 49ac:00:02.0: Supported tc offload range - chains: 1, prios: 1 Feb 8 23:37:57.823024 kernel: mlx5_core 49ac:00:02.0: mlx5e_tc_post_act_init:40:(pid 16): firmware level support is missing Feb 8 23:37:57.833921 kernel: hv_netvsc 0022489b-2d15-0022-489b-2d150022489b eth0: VF registering: eth1 Feb 8 23:37:57.834074 kernel: mlx5_core 49ac:00:02.0 eth1: joined to eth0 Feb 8 23:37:57.846472 kernel: mlx5_core 49ac:00:02.0 enP18860s1: renamed from eth1 Feb 8 23:37:57.896612 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device. Feb 8 23:37:57.903794 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device. Feb 8 23:37:57.912701 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device. Feb 8 23:37:57.915584 systemd[1]: Starting disk-uuid.service... Feb 8 23:37:58.935486 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Feb 8 23:37:58.936340 disk-uuid[559]: The operation has completed successfully. Feb 8 23:37:59.022716 systemd[1]: disk-uuid.service: Deactivated successfully. Feb 8 23:37:59.024000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:59.024000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:59.022815 systemd[1]: Finished disk-uuid.service. Feb 8 23:37:59.027336 systemd[1]: Starting verity-setup.service... Feb 8 23:37:59.065470 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Feb 8 23:37:59.257603 systemd[1]: Found device dev-mapper-usr.device. Feb 8 23:37:59.260898 systemd[1]: Mounting sysusr-usr.mount... 
Feb 8 23:37:59.268000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:59.266940 systemd[1]: Finished verity-setup.service. Feb 8 23:37:59.335487 kernel: EXT4-fs (dm-0): mounted filesystem without journal. Opts: norecovery. Quota mode: none. Feb 8 23:37:59.335106 systemd[1]: Mounted sysusr-usr.mount. Feb 8 23:37:59.339141 systemd[1]: afterburn-network-kargs.service was skipped because no trigger condition checks were met. Feb 8 23:37:59.343043 systemd[1]: Starting ignition-setup.service... Feb 8 23:37:59.347247 systemd[1]: Starting parse-ip-for-networkd.service... Feb 8 23:37:59.356494 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:37:59.356533 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:37:59.356553 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:37:59.412386 systemd[1]: Finished parse-ip-for-networkd.service. Feb 8 23:37:59.417000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:59.418000 audit: BPF prog-id=9 op=LOAD Feb 8 23:37:59.419387 systemd[1]: Starting systemd-networkd.service... Feb 8 23:37:59.443962 systemd-networkd[832]: lo: Link UP Feb 8 23:37:59.443972 systemd-networkd[832]: lo: Gained carrier Feb 8 23:37:59.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:59.444883 systemd-networkd[832]: Enumeration completed Feb 8 23:37:59.444956 systemd[1]: Started systemd-networkd.service. Feb 8 23:37:59.447643 systemd-networkd[832]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Feb 8 23:37:59.460000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:59.448128 systemd[1]: Reached target network.target. Feb 8 23:37:59.469940 iscsid[838]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:37:59.469940 iscsid[838]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Feb 8 23:37:59.469940 iscsid[838]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Feb 8 23:37:59.469940 iscsid[838]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Feb 8 23:37:59.469940 iscsid[838]: If using hardware iscsi like qla4xxx this message can be ignored. Feb 8 23:37:59.469940 iscsid[838]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Feb 8 23:37:59.469940 iscsid[838]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Feb 8 23:37:59.471000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:59.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:59.453469 systemd[1]: Starting iscsiuio.service... Feb 8 23:37:59.460700 systemd[1]: Started iscsiuio.service. Feb 8 23:37:59.461852 systemd[1]: Starting iscsid.service... Feb 8 23:37:59.467220 systemd[1]: Started iscsid.service. 
Feb 8 23:37:59.473157 systemd[1]: Starting dracut-initqueue.service... Feb 8 23:37:59.475763 systemd[1]: mnt-oem.mount: Deactivated successfully. Feb 8 23:37:59.488538 systemd[1]: Finished dracut-initqueue.service. Feb 8 23:37:59.495238 systemd[1]: Reached target remote-fs-pre.target. Feb 8 23:37:59.499126 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:37:59.503756 systemd[1]: Reached target remote-fs.target. Feb 8 23:37:59.509312 systemd[1]: Starting dracut-pre-mount.service... Feb 8 23:37:59.532468 kernel: mlx5_core 49ac:00:02.0 enP18860s1: Link up Feb 8 23:37:59.535949 systemd[1]: Finished dracut-pre-mount.service. Feb 8 23:37:59.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:59.615484 kernel: hv_netvsc 0022489b-2d15-0022-489b-2d150022489b eth0: Data path switched to VF: enP18860s1 Feb 8 23:37:59.620875 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Feb 8 23:37:59.620413 systemd-networkd[832]: enP18860s1: Link UP Feb 8 23:37:59.620546 systemd-networkd[832]: eth0: Link UP Feb 8 23:37:59.620740 systemd-networkd[832]: eth0: Gained carrier Feb 8 23:37:59.627640 systemd-networkd[832]: enP18860s1: Gained carrier Feb 8 23:37:59.655542 systemd-networkd[832]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:37:59.662773 systemd[1]: Finished ignition-setup.service. Feb 8 23:37:59.664000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:37:59.665609 systemd[1]: Starting ignition-fetch-offline.service... 
Feb 8 23:38:01.640705 systemd-networkd[832]: eth0: Gained IPv6LL Feb 8 23:38:02.866702 ignition[856]: Ignition 2.14.0 Feb 8 23:38:02.866718 ignition[856]: Stage: fetch-offline Feb 8 23:38:02.866810 ignition[856]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:38:02.866862 ignition[856]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:38:02.957024 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:38:02.957869 ignition[856]: parsed url from cmdline: "" Feb 8 23:38:02.957925 ignition[856]: no config URL provided Feb 8 23:38:02.958038 ignition[856]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:38:02.973000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:02.968506 systemd[1]: Finished ignition-fetch-offline.service. Feb 8 23:38:02.995227 kernel: kauditd_printk_skb: 18 callbacks suppressed Feb 8 23:38:02.995266 kernel: audit: type=1130 audit(1707435482.973:29): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:02.958291 ignition[856]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:38:02.975278 systemd[1]: Starting ignition-fetch.service... 
Feb 8 23:38:02.958410 ignition[856]: failed to fetch config: resource requires networking Feb 8 23:38:02.960674 ignition[856]: Ignition finished successfully Feb 8 23:38:02.983195 ignition[862]: Ignition 2.14.0 Feb 8 23:38:02.983201 ignition[862]: Stage: fetch Feb 8 23:38:02.983295 ignition[862]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:38:02.983319 ignition[862]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:38:02.986824 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:38:02.988638 ignition[862]: parsed url from cmdline: "" Feb 8 23:38:02.988647 ignition[862]: no config URL provided Feb 8 23:38:02.988665 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" Feb 8 23:38:02.988679 ignition[862]: no config at "/usr/lib/ignition/user.ign" Feb 8 23:38:02.988749 ignition[862]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Feb 8 23:38:03.098973 ignition[862]: GET result: OK Feb 8 23:38:03.098998 ignition[862]: failed to retrieve userdata from IMDS, falling back to custom data: not a config (empty) Feb 8 23:38:03.255546 ignition[862]: opening config device: "/dev/sr0" Feb 8 23:38:03.256015 ignition[862]: getting drive status for "/dev/sr0" Feb 8 23:38:03.256144 ignition[862]: drive status: OK Feb 8 23:38:03.256594 ignition[862]: mounting config device Feb 8 23:38:03.256617 ignition[862]: op(1): [started] mounting "/dev/sr0" at "/tmp/ignition-azure159788813" Feb 8 23:38:03.284148 ignition[862]: op(1): [finished] mounting "/dev/sr0" at "/tmp/ignition-azure159788813" Feb 8 23:38:03.287178 kernel: UDF-fs: INFO Mounting volume 'UDF Volume', timestamp 2024/02/09 00:00 (1000) Feb 8 23:38:03.285127 ignition[862]: checking for config drive Feb 8 23:38:03.286211 systemd[1]: tmp-ignition\x2dazure159788813.mount: 
Deactivated successfully. Feb 8 23:38:03.285496 ignition[862]: reading config Feb 8 23:38:03.285863 ignition[862]: op(2): [started] unmounting "/dev/sr0" at "/tmp/ignition-azure159788813" Feb 8 23:38:03.287020 ignition[862]: op(2): [finished] unmounting "/dev/sr0" at "/tmp/ignition-azure159788813" Feb 8 23:38:03.287035 ignition[862]: config has been read from custom data Feb 8 23:38:03.287060 ignition[862]: parsing config with SHA512: 242301eb507b917dcdba0634e101dd5b4c608bdf2f9ea5bed7f4a7971bbd1a41b10865a6aae945c5ce13db39f14de5fd70446eda65910944f2c3ce11ccdb8869 Feb 8 23:38:03.305012 unknown[862]: fetched base config from "system" Feb 8 23:38:03.307246 unknown[862]: fetched base config from "system" Feb 8 23:38:03.307262 unknown[862]: fetched user config from "azure" Feb 8 23:38:03.326837 ignition[862]: fetch: fetch complete Feb 8 23:38:03.326848 ignition[862]: fetch: fetch passed Feb 8 23:38:03.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:03.328247 systemd[1]: Finished ignition-fetch.service. Feb 8 23:38:03.347521 kernel: audit: type=1130 audit(1707435483.330:30): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:03.326897 ignition[862]: Ignition finished successfully Feb 8 23:38:03.331958 systemd[1]: Starting ignition-kargs.service... 
Feb 8 23:38:03.355710 ignition[871]: Ignition 2.14.0 Feb 8 23:38:03.355720 ignition[871]: Stage: kargs Feb 8 23:38:03.355853 ignition[871]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:38:03.355886 ignition[871]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:38:03.365411 ignition[871]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:38:03.368917 ignition[871]: kargs: kargs passed Feb 8 23:38:03.368973 ignition[871]: Ignition finished successfully Feb 8 23:38:03.372597 systemd[1]: Finished ignition-kargs.service. Feb 8 23:38:03.375246 systemd[1]: Starting ignition-disks.service... Feb 8 23:38:03.374000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:03.391021 ignition[877]: Ignition 2.14.0 Feb 8 23:38:03.393560 kernel: audit: type=1130 audit(1707435483.374:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:03.391711 ignition[877]: Stage: disks Feb 8 23:38:03.394538 ignition[877]: reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:38:03.394557 ignition[877]: parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:38:03.397022 ignition[877]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:38:03.399502 ignition[877]: disks: disks passed Feb 8 23:38:03.400123 systemd[1]: Finished ignition-disks.service. 
Feb 8 23:38:03.399539 ignition[877]: Ignition finished successfully Feb 8 23:38:03.405000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:03.414953 systemd[1]: Reached target initrd-root-device.target. Feb 8 23:38:03.420746 kernel: audit: type=1130 audit(1707435483.405:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:03.420741 systemd[1]: Reached target local-fs-pre.target. Feb 8 23:38:03.422641 systemd[1]: Reached target local-fs.target. Feb 8 23:38:03.426450 systemd[1]: Reached target sysinit.target. Feb 8 23:38:03.428220 systemd[1]: Reached target basic.target. Feb 8 23:38:03.432651 systemd[1]: Starting systemd-fsck-root.service... Feb 8 23:38:03.493071 systemd-fsck[885]: ROOT: clean, 602/7326000 files, 481070/7359488 blocks Feb 8 23:38:03.498420 systemd[1]: Finished systemd-fsck-root.service. Feb 8 23:38:03.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:03.503419 systemd[1]: Mounting sysroot.mount... Feb 8 23:38:03.516273 kernel: audit: type=1130 audit(1707435483.501:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:03.526528 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Opts: (null). Quota mode: none. Feb 8 23:38:03.526886 systemd[1]: Mounted sysroot.mount. Feb 8 23:38:03.530121 systemd[1]: Reached target initrd-root-fs.target. Feb 8 23:38:03.565165 systemd[1]: Mounting sysroot-usr.mount... 
Feb 8 23:38:03.571036 systemd[1]: Starting flatcar-metadata-hostname.service... Feb 8 23:38:03.575506 systemd[1]: ignition-remount-sysroot.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Feb 8 23:38:03.575546 systemd[1]: Reached target ignition-diskful.target. Feb 8 23:38:03.583781 systemd[1]: Mounted sysroot-usr.mount. Feb 8 23:38:03.631865 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:38:03.637027 systemd[1]: Starting initrd-setup-root.service... Feb 8 23:38:03.649479 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (895) Feb 8 23:38:03.649515 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:38:03.657329 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:38:03.657357 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:38:03.660299 initrd-setup-root[900]: cut: /sysroot/etc/passwd: No such file or directory Feb 8 23:38:03.667405 systemd[1]: Mounted sysroot-usr-share-oem.mount. Feb 8 23:38:03.683559 initrd-setup-root[926]: cut: /sysroot/etc/group: No such file or directory Feb 8 23:38:03.687864 initrd-setup-root[934]: cut: /sysroot/etc/shadow: No such file or directory Feb 8 23:38:03.708805 initrd-setup-root[942]: cut: /sysroot/etc/gshadow: No such file or directory Feb 8 23:38:04.153400 systemd[1]: Finished initrd-setup-root.service. Feb 8 23:38:04.157000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.159067 systemd[1]: Starting ignition-mount.service... Feb 8 23:38:04.174067 kernel: audit: type=1130 audit(1707435484.157:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:38:04.174753 systemd[1]: Starting sysroot-boot.service... Feb 8 23:38:04.194440 systemd[1]: Finished sysroot-boot.service. Feb 8 23:38:04.209925 kernel: audit: type=1130 audit(1707435484.195:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.195000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.210009 ignition[962]: INFO : Ignition 2.14.0 Feb 8 23:38:04.210009 ignition[962]: INFO : Stage: mount Feb 8 23:38:04.210009 ignition[962]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign" Feb 8 23:38:04.210009 ignition[962]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63 Feb 8 23:38:04.210009 ignition[962]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Feb 8 23:38:04.224292 ignition[962]: INFO : mount: mount passed Feb 8 23:38:04.226094 ignition[962]: INFO : Ignition finished successfully Feb 8 23:38:04.228788 systemd[1]: Finished ignition-mount.service. Feb 8 23:38:04.231000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.244474 kernel: audit: type=1130 audit(1707435484.231:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:04.286305 systemd[1]: sysusr-usr-share-oem.mount: Deactivated successfully. Feb 8 23:38:04.286416 systemd[1]: sysroot-usr-share-oem.mount: Deactivated successfully. 
Feb 8 23:38:04.905007 coreos-metadata[894]: Feb 08 23:38:04.904 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Feb 8 23:38:04.930431 coreos-metadata[894]: Feb 08 23:38:04.930 INFO Fetch successful Feb 8 23:38:04.964990 coreos-metadata[894]: Feb 08 23:38:04.964 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Feb 8 23:38:04.982843 coreos-metadata[894]: Feb 08 23:38:04.982 INFO Fetch successful Feb 8 23:38:05.001352 coreos-metadata[894]: Feb 08 23:38:05.001 INFO wrote hostname ci-3510.3.2-a-3441531bae to /sysroot/etc/hostname Feb 8 23:38:05.007000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:05.003275 systemd[1]: Finished flatcar-metadata-hostname.service. Feb 8 23:38:05.028589 kernel: audit: type=1130 audit(1707435485.007:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:05.009662 systemd[1]: Starting ignition-files.service... Feb 8 23:38:05.035403 systemd[1]: Mounting sysroot-usr-share-oem.mount... Feb 8 23:38:05.049790 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (973) Feb 8 23:38:05.049823 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Feb 8 23:38:05.049837 kernel: BTRFS info (device sda6): using free space tree Feb 8 23:38:05.053255 kernel: BTRFS info (device sda6): has skinny extents Feb 8 23:38:05.061276 systemd[1]: Mounted sysroot-usr-share-oem.mount. 
Feb 8 23:38:05.074035 ignition[992]: INFO : Ignition 2.14.0
Feb 8 23:38:05.074035 ignition[992]: INFO : Stage: files
Feb 8 23:38:05.077440 ignition[992]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:38:05.077440 ignition[992]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:38:05.090672 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:38:05.119905 ignition[992]: DEBUG : files: compiled without relabeling support, skipping
Feb 8 23:38:05.123386 ignition[992]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Feb 8 23:38:05.123386 ignition[992]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Feb 8 23:38:05.198118 ignition[992]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Feb 8 23:38:05.202051 ignition[992]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Feb 8 23:38:05.222012 unknown[992]: wrote ssh authorized keys file for user: core
Feb 8 23:38:05.226346 ignition[992]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Feb 8 23:38:05.258646 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 8 23:38:05.266683 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz: attempt #1
Feb 8 23:38:05.916837 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Feb 8 23:38:06.128911 ignition[992]: DEBUG : files: createFilesystemsFiles: createFiles: op(3): file matches expected sum of: 5d0324ca8a3c90c680b6e1fddb245a2255582fa15949ba1f3c6bb7323df9d3af754dae98d6e40ac9ccafb2999c932df2c4288d418949a4915d928eb23c090540
Feb 8 23:38:06.136547 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/cni-plugins-linux-amd64-v1.3.0.tgz"
Feb 8 23:38:06.136547 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 8 23:38:06.136547 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://github.com/kubernetes-sigs/cri-tools/releases/download/v1.27.0/crictl-v1.27.0-linux-amd64.tar.gz: attempt #1
Feb 8 23:38:06.633124 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Feb 8 23:38:06.733871 ignition[992]: DEBUG : files: createFilesystemsFiles: createFiles: op(4): file matches expected sum of: aa622325bf05520939f9e020d7a28ab48ac23e2fae6f47d5a4e52174c88c1ebc31b464853e4fd65bd8f5331f330a6ca96fd370d247d3eeaed042da4ee2d1219a
Feb 8 23:38:06.742079 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/crictl-v1.27.0-linux-amd64.tar.gz"
Feb 8 23:38:06.742079 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:38:06.742079 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubeadm: attempt #1
Feb 8 23:38:06.952578 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): GET result: OK
Feb 8 23:38:07.197065 ignition[992]: DEBUG : files: createFilesystemsFiles: createFiles: op(5): file matches expected sum of: f4daad200c8378dfdc6cb69af28eaca4215f2b4a2dbdf75f29f9210171cb5683bc873fc000319022e6b3ad61175475d77190734713ba9136644394e8a8faafa1
Feb 8 23:38:07.204028 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/opt/bin/kubeadm"
Feb 8 23:38:07.204028 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:38:07.204028 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET https://dl.k8s.io/release/v1.28.1/bin/linux/amd64/kubelet: attempt #1
Feb 8 23:38:07.336753 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): GET result: OK
Feb 8 23:38:07.778870 ignition[992]: DEBUG : files: createFilesystemsFiles: createFiles: op(6): file matches expected sum of: ce6ba764274162d38ac1c44e1fb1f0f835346f3afc5b508bb755b1b7d7170910f5812b0a1941b32e29d950e905bbd08ae761c87befad921db4d44969c8562e75
Feb 8 23:38:07.786951 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/opt/bin/kubelet"
Feb 8 23:38:07.786951 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/install.sh"
Feb 8 23:38:07.786951 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/install.sh"
Feb 8 23:38:07.786951 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:38:07.786951 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/docker/daemon.json"
Feb 8 23:38:07.786951 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:38:07.786951 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Feb 8 23:38:07.786951 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 8 23:38:07.786951 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): oem config not found in "/usr/share/oem", looking on oem partition
Feb 8 23:38:07.828183 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(b): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2746887111"
Feb 8 23:38:07.837444 kernel: BTRFS info: devid 1 device path /dev/sda6 changed to /dev/disk/by-label/OEM scanned by ignition (997)
Feb 8 23:38:07.837481 ignition[992]: CRITICAL : files: createFilesystemsFiles: createFiles: op(a): op(b): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2746887111": device or resource busy
Feb 8 23:38:07.837481 ignition[992]: ERROR : files: createFilesystemsFiles: createFiles: op(a): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem2746887111", trying btrfs: device or resource busy
Feb 8 23:38:07.837481 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2746887111"
Feb 8 23:38:07.852099 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(c): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem2746887111"
Feb 8 23:38:07.852099 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [started] unmounting "/mnt/oem2746887111"
Feb 8 23:38:07.852099 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): op(d): [finished] unmounting "/mnt/oem2746887111"
Feb 8 23:38:07.852099 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/etc/systemd/system/waagent.service"
Feb 8 23:38:07.852099 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(e): [started] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 8 23:38:07.852099 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(e): oem config not found in "/usr/share/oem", looking on oem partition
Feb 8 23:38:07.884020 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(f): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem355005651"
Feb 8 23:38:07.888157 ignition[992]: CRITICAL : files: createFilesystemsFiles: createFiles: op(e): op(f): [failed] mounting "/dev/disk/by-label/OEM" at "/mnt/oem355005651": device or resource busy
Feb 8 23:38:07.888157 ignition[992]: ERROR : files: createFilesystemsFiles: createFiles: op(e): failed to mount ext4 device "/dev/disk/by-label/OEM" at "/mnt/oem355005651", trying btrfs: device or resource busy
Feb 8 23:38:07.888157 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [started] mounting "/dev/disk/by-label/OEM" at "/mnt/oem355005651"
Feb 8 23:38:07.902389 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(10): [finished] mounting "/dev/disk/by-label/OEM" at "/mnt/oem355005651"
Feb 8 23:38:07.902389 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [started] unmounting "/mnt/oem355005651"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(e): op(11): [finished] unmounting "/mnt/oem355005651"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: createFilesystemsFiles: createFiles: op(e): [finished] writing file "/sysroot/etc/systemd/system/nvidia.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(12): [started] processing unit "waagent.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(12): [finished] processing unit "waagent.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(13): [started] processing unit "nvidia.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(13): [finished] processing unit "nvidia.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(14): [started] processing unit "prepare-critools.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(14): op(15): [started] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(14): op(15): [finished] writing unit "prepare-critools.service" at "/sysroot/etc/systemd/system/prepare-critools.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(14): [finished] processing unit "prepare-critools.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(16): [started] processing unit "prepare-cni-plugins.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(16): op(17): [started] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(16): op(17): [finished] writing unit "prepare-cni-plugins.service" at "/sysroot/etc/systemd/system/prepare-cni-plugins.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(16): [finished] processing unit "prepare-cni-plugins.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(18): [started] setting preset to enabled for "prepare-critools.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(18): [finished] setting preset to enabled for "prepare-critools.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(19): [started] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(19): [finished] setting preset to enabled for "prepare-cni-plugins.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(1a): [started] setting preset to enabled for "waagent.service"
Feb 8 23:38:07.910789 ignition[992]: INFO : files: op(1a): [finished] setting preset to enabled for "waagent.service"
Feb 8 23:38:07.978766 ignition[992]: INFO : files: op(1b): [started] setting preset to enabled for "nvidia.service"
Feb 8 23:38:07.978766 ignition[992]: INFO : files: op(1b): [finished] setting preset to enabled for "nvidia.service"
Feb 8 23:38:07.978766 ignition[992]: INFO : files: createResultFile: createFiles: op(1c): [started] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:38:07.978766 ignition[992]: INFO : files: createResultFile: createFiles: op(1c): [finished] writing file "/sysroot/etc/.ignition-result.json"
Feb 8 23:38:07.978766 ignition[992]: INFO : files: files passed
Feb 8 23:38:07.978766 ignition[992]: INFO : Ignition finished successfully
Feb 8 23:38:07.979063 systemd[1]: Finished ignition-files.service.
Feb 8 23:38:08.016465 kernel: audit: type=1130 audit(1707435487.998:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:07.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.010127 systemd[1]: Starting initrd-setup-root-after-ignition.service...
Feb 8 23:38:08.012548 systemd[1]: torcx-profile-populate.service was skipped because of an unmet condition check (ConditionPathExists=/sysroot/etc/torcx/next-profile).
Feb 8 23:38:08.023000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.013484 systemd[1]: Starting ignition-quench.service...
Feb 8 23:38:08.023000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.020188 systemd[1]: ignition-quench.service: Deactivated successfully.
Feb 8 23:38:08.048350 kernel: audit: type=1130 audit(1707435488.023:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.048370 kernel: audit: type=1131 audit(1707435488.023:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.020274 systemd[1]: Finished ignition-quench.service.
Feb 8 23:38:08.051597 initrd-setup-root-after-ignition[1017]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Feb 8 23:38:08.052112 systemd[1]: Finished initrd-setup-root-after-ignition.service.
Feb 8 23:38:08.057000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.057614 systemd[1]: Reached target ignition-complete.target.
Feb 8 23:38:08.075184 kernel: audit: type=1130 audit(1707435488.057:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.072274 systemd[1]: Starting initrd-parse-etc.service...
Feb 8 23:38:08.088478 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Feb 8 23:38:08.090795 systemd[1]: Finished initrd-parse-etc.service.
Feb 8 23:38:08.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.094607 systemd[1]: Reached target initrd-fs.target.
Feb 8 23:38:08.118507 kernel: audit: type=1130 audit(1707435488.094:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.118537 kernel: audit: type=1131 audit(1707435488.094:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.094000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.115048 systemd[1]: Reached target initrd.target.
Feb 8 23:38:08.118426 systemd[1]: dracut-mount.service was skipped because no trigger condition checks were met.
Feb 8 23:38:08.120173 systemd[1]: Starting dracut-pre-pivot.service...
Feb 8 23:38:08.132928 systemd[1]: Finished dracut-pre-pivot.service.
Feb 8 23:38:08.148814 kernel: audit: type=1130 audit(1707435488.136:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.136000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.148533 systemd[1]: Starting initrd-cleanup.service...
Feb 8 23:38:08.160997 systemd[1]: Stopped target nss-lookup.target.
Feb 8 23:38:08.164757 systemd[1]: Stopped target remote-cryptsetup.target.
Feb 8 23:38:08.168799 systemd[1]: Stopped target timers.target.
Feb 8 23:38:08.172174 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Feb 8 23:38:08.174372 systemd[1]: Stopped dracut-pre-pivot.service.
Feb 8 23:38:08.177000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.178158 systemd[1]: Stopped target initrd.target.
Feb 8 23:38:08.191844 kernel: audit: type=1131 audit(1707435488.177:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.192027 systemd[1]: Stopped target basic.target.
Feb 8 23:38:08.195587 systemd[1]: Stopped target ignition-complete.target.
Feb 8 23:38:08.199724 systemd[1]: Stopped target ignition-diskful.target.
Feb 8 23:38:08.203649 systemd[1]: Stopped target initrd-root-device.target.
Feb 8 23:38:08.207643 systemd[1]: Stopped target remote-fs.target.
Feb 8 23:38:08.211533 systemd[1]: Stopped target remote-fs-pre.target.
Feb 8 23:38:08.215404 systemd[1]: Stopped target sysinit.target.
Feb 8 23:38:08.218886 systemd[1]: Stopped target local-fs.target.
Feb 8 23:38:08.222552 systemd[1]: Stopped target local-fs-pre.target.
Feb 8 23:38:08.226327 systemd[1]: Stopped target swap.target.
Feb 8 23:38:08.229564 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Feb 8 23:38:08.231772 systemd[1]: Stopped dracut-pre-mount.service.
Feb 8 23:38:08.235000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.235482 systemd[1]: Stopped target cryptsetup.target.
Feb 8 23:38:08.249892 kernel: audit: type=1131 audit(1707435488.235:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.249991 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Feb 8 23:38:08.252158 systemd[1]: Stopped dracut-initqueue.service.
Feb 8 23:38:08.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.255705 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Feb 8 23:38:08.271560 kernel: audit: type=1131 audit(1707435488.255:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.255844 systemd[1]: Stopped initrd-setup-root-after-ignition.service.
Feb 8 23:38:08.273000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.273779 systemd[1]: ignition-files.service: Deactivated successfully.
Feb 8 23:38:08.274535 systemd[1]: Stopped ignition-files.service.
Feb 8 23:38:08.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.278120 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully.
Feb 8 23:38:08.278221 systemd[1]: Stopped flatcar-metadata-hostname.service.
Feb 8 23:38:08.283301 systemd[1]: Stopping ignition-mount.service...
Feb 8 23:38:08.282000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.292001 systemd[1]: Stopping sysroot-boot.service...
Feb 8 23:38:08.296261 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Feb 8 23:38:08.298914 systemd[1]: Stopped systemd-udev-trigger.service.
Feb 8 23:38:08.303472 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Feb 8 23:38:08.305796 ignition[1030]: INFO : Ignition 2.14.0
Feb 8 23:38:08.305796 ignition[1030]: INFO : Stage: umount
Feb 8 23:38:08.305796 ignition[1030]: INFO : reading system config file "/usr/lib/ignition/base.d/base.ign"
Feb 8 23:38:08.305796 ignition[1030]: DEBUG : parsing config with SHA512: 4824fd4a4e57848da530dc2b56e2d3e9f5f19634d1c84ef29f8fc49255520728d0377a861a375d7c8cb5301ed861ff4ede4b440b074b1d6a86e23be9cefc2f63
Feb 8 23:38:08.317704 ignition[1030]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure"
Feb 8 23:38:08.317704 ignition[1030]: INFO : umount: umount passed
Feb 8 23:38:08.317704 ignition[1030]: INFO : Ignition finished successfully
Feb 8 23:38:08.302000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.322000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.328000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.312620 systemd[1]: Stopped dracut-pre-trigger.service.
Feb 8 23:38:08.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.332000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.324241 systemd[1]: ignition-mount.service: Deactivated successfully.
Feb 8 23:38:08.324354 systemd[1]: Stopped ignition-mount.service.
Feb 8 23:38:08.336000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.329636 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Feb 8 23:38:08.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.329723 systemd[1]: Finished initrd-cleanup.service.
Feb 8 23:38:08.333182 systemd[1]: ignition-disks.service: Deactivated successfully.
Feb 8 23:38:08.333226 systemd[1]: Stopped ignition-disks.service.
Feb 8 23:38:08.350000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.336552 systemd[1]: ignition-kargs.service: Deactivated successfully.
Feb 8 23:38:08.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.336598 systemd[1]: Stopped ignition-kargs.service.
Feb 8 23:38:08.342295 systemd[1]: ignition-fetch.service: Deactivated successfully.
Feb 8 23:38:08.342341 systemd[1]: Stopped ignition-fetch.service.
Feb 8 23:38:08.350086 systemd[1]: Stopped target network.target.
Feb 8 23:38:08.351761 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Feb 8 23:38:08.378000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.351804 systemd[1]: Stopped ignition-fetch-offline.service.
Feb 8 23:38:08.353737 systemd[1]: Stopped target paths.target.
Feb 8 23:38:08.358061 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Feb 8 23:38:08.362514 systemd[1]: Stopped systemd-ask-password-console.path.
Feb 8 23:38:08.391000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.367512 systemd[1]: Stopped target slices.target.
Feb 8 23:38:08.370777 systemd[1]: Stopped target sockets.target.
Feb 8 23:38:08.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.372593 systemd[1]: iscsid.socket: Deactivated successfully.
Feb 8 23:38:08.372627 systemd[1]: Closed iscsid.socket.
Feb 8 23:38:08.404000 audit: BPF prog-id=6 op=UNLOAD
Feb 8 23:38:08.376520 systemd[1]: iscsiuio.socket: Deactivated successfully.
Feb 8 23:38:08.376552 systemd[1]: Closed iscsiuio.socket.
Feb 8 23:38:08.378077 systemd[1]: ignition-setup.service: Deactivated successfully.
Feb 8 23:38:08.412000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.378115 systemd[1]: Stopped ignition-setup.service.
Feb 8 23:38:08.418000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.380107 systemd[1]: Stopping systemd-networkd.service...
Feb 8 23:38:08.383643 systemd[1]: Stopping systemd-resolved.service...
Feb 8 23:38:08.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.387501 systemd-networkd[832]: eth0: DHCPv6 lease lost
Feb 8 23:38:08.422000 audit: BPF prog-id=9 op=UNLOAD
Feb 8 23:38:08.390428 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Feb 8 23:38:08.390894 systemd[1]: systemd-networkd.service: Deactivated successfully.
Feb 8 23:38:08.390995 systemd[1]: Stopped systemd-networkd.service.
Feb 8 23:38:08.395656 systemd[1]: systemd-resolved.service: Deactivated successfully.
Feb 8 23:38:08.395752 systemd[1]: Stopped systemd-resolved.service.
Feb 8 23:38:08.401373 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Feb 8 23:38:08.401415 systemd[1]: Closed systemd-networkd.socket.
Feb 8 23:38:08.405247 systemd[1]: Stopping network-cleanup.service...
Feb 8 23:38:08.408870 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Feb 8 23:38:08.408931 systemd[1]: Stopped parse-ip-for-networkd.service.
Feb 8 23:38:08.414770 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:38:08.414837 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:38:08.418608 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Feb 8 23:38:08.418657 systemd[1]: Stopped systemd-modules-load.service.
Feb 8 23:38:08.425702 systemd[1]: Stopping systemd-udevd.service...
Feb 8 23:38:08.434571 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Feb 8 23:38:08.459000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.444426 systemd[1]: systemd-udevd.service: Deactivated successfully.
Feb 8 23:38:08.456011 systemd[1]: Stopped systemd-udevd.service.
Feb 8 23:38:08.465230 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Feb 8 23:38:08.475000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.465300 systemd[1]: Closed systemd-udevd-control.socket.
Feb 8 23:38:08.477000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.480000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.469401 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Feb 8 23:38:08.469437 systemd[1]: Closed systemd-udevd-kernel.socket.
Feb 8 23:38:08.473425 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Feb 8 23:38:08.473473 systemd[1]: Stopped dracut-pre-udev.service.
Feb 8 23:38:08.475278 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Feb 8 23:38:08.492000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.475324 systemd[1]: Stopped dracut-cmdline.service.
Feb 8 23:38:08.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.477096 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Feb 8 23:38:08.501000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.477136 systemd[1]: Stopped dracut-cmdline-ask.service.
Feb 8 23:38:08.507000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.507000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.481377 systemd[1]: Starting initrd-udevadm-cleanup-db.service...
Feb 8 23:38:08.490901 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Feb 8 23:38:08.490967 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service.
Feb 8 23:38:08.495249 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Feb 8 23:38:08.495297 systemd[1]: Stopped kmod-static-nodes.service.
Feb 8 23:38:08.499047 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Feb 8 23:38:08.499094 systemd[1]: Stopped systemd-vconsole-setup.service.
Feb 8 23:38:08.503648 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Feb 8 23:38:08.503735 systemd[1]: Finished initrd-udevadm-cleanup-db.service.
Feb 8 23:38:08.538523 kernel: hv_netvsc 0022489b-2d15-0022-489b-2d150022489b eth0: Data path switched from VF: enP18860s1
Feb 8 23:38:08.559247 systemd[1]: network-cleanup.service: Deactivated successfully.
Feb 8 23:38:08.564000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:08.559376 systemd[1]: Stopped network-cleanup.service.
Feb 8 23:38:08.843680 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Feb 8 23:38:09.263160 systemd[1]: sysroot-boot.service: Deactivated successfully.
Feb 8 23:38:09.265000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:09.272000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:09.263299 systemd[1]: Stopped sysroot-boot.service. Feb 8 23:38:09.265898 systemd[1]: Reached target initrd-switch-root.target. Feb 8 23:38:09.270863 systemd[1]: initrd-setup-root.service: Deactivated successfully. Feb 8 23:38:09.270921 systemd[1]: Stopped initrd-setup-root.service. Feb 8 23:38:09.273896 systemd[1]: Starting initrd-switch-root.service... Feb 8 23:38:09.288351 systemd[1]: Switching root. Feb 8 23:38:09.322006 systemd-journald[183]: Received SIGTERM from PID 1 (systemd). Feb 8 23:38:09.322074 iscsid[838]: iscsid shutting down. Feb 8 23:38:09.323937 systemd-journald[183]: Journal stopped Feb 8 23:38:23.294428 kernel: SELinux: Class mctp_socket not defined in policy. Feb 8 23:38:23.294464 kernel: SELinux: Class anon_inode not defined in policy. 
Feb 8 23:38:23.294476 kernel: SELinux: the above unknown classes and permissions will be allowed Feb 8 23:38:23.294487 kernel: SELinux: policy capability network_peer_controls=1 Feb 8 23:38:23.294495 kernel: SELinux: policy capability open_perms=1 Feb 8 23:38:23.294506 kernel: SELinux: policy capability extended_socket_class=1 Feb 8 23:38:23.294515 kernel: SELinux: policy capability always_check_network=0 Feb 8 23:38:23.294527 kernel: SELinux: policy capability cgroup_seclabel=1 Feb 8 23:38:23.294536 kernel: SELinux: policy capability nnp_nosuid_transition=1 Feb 8 23:38:23.294546 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Feb 8 23:38:23.294556 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Feb 8 23:38:23.294568 systemd[1]: Successfully loaded SELinux policy in 300.535ms. Feb 8 23:38:23.294580 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 25.743ms. Feb 8 23:38:23.294592 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL -ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Feb 8 23:38:23.294607 systemd[1]: Detected virtualization microsoft. Feb 8 23:38:23.294619 systemd[1]: Detected architecture x86-64. Feb 8 23:38:23.294629 systemd[1]: Detected first boot. Feb 8 23:38:23.294641 systemd[1]: Hostname set to . Feb 8 23:38:23.294653 systemd[1]: Initializing machine ID from random generator. Feb 8 23:38:23.294666 kernel: SELinux: Context system_u:object_r:container_file_t:s0:c1022,c1023 is not valid (left unmapped). 
Feb 8 23:38:23.294676 kernel: kauditd_printk_skb: 40 callbacks suppressed Feb 8 23:38:23.294687 kernel: audit: type=1400 audit(1707435494.085:88): avc: denied { associate } for pid=1064 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:38:23.294699 kernel: audit: type=1300 audit(1707435494.085:88): arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:23.294711 kernel: audit: type=1327 audit(1707435494.085:88): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:38:23.294725 kernel: audit: type=1400 audit(1707435494.092:89): avc: denied { associate } for pid=1064 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:38:23.294736 kernel: audit: type=1300 audit(1707435494.092:89): arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:23.294750 kernel: audit: type=1307 audit(1707435494.092:89): cwd="/" Feb 8 23:38:23.294762 kernel: audit: type=1302 audit(1707435494.092:89): 
item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:23.294776 kernel: audit: type=1302 audit(1707435494.092:89): item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:23.294789 kernel: audit: type=1327 audit(1707435494.092:89): proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:38:23.294804 systemd[1]: Populated /etc with preset unit settings. Feb 8 23:38:23.294816 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon. Feb 8 23:38:23.294828 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon. Feb 8 23:38:23.294840 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
Feb 8 23:38:23.294853 kernel: audit: type=1334 audit(1707435502.821:90): prog-id=12 op=LOAD Feb 8 23:38:23.294863 kernel: audit: type=1334 audit(1707435502.821:91): prog-id=3 op=UNLOAD Feb 8 23:38:23.294874 kernel: audit: type=1334 audit(1707435502.827:92): prog-id=13 op=LOAD Feb 8 23:38:23.294883 kernel: audit: type=1334 audit(1707435502.832:93): prog-id=14 op=LOAD Feb 8 23:38:23.294896 kernel: audit: type=1334 audit(1707435502.832:94): prog-id=4 op=UNLOAD Feb 8 23:38:23.294907 kernel: audit: type=1334 audit(1707435502.832:95): prog-id=5 op=UNLOAD Feb 8 23:38:23.294922 kernel: audit: type=1334 audit(1707435502.837:96): prog-id=15 op=LOAD Feb 8 23:38:23.294934 kernel: audit: type=1334 audit(1707435502.837:97): prog-id=12 op=UNLOAD Feb 8 23:38:23.294946 kernel: audit: type=1334 audit(1707435502.851:98): prog-id=16 op=LOAD Feb 8 23:38:23.294956 kernel: audit: type=1334 audit(1707435502.860:99): prog-id=17 op=LOAD Feb 8 23:38:23.294968 systemd[1]: iscsiuio.service: Deactivated successfully. Feb 8 23:38:23.294980 systemd[1]: Stopped iscsiuio.service. Feb 8 23:38:23.294995 systemd[1]: iscsid.service: Deactivated successfully. Feb 8 23:38:23.295008 systemd[1]: Stopped iscsid.service. Feb 8 23:38:23.295020 systemd[1]: initrd-switch-root.service: Deactivated successfully. Feb 8 23:38:23.295032 systemd[1]: Stopped initrd-switch-root.service. Feb 8 23:38:23.295047 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Feb 8 23:38:23.295063 systemd[1]: Created slice system-addon\x2dconfig.slice. Feb 8 23:38:23.295078 systemd[1]: Created slice system-addon\x2drun.slice. Feb 8 23:38:23.295093 systemd[1]: Created slice system-getty.slice. Feb 8 23:38:23.295108 systemd[1]: Created slice system-modprobe.slice. Feb 8 23:38:23.295128 systemd[1]: Created slice system-serial\x2dgetty.slice. Feb 8 23:38:23.295146 systemd[1]: Created slice system-system\x2dcloudinit.slice. Feb 8 23:38:23.295163 systemd[1]: Created slice system-systemd\x2dfsck.slice. 
Feb 8 23:38:23.295177 systemd[1]: Created slice user.slice. Feb 8 23:38:23.295194 systemd[1]: Started systemd-ask-password-console.path. Feb 8 23:38:23.295210 systemd[1]: Started systemd-ask-password-wall.path. Feb 8 23:38:23.295226 systemd[1]: Set up automount boot.automount. Feb 8 23:38:23.295241 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount. Feb 8 23:38:23.295260 systemd[1]: Stopped target initrd-switch-root.target. Feb 8 23:38:23.295275 systemd[1]: Stopped target initrd-fs.target. Feb 8 23:38:23.295289 systemd[1]: Stopped target initrd-root-fs.target. Feb 8 23:38:23.295305 systemd[1]: Reached target integritysetup.target. Feb 8 23:38:23.295336 systemd[1]: Reached target remote-cryptsetup.target. Feb 8 23:38:23.295350 systemd[1]: Reached target remote-fs.target. Feb 8 23:38:23.295362 systemd[1]: Reached target slices.target. Feb 8 23:38:23.295373 systemd[1]: Reached target swap.target. Feb 8 23:38:23.295388 systemd[1]: Reached target torcx.target. Feb 8 23:38:23.295402 systemd[1]: Reached target veritysetup.target. Feb 8 23:38:23.295414 systemd[1]: Listening on systemd-coredump.socket. Feb 8 23:38:23.295425 systemd[1]: Listening on systemd-initctl.socket. Feb 8 23:38:23.295438 systemd[1]: Listening on systemd-networkd.socket. Feb 8 23:38:23.295489 systemd[1]: Listening on systemd-udevd-control.socket. Feb 8 23:38:23.295502 systemd[1]: Listening on systemd-udevd-kernel.socket. Feb 8 23:38:23.295514 systemd[1]: Listening on systemd-userdbd.socket. Feb 8 23:38:23.295526 systemd[1]: Mounting dev-hugepages.mount... Feb 8 23:38:23.295538 systemd[1]: Mounting dev-mqueue.mount... Feb 8 23:38:23.295549 systemd[1]: Mounting media.mount... Feb 8 23:38:23.295561 systemd[1]: proc-xen.mount was skipped because of an unmet condition check (ConditionVirtualization=xen). Feb 8 23:38:23.295573 systemd[1]: Mounting sys-kernel-debug.mount... Feb 8 23:38:23.295584 systemd[1]: Mounting sys-kernel-tracing.mount... 
Feb 8 23:38:23.295598 systemd[1]: Mounting tmp.mount... Feb 8 23:38:23.295609 systemd[1]: Starting flatcar-tmpfiles.service... Feb 8 23:38:23.295621 systemd[1]: ignition-delete-config.service was skipped because no trigger condition checks were met. Feb 8 23:38:23.295633 systemd[1]: Starting kmod-static-nodes.service... Feb 8 23:38:23.295645 systemd[1]: Starting modprobe@configfs.service... Feb 8 23:38:23.295656 systemd[1]: Starting modprobe@dm_mod.service... Feb 8 23:38:23.295668 systemd[1]: Starting modprobe@drm.service... Feb 8 23:38:23.295680 systemd[1]: Starting modprobe@efi_pstore.service... Feb 8 23:38:23.295692 systemd[1]: Starting modprobe@fuse.service... Feb 8 23:38:23.295705 systemd[1]: Starting modprobe@loop.service... Feb 8 23:38:23.295718 systemd[1]: setup-nsswitch.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Feb 8 23:38:23.295731 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Feb 8 23:38:23.295741 systemd[1]: Stopped systemd-fsck-root.service. Feb 8 23:38:23.295753 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Feb 8 23:38:23.295766 systemd[1]: Stopped systemd-fsck-usr.service. Feb 8 23:38:23.295777 systemd[1]: Stopped systemd-journald.service. Feb 8 23:38:23.295789 systemd[1]: Starting systemd-journald.service... Feb 8 23:38:23.295803 kernel: loop: module loaded Feb 8 23:38:23.295814 systemd[1]: Starting systemd-modules-load.service... Feb 8 23:38:23.295826 systemd[1]: Starting systemd-network-generator.service... Feb 8 23:38:23.295837 systemd[1]: Starting systemd-remount-fs.service... Feb 8 23:38:23.295849 systemd[1]: Starting systemd-udev-trigger.service... Feb 8 23:38:23.295861 systemd[1]: verity-setup.service: Deactivated successfully. Feb 8 23:38:23.295872 systemd[1]: Stopped verity-setup.service. Feb 8 23:38:23.295885 systemd[1]: xenserver-pv-version.service was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Feb 8 23:38:23.295895 systemd[1]: Mounted dev-hugepages.mount. Feb 8 23:38:23.295906 systemd[1]: Mounted dev-mqueue.mount. Feb 8 23:38:23.295916 systemd[1]: Mounted media.mount. Feb 8 23:38:23.295926 kernel: fuse: init (API version 7.34) Feb 8 23:38:23.295935 systemd[1]: Mounted sys-kernel-debug.mount. Feb 8 23:38:23.295945 systemd[1]: Mounted sys-kernel-tracing.mount. Feb 8 23:38:23.295954 systemd[1]: Mounted tmp.mount. Feb 8 23:38:23.295964 systemd[1]: Finished flatcar-tmpfiles.service. Feb 8 23:38:23.295986 systemd-journald[1173]: Journal started Feb 8 23:38:23.296038 systemd-journald[1173]: Runtime Journal (/run/log/journal/8ebcda01711b4779963f5a69259097bb) is 8.0M, max 159.0M, 151.0M free. Feb 8 23:38:11.752000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Feb 8 23:38:12.505000 audit[1]: AVC avc: denied { integrity } for pid=1 comm="systemd" lockdown_reason="/dev/mem,kmem,port" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1 Feb 8 23:38:12.527000 audit[1]: AVC avc: denied { bpf } for pid=1 comm="systemd" capability=39 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:38:12.527000 audit[1]: AVC avc: denied { perfmon } for pid=1 comm="systemd" capability=38 scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=capability2 permissive=1 Feb 8 23:38:12.527000 audit: BPF prog-id=10 op=LOAD Feb 8 23:38:12.527000 audit: BPF prog-id=10 op=UNLOAD Feb 8 23:38:12.527000 audit: BPF prog-id=11 op=LOAD Feb 8 23:38:12.527000 audit: BPF prog-id=11 op=UNLOAD Feb 8 23:38:14.085000 audit[1064]: AVC avc: denied { associate } for pid=1064 comm="torcx-generator" name="docker" dev="tmpfs" ino=2 scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 srawcon="system_u:object_r:container_file_t:s0:c1022,c1023" Feb 8 23:38:14.085000 audit[1064]: 
SYSCALL arch=c000003e syscall=188 success=yes exit=0 a0=c00014d8a2 a1=c0000cedf8 a2=c0000d70c0 a3=32 items=0 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:14.085000 audit: PROCTITLE proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:38:14.092000 audit[1064]: AVC avc: denied { associate } for pid=1064 comm="torcx-generator" name="bin" scontext=system_u:object_r:unlabeled_t:s0 tcontext=system_u:object_r:tmpfs_t:s0 tclass=filesystem permissive=1 Feb 8 23:38:14.092000 audit[1064]: SYSCALL arch=c000003e syscall=258 success=yes exit=0 a0=ffffffffffffff9c a1=c00014d979 a2=1ed a3=0 items=2 ppid=1047 pid=1064 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="torcx-generator" exe="/usr/lib/systemd/system-generators/torcx-generator" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:14.092000 audit: CWD cwd="/" Feb 8 23:38:14.092000 audit: PATH item=0 name=(null) inode=2 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:14.092000 audit: PATH item=1 name=(null) inode=3 dev=00:1c mode=040755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:unlabeled_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 Feb 8 23:38:14.092000 audit: PROCTITLE 
proctitle=2F7573722F6C69622F73797374656D642F73797374656D2D67656E657261746F72732F746F7263782D67656E657261746F72002F72756E2F73797374656D642F67656E657261746F72002F72756E2F73797374656D642F67656E657261746F722E6561726C79002F72756E2F73797374656D642F67656E657261746F722E6C61 Feb 8 23:38:22.821000 audit: BPF prog-id=12 op=LOAD Feb 8 23:38:22.821000 audit: BPF prog-id=3 op=UNLOAD Feb 8 23:38:22.827000 audit: BPF prog-id=13 op=LOAD Feb 8 23:38:22.832000 audit: BPF prog-id=14 op=LOAD Feb 8 23:38:22.832000 audit: BPF prog-id=4 op=UNLOAD Feb 8 23:38:22.832000 audit: BPF prog-id=5 op=UNLOAD Feb 8 23:38:22.837000 audit: BPF prog-id=15 op=LOAD Feb 8 23:38:22.837000 audit: BPF prog-id=12 op=UNLOAD Feb 8 23:38:22.851000 audit: BPF prog-id=16 op=LOAD Feb 8 23:38:22.860000 audit: BPF prog-id=17 op=LOAD Feb 8 23:38:22.860000 audit: BPF prog-id=13 op=UNLOAD Feb 8 23:38:22.860000 audit: BPF prog-id=14 op=UNLOAD Feb 8 23:38:22.861000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:22.878000 audit: BPF prog-id=15 op=UNLOAD Feb 8 23:38:22.880000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:22.889000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:22.899000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:38:22.899000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.189000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.199000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.204000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.204000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.205000 audit: BPF prog-id=18 op=LOAD Feb 8 23:38:23.205000 audit: BPF prog-id=19 op=LOAD Feb 8 23:38:23.205000 audit: BPF prog-id=20 op=LOAD Feb 8 23:38:23.205000 audit: BPF prog-id=16 op=UNLOAD Feb 8 23:38:23.205000 audit: BPF prog-id=17 op=UNLOAD Feb 8 23:38:23.256000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Feb 8 23:38:23.290000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Feb 8 23:38:23.290000 audit[1173]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffe976cbf50 a2=4000 a3=7ffe976cbfec items=0 ppid=1 pid=1173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Feb 8 23:38:23.290000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Feb 8 23:38:14.039187 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]" Feb 8 23:38:22.819508 systemd[1]: Queued start job for default target multi-user.target. Feb 8 23:38:14.054305 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:38:22.861878 systemd[1]: systemd-journald.service: Deactivated successfully. 
Feb 8 23:38:14.054350 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:38:14.054390 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=info msg="no vendor profile selected by /etc/flatcar/docker-1.12" Feb 8 23:38:14.054405 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=debug msg="skipped missing lower profile" missing profile=oem Feb 8 23:38:14.054477 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=warning msg="no next profile: unable to read profile file: open /etc/torcx/next-profile: no such file or directory" Feb 8 23:38:14.054501 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=debug msg="apply configuration parsed" lower profiles (vendor/oem)="[vendor]" upper profile (user)= Feb 8 23:38:14.054737 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=debug msg="mounted tmpfs" target=/run/torcx/unpack Feb 8 23:38:14.054800 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=debug msg="profile found" name=docker-1.12-no path=/usr/share/torcx/profiles/docker-1.12-no.json Feb 8 23:38:14.054817 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=debug msg="profile found" name=vendor path=/usr/share/torcx/profiles/vendor.json Feb 8 23:38:14.069937 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=debug msg="new archive/reference added to cache" format=tgz name=docker path="/usr/share/torcx/store/docker:20.10.torcx.tgz" reference=20.10 Feb 8 23:38:14.069985 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=debug msg="new archive/reference added to cache" format=tgz 
name=docker path="/usr/share/torcx/store/docker:com.coreos.cl.torcx.tgz" reference=com.coreos.cl Feb 8 23:38:14.070009 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store/3510.3.2: no such file or directory" path=/usr/share/oem/torcx/store/3510.3.2 Feb 8 23:38:14.070026 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=info msg="store skipped" err="open /usr/share/oem/torcx/store: no such file or directory" path=/usr/share/oem/torcx/store Feb 8 23:38:14.070045 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=info msg="store skipped" err="open /var/lib/torcx/store/3510.3.2: no such file or directory" path=/var/lib/torcx/store/3510.3.2 Feb 8 23:38:14.070066 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:14Z" level=info msg="store skipped" err="open /var/lib/torcx/store: no such file or directory" path=/var/lib/torcx/store Feb 8 23:38:21.667901 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:21Z" level=debug msg="image unpacked" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:38:21.668181 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:21Z" level=debug msg="binaries propagated" assets="[/bin/containerd /bin/containerd-shim /bin/ctr /bin/docker /bin/docker-containerd /bin/docker-containerd-shim /bin/docker-init /bin/docker-proxy /bin/docker-runc /bin/dockerd /bin/runc /bin/tini]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:38:21.668324 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:21Z" level=debug msg="networkd units propagated" assets="[/lib/systemd/network/50-docker.network /lib/systemd/network/90-docker-veth.network]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 
23:38:21.668540 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:21Z" level=debug msg="systemd units propagated" assets="[/lib/systemd/system/containerd.service /lib/systemd/system/docker.service /lib/systemd/system/docker.socket /lib/systemd/system/sockets.target.wants /lib/systemd/system/multi-user.target.wants]" image=docker path=/run/torcx/unpack/docker reference=com.coreos.cl Feb 8 23:38:21.668598 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:21Z" level=debug msg="profile applied" sealed profile=/run/torcx/profile.json upper profile= Feb 8 23:38:21.668673 /usr/lib/systemd/system-generators/torcx-generator[1064]: time="2024-02-08T23:38:21Z" level=debug msg="system state sealed" content="[TORCX_LOWER_PROFILES=\"vendor\" TORCX_UPPER_PROFILE=\"\" TORCX_PROFILE_PATH=\"/run/torcx/profile.json\" TORCX_BINDIR=\"/run/torcx/bin\" TORCX_UNPACKDIR=\"/run/torcx/unpack\"]" path=/run/metadata/torcx Feb 8 23:38:23.300000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.304473 systemd[1]: Started systemd-journald.service. Feb 8 23:38:23.306000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.307086 systemd[1]: Finished kmod-static-nodes.service. Feb 8 23:38:23.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.309394 systemd[1]: modprobe@configfs.service: Deactivated successfully. Feb 8 23:38:23.309563 systemd[1]: Finished modprobe@configfs.service. 
Feb 8 23:38:23.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.311986 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Feb 8 23:38:23.312121 systemd[1]: Finished modprobe@dm_mod.service. Feb 8 23:38:23.313000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.313000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.314330 systemd[1]: modprobe@drm.service: Deactivated successfully. Feb 8 23:38:23.314609 systemd[1]: Finished modprobe@drm.service. Feb 8 23:38:23.316000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.316000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.316931 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Feb 8 23:38:23.317063 systemd[1]: Finished modprobe@efi_pstore.service. 
Feb 8 23:38:23.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.319354 systemd[1]: modprobe@fuse.service: Deactivated successfully. Feb 8 23:38:23.319537 systemd[1]: Finished modprobe@fuse.service. Feb 8 23:38:23.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.321835 systemd[1]: modprobe@loop.service: Deactivated successfully. Feb 8 23:38:23.321971 systemd[1]: Finished modprobe@loop.service. Feb 8 23:38:23.323000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.323000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.324321 systemd[1]: Finished systemd-network-generator.service. 
Feb 8 23:38:23.326000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.326892 systemd[1]: Finished systemd-remount-fs.service. Feb 8 23:38:23.328000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.329331 systemd[1]: Reached target network-pre.target. Feb 8 23:38:23.332852 systemd[1]: Mounting sys-fs-fuse-connections.mount... Feb 8 23:38:23.336013 systemd[1]: Mounting sys-kernel-config.mount... Feb 8 23:38:23.339185 systemd[1]: remount-root.service was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Feb 8 23:38:23.365636 systemd[1]: Starting systemd-hwdb-update.service... Feb 8 23:38:23.369715 systemd[1]: Starting systemd-journal-flush.service... Feb 8 23:38:23.372194 systemd[1]: systemd-pstore.service was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Feb 8 23:38:23.385000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Feb 8 23:38:23.373699 systemd[1]: Starting systemd-random-seed.service... Feb 8 23:38:23.376055 systemd[1]: systemd-repart.service was skipped because no trigger condition checks were met. Feb 8 23:38:23.377575 systemd[1]: Starting systemd-sysusers.service... Feb 8 23:38:23.382149 systemd[1]: Finished systemd-modules-load.service. Feb 8 23:38:23.387278 systemd[1]: Mounted sys-fs-fuse-connections.mount. Feb 8 23:38:23.389670 systemd[1]: Mounted sys-kernel-config.mount. Feb 8 23:38:23.393861 systemd[1]: Starting systemd-sysctl.service... 
Feb 8 23:38:23.409214 systemd[1]: Finished systemd-random-seed.service.
Feb 8 23:38:23.411000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:23.412020 systemd[1]: Reached target first-boot-complete.target.
Feb 8 23:38:23.423359 systemd-journald[1173]: Time spent on flushing to /var/log/journal/8ebcda01711b4779963f5a69259097bb is 23.488ms for 1187 entries.
Feb 8 23:38:23.423359 systemd-journald[1173]: System Journal (/var/log/journal/8ebcda01711b4779963f5a69259097bb) is 8.0M, max 2.6G, 2.6G free.
Feb 8 23:38:23.499075 systemd-journald[1173]: Received client request to flush runtime journal.
Feb 8 23:38:23.435000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:23.468000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:23.433709 systemd[1]: Finished systemd-udev-trigger.service.
Feb 8 23:38:23.499431 udevadm[1187]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in.
Feb 8 23:38:23.437086 systemd[1]: Starting systemd-udev-settle.service...
Feb 8 23:38:23.466624 systemd[1]: Finished systemd-sysctl.service.
Feb 8 23:38:23.500026 systemd[1]: Finished systemd-journal-flush.service.
Feb 8 23:38:23.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:23.853881 systemd[1]: Finished systemd-sysusers.service.
Feb 8 23:38:23.856000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:23.857618 systemd[1]: Starting systemd-tmpfiles-setup-dev.service...
Feb 8 23:38:24.231423 systemd[1]: Finished systemd-tmpfiles-setup-dev.service.
Feb 8 23:38:24.233000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:24.588427 systemd[1]: Finished systemd-hwdb-update.service.
Feb 8 23:38:24.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:24.591000 audit: BPF prog-id=21 op=LOAD
Feb 8 23:38:24.591000 audit: BPF prog-id=22 op=LOAD
Feb 8 23:38:24.591000 audit: BPF prog-id=7 op=UNLOAD
Feb 8 23:38:24.591000 audit: BPF prog-id=8 op=UNLOAD
Feb 8 23:38:24.592722 systemd[1]: Starting systemd-udevd.service...
Feb 8 23:38:24.609871 systemd-udevd[1192]: Using default interface naming scheme 'v252'.
Feb 8 23:38:24.980490 systemd[1]: Started systemd-udevd.service.
Feb 8 23:38:24.986000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:24.987000 audit: BPF prog-id=23 op=LOAD
Feb 8 23:38:24.990235 systemd[1]: Starting systemd-networkd.service...
Feb 8 23:38:25.025265 systemd[1]: Condition check resulted in dev-ttyS0.device being skipped.
Feb 8 23:38:25.082000 audit[1206]: AVC avc: denied { confidentiality } for pid=1206 comm="(udev-worker)" lockdown_reason="use of tracefs" scontext=system_u:system_r:kernel_t:s0 tcontext=system_u:system_r:kernel_t:s0 tclass=lockdown permissive=1
Feb 8 23:38:25.098472 kernel: hv_vmbus: registering driver hv_balloon
Feb 8 23:38:25.109494 kernel: mousedev: PS/2 mouse device common for all mice
Feb 8 23:38:25.113471 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0
Feb 8 23:38:25.124887 kernel: hv_vmbus: registering driver hyperv_fb
Feb 8 23:38:25.124964 kernel: hv_utils: Registering HyperV Utility Driver
Feb 8 23:38:25.124993 kernel: hv_vmbus: registering driver hv_utils
Feb 8 23:38:25.416273 kernel: hyperv_fb: Synthvid Version major 3, minor 5
Feb 8 23:38:25.416327 kernel: hv_utils: Heartbeat IC version 3.0
Feb 8 23:38:25.416349 kernel: hv_utils: Shutdown IC version 3.2
Feb 8 23:38:25.416372 kernel: hv_utils: TimeSync IC version 4.0
Feb 8 23:38:25.416395 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608
Feb 8 23:38:25.082000 audit[1206]: SYSCALL arch=c000003e syscall=175 success=yes exit=0 a0=55d30afaf620 a1=f884 a2=7f4e07f5ebc5 a3=5 items=12 ppid=1192 pid=1206 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="(udev-worker)" exe="/usr/bin/udevadm" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:38:25.082000 audit: CWD cwd="/"
Feb 8 23:38:25.082000 audit: PATH item=0 name=(null) inode=235 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:38:25.082000 audit: PATH item=1 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:38:25.082000 audit: PATH item=2 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:38:25.082000 audit: PATH item=3 name=(null) inode=14624 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:38:25.082000 audit: PATH item=4 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:38:25.082000 audit: PATH item=5 name=(null) inode=14625 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:38:25.082000 audit: PATH item=6 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:38:25.082000 audit: PATH item=7 name=(null) inode=14626 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:38:25.082000 audit: PATH item=8 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:38:25.082000 audit: PATH item=9 name=(null) inode=14627 dev=00:0b mode=0100640 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:38:25.082000 audit: PATH item=10 name=(null) inode=14623 dev=00:0b mode=040750 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:38:25.082000 audit: PATH item=11 name=(null) inode=14628 dev=00:0b mode=0100440 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:tracefs_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0
Feb 8 23:38:25.082000 audit: PROCTITLE proctitle="(udev-worker)"
Feb 8 23:38:25.424000 audit: BPF prog-id=24 op=LOAD
Feb 8 23:38:25.424000 audit: BPF prog-id=25 op=LOAD
Feb 8 23:38:25.424000 audit: BPF prog-id=26 op=LOAD
Feb 8 23:38:25.426712 systemd[1]: Starting systemd-userdbd.service...
Feb 8 23:38:25.433691 kernel: Console: switching to colour dummy device 80x25
Feb 8 23:38:25.439674 kernel: Console: switching to colour frame buffer device 128x48
Feb 8 23:38:25.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:25.493252 systemd[1]: Started systemd-userdbd.service.
Feb 8 23:38:25.642034 kernel: KVM: vmx: using Hyper-V Enlightened VMCS
Feb 8 23:38:25.692031 kernel: BTRFS info: devid 1 device path /dev/disk/by-label/OEM changed to /dev/sda6 scanned by (udev-worker) (1198)
Feb 8 23:38:25.712380 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device.
Feb 8 23:38:25.752793 systemd[1]: Finished systemd-udev-settle.service.
Feb 8 23:38:25.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:25.756814 systemd[1]: Starting lvm2-activation-early.service...
Feb 8 23:38:25.814329 systemd-networkd[1207]: lo: Link UP
Feb 8 23:38:25.814338 systemd-networkd[1207]: lo: Gained carrier
Feb 8 23:38:25.814886 systemd-networkd[1207]: Enumeration completed
Feb 8 23:38:25.814991 systemd[1]: Started systemd-networkd.service.
Feb 8 23:38:25.816000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:25.818394 systemd[1]: Starting systemd-networkd-wait-online.service...
Feb 8 23:38:25.844093 systemd-networkd[1207]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Feb 8 23:38:25.898047 kernel: mlx5_core 49ac:00:02.0 enP18860s1: Link up
Feb 8 23:38:25.944074 kernel: hv_netvsc 0022489b-2d15-0022-489b-2d150022489b eth0: Data path switched to VF: enP18860s1
Feb 8 23:38:25.945692 systemd-networkd[1207]: enP18860s1: Link UP
Feb 8 23:38:25.945868 systemd-networkd[1207]: eth0: Link UP
Feb 8 23:38:25.945875 systemd-networkd[1207]: eth0: Gained carrier
Feb 8 23:38:25.949286 systemd-networkd[1207]: enP18860s1: Gained carrier
Feb 8 23:38:25.976124 systemd-networkd[1207]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16
Feb 8 23:38:26.092305 lvm[1268]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 8 23:38:26.120930 systemd[1]: Finished lvm2-activation-early.service.
Feb 8 23:38:26.123330 systemd[1]: Reached target cryptsetup.target.
Feb 8 23:38:26.120000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:26.126394 systemd[1]: Starting lvm2-activation.service...
Feb 8 23:38:26.132291 lvm[1270]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Feb 8 23:38:26.157054 systemd[1]: Finished lvm2-activation.service.
Feb 8 23:38:26.158000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:26.159640 systemd[1]: Reached target local-fs-pre.target.
Feb 8 23:38:26.161854 systemd[1]: var-lib-machines.mount was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Feb 8 23:38:26.161891 systemd[1]: Reached target local-fs.target.
Feb 8 23:38:26.164013 systemd[1]: Reached target machines.target.
Feb 8 23:38:26.167050 systemd[1]: Starting ldconfig.service...
Feb 8 23:38:26.176042 systemd[1]: systemd-binfmt.service was skipped because no trigger condition checks were met.
Feb 8 23:38:26.176155 systemd[1]: systemd-boot-system-token.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 8 23:38:26.177354 systemd[1]: Starting systemd-boot-update.service...
Feb 8 23:38:26.180401 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service...
Feb 8 23:38:26.183875 systemd[1]: Starting systemd-machine-id-commit.service...
Feb 8 23:38:26.185957 systemd[1]: systemd-sysext.service was skipped because no trigger condition checks were met.
Feb 8 23:38:26.186061 systemd[1]: ensure-sysext.service was skipped because no trigger condition checks were met.
Feb 8 23:38:26.187164 systemd[1]: Starting systemd-tmpfiles-setup.service...
Feb 8 23:38:26.635616 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1272 (bootctl)
Feb 8 23:38:26.637590 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service...
Feb 8 23:38:26.640060 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring.
Feb 8 23:38:27.094300 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Feb 8 23:38:27.094930 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service.
Feb 8 23:38:27.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:27.127132 systemd-tmpfiles[1275]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Feb 8 23:38:27.255220 systemd-networkd[1207]: eth0: Gained IPv6LL
Feb 8 23:38:27.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:27.259095 systemd[1]: Finished systemd-networkd-wait-online.service.
Feb 8 23:38:27.271345 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Feb 8 23:38:27.272000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:27.273565 systemd[1]: Finished systemd-machine-id-commit.service.
Feb 8 23:38:28.140230 systemd-fsck[1281]: fsck.fat 4.2 (2021-01-31)
Feb 8 23:38:28.140230 systemd-fsck[1281]: /dev/sda1: 789 files, 115332/258078 clusters
Feb 8 23:38:28.142512 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service.
Feb 8 23:38:28.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:28.148245 systemd[1]: Mounting boot.mount...
Feb 8 23:38:28.151243 kernel: kauditd_printk_skb: 79 callbacks suppressed
Feb 8 23:38:28.151311 kernel: audit: type=1130 audit(1707435508.145:162): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:28.190972 systemd[1]: Mounted boot.mount.
Feb 8 23:38:28.205405 systemd[1]: Finished systemd-boot-update.service.
Feb 8 23:38:28.206000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:28.217229 kernel: audit: type=1130 audit(1707435508.206:163): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.499686 systemd[1]: Finished systemd-tmpfiles-setup.service.
Feb 8 23:38:29.501000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.503683 systemd[1]: Starting audit-rules.service...
Feb 8 23:38:29.514030 kernel: audit: type=1130 audit(1707435509.501:164): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.516628 systemd[1]: Starting clean-ca-certificates.service...
Feb 8 23:38:29.528034 kernel: audit: type=1334 audit(1707435509.522:165): prog-id=27 op=LOAD
Feb 8 23:38:29.522000 audit: BPF prog-id=27 op=LOAD
Feb 8 23:38:29.520050 systemd[1]: Starting systemd-journal-catalog-update.service...
Feb 8 23:38:29.524270 systemd[1]: Starting systemd-resolved.service...
Feb 8 23:38:29.529000 audit: BPF prog-id=28 op=LOAD
Feb 8 23:38:29.531182 systemd[1]: Starting systemd-timesyncd.service...
Feb 8 23:38:29.534028 kernel: audit: type=1334 audit(1707435509.529:166): prog-id=28 op=LOAD
Feb 8 23:38:29.536427 systemd[1]: Starting systemd-update-utmp.service...
Feb 8 23:38:29.586000 audit[1293]: SYSTEM_BOOT pid=1293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.601287 kernel: audit: type=1127 audit(1707435509.586:167): pid=1293 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.588528 systemd[1]: Finished systemd-update-utmp.service.
Feb 8 23:38:29.601000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.616037 kernel: audit: type=1130 audit(1707435509.601:168): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.618053 systemd[1]: Finished clean-ca-certificates.service.
Feb 8 23:38:29.619000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.620282 systemd[1]: update-ca-certificates.service was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Feb 8 23:38:29.631492 kernel: audit: type=1130 audit(1707435509.619:169): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.658539 systemd[1]: Started systemd-timesyncd.service.
Feb 8 23:38:29.660000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.661299 systemd[1]: Reached target time-set.target.
Feb 8 23:38:29.673412 kernel: audit: type=1130 audit(1707435509.660:170): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.690855 systemd[1]: Finished systemd-journal-catalog-update.service.
Feb 8 23:38:29.692000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.706288 kernel: audit: type=1130 audit(1707435509.692:171): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.736116 systemd-resolved[1290]: Positive Trust Anchors:
Feb 8 23:38:29.736134 systemd-resolved[1290]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Feb 8 23:38:29.736183 systemd-resolved[1290]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test
Feb 8 23:38:29.845602 systemd-resolved[1290]: Using system hostname 'ci-3510.3.2-a-3441531bae'.
Feb 8 23:38:29.847618 systemd[1]: Started systemd-resolved.service.
Feb 8 23:38:29.848233 systemd-timesyncd[1292]: Contacted time server 188.125.64.7:123 (0.flatcar.pool.ntp.org).
Feb 8 23:38:29.849000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Feb 8 23:38:29.848284 systemd-timesyncd[1292]: Initial clock synchronization to Thu 2024-02-08 23:38:29.839996 UTC.
Feb 8 23:38:29.849902 systemd[1]: Reached target network.target.
Feb 8 23:38:29.851823 systemd[1]: Reached target network-online.target.
Feb 8 23:38:29.853703 systemd[1]: Reached target nss-lookup.target.
Feb 8 23:38:29.927000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1
Feb 8 23:38:29.927000 audit[1308]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffefb122210 a2=420 a3=0 items=0 ppid=1287 pid=1308 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null)
Feb 8 23:38:29.927000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573
Feb 8 23:38:29.928899 augenrules[1308]: No rules
Feb 8 23:38:29.929595 systemd[1]: Finished audit-rules.service.
Feb 8 23:38:34.518582 ldconfig[1271]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Feb 8 23:38:34.528145 systemd[1]: Finished ldconfig.service.
Feb 8 23:38:34.531735 systemd[1]: Starting systemd-update-done.service...
Feb 8 23:38:34.554958 systemd[1]: Finished systemd-update-done.service.
Feb 8 23:38:34.557499 systemd[1]: Reached target sysinit.target.
Feb 8 23:38:34.559748 systemd[1]: Started motdgen.path.
Feb 8 23:38:34.561735 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path.
Feb 8 23:38:34.565027 systemd[1]: Started logrotate.timer.
Feb 8 23:38:34.566737 systemd[1]: Started mdadm.timer.
Feb 8 23:38:34.568249 systemd[1]: Started systemd-tmpfiles-clean.timer.
Feb 8 23:38:34.570209 systemd[1]: update-engine-stub.timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Feb 8 23:38:34.570243 systemd[1]: Reached target paths.target.
Feb 8 23:38:34.571902 systemd[1]: Reached target timers.target.
Feb 8 23:38:34.573855 systemd[1]: Listening on dbus.socket.
Feb 8 23:38:34.576559 systemd[1]: Starting docker.socket...
Feb 8 23:38:34.580788 systemd[1]: Listening on sshd.socket.
Feb 8 23:38:34.583043 systemd[1]: systemd-pcrphase-sysinit.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 8 23:38:34.583470 systemd[1]: Listening on docker.socket.
Feb 8 23:38:34.585386 systemd[1]: Reached target sockets.target.
Feb 8 23:38:34.587174 systemd[1]: Reached target basic.target.
Feb 8 23:38:34.588898 systemd[1]: addon-config@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 8 23:38:34.588928 systemd[1]: addon-run@usr-share-oem.service was skipped because no trigger condition checks were met.
Feb 8 23:38:34.589831 systemd[1]: Starting containerd.service...
Feb 8 23:38:34.592680 systemd[1]: Starting dbus.service...
Feb 8 23:38:34.595160 systemd[1]: Starting enable-oem-cloudinit.service...
Feb 8 23:38:34.598497 systemd[1]: Starting extend-filesystems.service...
Feb 8 23:38:34.600520 systemd[1]: flatcar-setup-environment.service was skipped because of an unmet condition check (ConditionPathExists=/usr/share/oem/bin/flatcar-setup-environment).
Feb 8 23:38:34.601540 systemd[1]: Starting motdgen.service...
Feb 8 23:38:34.604054 systemd[1]: Started nvidia.service.
Feb 8 23:38:34.607139 systemd[1]: Starting prepare-cni-plugins.service...
Feb 8 23:38:34.610026 systemd[1]: Starting prepare-critools.service...
Feb 8 23:38:34.613229 systemd[1]: Starting ssh-key-proc-cmdline.service...
Feb 8 23:38:34.616467 systemd[1]: Starting sshd-keygen.service...
Feb 8 23:38:34.621133 systemd[1]: Starting systemd-logind.service...
Feb 8 23:38:34.623141 systemd[1]: systemd-pcrphase.service was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Feb 8 23:38:34.623216 systemd[1]: tcsd.service was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Feb 8 23:38:34.623735 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Feb 8 23:38:34.624572 systemd[1]: Starting update-engine.service...
Feb 8 23:38:34.627528 systemd[1]: Starting update-ssh-keys-after-ignition.service...
Feb 8 23:38:34.633573 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Feb 8 23:38:34.633832 systemd[1]: Finished ssh-key-proc-cmdline.service.
Feb 8 23:38:34.675646 jq[1318]: false
Feb 8 23:38:34.676105 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Feb 8 23:38:34.676298 systemd[1]: Condition check resulted in enable-oem-cloudinit.service being skipped.
Feb 8 23:38:34.676553 jq[1333]: true
Feb 8 23:38:34.698398 systemd[1]: motdgen.service: Deactivated successfully.
Feb 8 23:38:34.698599 systemd[1]: Finished motdgen.service.
Feb 8 23:38:34.715488 jq[1340]: true
Feb 8 23:38:34.742416 extend-filesystems[1319]: Found sda
Feb 8 23:38:34.745347 extend-filesystems[1319]: Found sda1
Feb 8 23:38:34.745347 extend-filesystems[1319]: Found sda2
Feb 8 23:38:34.745347 extend-filesystems[1319]: Found sda3
Feb 8 23:38:34.745347 extend-filesystems[1319]: Found usr
Feb 8 23:38:34.745347 extend-filesystems[1319]: Found sda4
Feb 8 23:38:34.745347 extend-filesystems[1319]: Found sda6
Feb 8 23:38:34.745347 extend-filesystems[1319]: Found sda7
Feb 8 23:38:34.745347 extend-filesystems[1319]: Found sda9
Feb 8 23:38:34.745347 extend-filesystems[1319]: Checking size of /dev/sda9
Feb 8 23:38:34.770352 systemd-logind[1329]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Feb 8 23:38:34.774711 systemd-logind[1329]: New seat seat0.
Feb 8 23:38:34.788272 env[1342]: time="2024-02-08T23:38:34.788230627Z" level=info msg="starting containerd" revision=92b3a9d6f1b3bcc6dc74875cfdea653fe39f09c2 version=1.6.16
Feb 8 23:38:34.834423 env[1342]: time="2024-02-08T23:38:34.834383173Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
Feb 8 23:38:34.834759 env[1342]: time="2024-02-08T23:38:34.834732236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:38:34.836075 env[1342]: time="2024-02-08T23:38:34.836037923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.15.148-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1
Feb 8 23:38:34.836171 env[1342]: time="2024-02-08T23:38:34.836156276Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:38:34.836473 env[1342]: time="2024-02-08T23:38:34.836451160Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 8 23:38:34.836557 env[1342]: time="2024-02-08T23:38:34.836542824Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
Feb 8 23:38:34.836624 env[1342]: time="2024-02-08T23:38:34.836610297Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
Feb 8 23:38:34.836686 env[1342]: time="2024-02-08T23:38:34.836673673Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
Feb 8 23:38:34.836821 env[1342]: time="2024-02-08T23:38:34.836807720Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:38:34.839459 env[1342]: time="2024-02-08T23:38:34.839435586Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
Feb 8 23:38:34.839753 env[1342]: time="2024-02-08T23:38:34.839723473Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
Feb 8 23:38:34.842163 env[1342]: time="2024-02-08T23:38:34.842139523Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
Feb 8 23:38:34.842319 env[1342]: time="2024-02-08T23:38:34.842299060Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
Feb 8 23:38:34.842415 env[1342]: time="2024-02-08T23:38:34.842397721Z" level=info msg="metadata content store policy set" policy=shared
Feb 8 23:38:34.846204 extend-filesystems[1319]: Old size kept for /dev/sda9
Feb 8 23:38:34.851177 extend-filesystems[1319]: Found sr0
Feb 8 23:38:34.846755 systemd[1]: extend-filesystems.service: Deactivated successfully.
Feb 8 23:38:34.846918 systemd[1]: Finished extend-filesystems.service.
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.858645330Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.858686014Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.858706306Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.858751788Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.858771381Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.858789973Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.858853448Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.858872541Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.858928319Z" level=info msg="loading plugin \"io.containerd.service.v1.leases-service\"..." type=io.containerd.service.v1
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.858947111Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.858966104Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.858983297Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.859099452Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
Feb 8 23:38:34.860967 env[1342]: time="2024-02-08T23:38:34.859183119Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.859561370Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.859593357Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.859610950Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.859677024Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.859695317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.859713010Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.860092161Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.860114952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.860133445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.860150138Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.860166632Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.860188123Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.860317472Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.860335665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
Feb 8 23:38:34.862645 env[1342]: time="2024-02-08T23:38:34.860982411Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..."
type=io.containerd.grpc.v1 Feb 8 23:38:34.863171 bash[1360]: Updated "/home/core/.ssh/authorized_keys" Feb 8 23:38:34.869929 tar[1336]: crictl Feb 8 23:38:34.870200 env[1342]: time="2024-02-08T23:38:34.866348600Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Feb 8 23:38:34.870200 env[1342]: time="2024-02-08T23:38:34.866373790Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Feb 8 23:38:34.863944 systemd[1]: Finished update-ssh-keys-after-ignition.service. Feb 8 23:38:34.870363 tar[1335]: ./ Feb 8 23:38:34.870363 tar[1335]: ./loopback Feb 8 23:38:34.873067 env[1342]: time="2024-02-08T23:38:34.871167405Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Feb 8 23:38:34.873067 env[1342]: time="2024-02-08T23:38:34.871205490Z" level=error msg="failed to initialize a tracing processor \"otlp\"" error="no OpenTelemetry endpoint: skip plugin" Feb 8 23:38:34.873067 env[1342]: time="2024-02-08T23:38:34.871248473Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Feb 8 23:38:34.873241 env[1342]: time="2024-02-08T23:38:34.871510370Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.6 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false} ContainerdRootDir:/var/lib/containerd 
ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Feb 8 23:38:34.873241 env[1342]: time="2024-02-08T23:38:34.871583041Z" level=info msg="Connect containerd service" Feb 8 23:38:34.873241 env[1342]: time="2024-02-08T23:38:34.871627224Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Feb 8 23:38:34.873241 env[1342]: time="2024-02-08T23:38:34.872343942Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Feb 8 23:38:34.905510 env[1342]: time="2024-02-08T23:38:34.875437425Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Feb 8 23:38:34.905510 env[1342]: time="2024-02-08T23:38:34.875484906Z" level=info msg=serving... address=/run/containerd/containerd.sock Feb 8 23:38:34.905510 env[1342]: time="2024-02-08T23:38:34.875798183Z" level=info msg="containerd successfully booted in 0.088392s" Feb 8 23:38:34.905510 env[1342]: time="2024-02-08T23:38:34.877702534Z" level=info msg="Start subscribing containerd event" Feb 8 23:38:34.905510 env[1342]: time="2024-02-08T23:38:34.877775705Z" level=info msg="Start recovering state" Feb 8 23:38:34.905510 env[1342]: time="2024-02-08T23:38:34.882022935Z" level=info msg="Start event monitor" Feb 8 23:38:34.905510 env[1342]: time="2024-02-08T23:38:34.882054622Z" level=info msg="Start snapshots syncer" Feb 8 23:38:34.905510 env[1342]: time="2024-02-08T23:38:34.882069616Z" level=info msg="Start cni network conf syncer for default" Feb 8 23:38:34.905510 env[1342]: time="2024-02-08T23:38:34.882087409Z" level=info msg="Start streaming server" Feb 8 23:38:34.875586 systemd[1]: Started containerd.service. 
Feb 8 23:38:34.955997 tar[1335]: ./bandwidth Feb 8 23:38:35.061799 tar[1335]: ./ptp Feb 8 23:38:35.074234 systemd[1]: nvidia.service: Deactivated successfully. Feb 8 23:38:35.112570 dbus-daemon[1317]: [system] SELinux support is enabled Feb 8 23:38:35.118536 dbus-daemon[1317]: [system] Successfully activated service 'org.freedesktop.systemd1' Feb 8 23:38:35.112725 systemd[1]: Started dbus.service. Feb 8 23:38:35.117958 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Feb 8 23:38:35.117985 systemd[1]: Reached target system-config.target. Feb 8 23:38:35.120341 systemd[1]: user-cloudinit-proc-cmdline.service was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Feb 8 23:38:35.120362 systemd[1]: Reached target user-config.target. Feb 8 23:38:35.122534 systemd[1]: Started systemd-logind.service. Feb 8 23:38:35.177477 tar[1335]: ./vlan Feb 8 23:38:35.269458 tar[1335]: ./host-device Feb 8 23:38:35.354882 tar[1335]: ./tuning Feb 8 23:38:35.433681 tar[1335]: ./vrf Feb 8 23:38:35.519967 tar[1335]: ./sbr Feb 8 23:38:35.541889 systemd[1]: Finished prepare-critools.service. Feb 8 23:38:35.570979 tar[1335]: ./tap Feb 8 23:38:35.614326 update_engine[1332]: I0208 23:38:35.613474 1332 main.cc:92] Flatcar Update Engine starting Feb 8 23:38:35.618555 tar[1335]: ./dhcp Feb 8 23:38:35.666918 systemd[1]: Started update-engine.service. Feb 8 23:38:35.667537 update_engine[1332]: I0208 23:38:35.667424 1332 update_check_scheduler.cc:74] Next update check in 4m19s Feb 8 23:38:35.671779 systemd[1]: Started locksmithd.service. 
Feb 8 23:38:35.736781 tar[1335]: ./static Feb 8 23:38:35.768738 tar[1335]: ./firewall Feb 8 23:38:35.817475 tar[1335]: ./macvlan Feb 8 23:38:35.861925 tar[1335]: ./dummy Feb 8 23:38:35.905603 tar[1335]: ./bridge Feb 8 23:38:35.953673 tar[1335]: ./ipvlan Feb 8 23:38:35.997883 tar[1335]: ./portmap Feb 8 23:38:36.039603 tar[1335]: ./host-local Feb 8 23:38:36.122342 systemd[1]: Finished prepare-cni-plugins.service. Feb 8 23:38:37.332360 locksmithd[1423]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Feb 8 23:38:37.940770 sshd_keygen[1339]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Feb 8 23:38:37.960210 systemd[1]: Finished sshd-keygen.service. Feb 8 23:38:37.964258 systemd[1]: Starting issuegen.service... Feb 8 23:38:37.967607 systemd[1]: Started waagent.service. Feb 8 23:38:37.973578 systemd[1]: issuegen.service: Deactivated successfully. Feb 8 23:38:37.973710 systemd[1]: Finished issuegen.service. Feb 8 23:38:37.976889 systemd[1]: Starting systemd-user-sessions.service... Feb 8 23:38:37.984145 systemd[1]: Finished systemd-user-sessions.service. Feb 8 23:38:37.987519 systemd[1]: Started getty@tty1.service. Feb 8 23:38:37.990762 systemd[1]: Started serial-getty@ttyS0.service. Feb 8 23:38:37.993151 systemd[1]: Reached target getty.target. Feb 8 23:38:37.995273 systemd[1]: Reached target multi-user.target. Feb 8 23:38:37.998730 systemd[1]: Starting systemd-update-utmp-runlevel.service... Feb 8 23:38:38.005885 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Feb 8 23:38:38.006092 systemd[1]: Finished systemd-update-utmp-runlevel.service. Feb 8 23:38:38.008645 systemd[1]: Startup finished in 918ms (firmware) + 28.044s (loader) + 859ms (kernel) + 15.500s (initrd) + 26.524s (userspace) = 1min 11.847s. 
Feb 8 23:38:38.432244 login[1443]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:38:38.433764 login[1444]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Feb 8 23:38:38.456992 systemd[1]: Created slice user-500.slice. Feb 8 23:38:38.458321 systemd[1]: Starting user-runtime-dir@500.service... Feb 8 23:38:38.460751 systemd-logind[1329]: New session 2 of user core. Feb 8 23:38:38.464512 systemd-logind[1329]: New session 1 of user core. Feb 8 23:38:38.495625 systemd[1]: Finished user-runtime-dir@500.service. Feb 8 23:38:38.497313 systemd[1]: Starting user@500.service... Feb 8 23:38:38.500484 (systemd)[1447]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Feb 8 23:38:38.774519 systemd[1447]: Queued start job for default target default.target. Feb 8 23:38:38.775261 systemd[1447]: Reached target paths.target. Feb 8 23:38:38.775301 systemd[1447]: Reached target sockets.target. Feb 8 23:38:38.775322 systemd[1447]: Reached target timers.target. Feb 8 23:38:38.775342 systemd[1447]: Reached target basic.target. Feb 8 23:38:38.775409 systemd[1447]: Reached target default.target. Feb 8 23:38:38.775453 systemd[1447]: Startup finished in 269ms. Feb 8 23:38:38.775493 systemd[1]: Started user@500.service. Feb 8 23:38:38.777105 systemd[1]: Started session-1.scope. Feb 8 23:38:38.778120 systemd[1]: Started session-2.scope. 
Feb 8 23:38:44.683984 waagent[1438]: 2024-02-08T23:38:44.683868Z INFO Daemon Daemon Azure Linux Agent Version:2.6.0.2 Feb 8 23:38:44.687744 waagent[1438]: 2024-02-08T23:38:44.687670Z INFO Daemon Daemon OS: flatcar 3510.3.2 Feb 8 23:38:44.689965 waagent[1438]: 2024-02-08T23:38:44.689904Z INFO Daemon Daemon Python: 3.9.16 Feb 8 23:38:44.692155 waagent[1438]: 2024-02-08T23:38:44.692080Z INFO Daemon Daemon Run daemon Feb 8 23:38:44.694520 waagent[1438]: 2024-02-08T23:38:44.694451Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3510.3.2' Feb 8 23:38:44.706664 waagent[1438]: 2024-02-08T23:38:44.706554Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 8 23:38:44.712747 waagent[1438]: 2024-02-08T23:38:44.712642Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:38:44.735624 waagent[1438]: 2024-02-08T23:38:44.713864Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:38:44.735624 waagent[1438]: 2024-02-08T23:38:44.714633Z INFO Daemon Daemon Using waagent for provisioning Feb 8 23:38:44.735624 waagent[1438]: 2024-02-08T23:38:44.715904Z INFO Daemon Daemon Activate resource disk Feb 8 23:38:44.735624 waagent[1438]: 2024-02-08T23:38:44.716548Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Feb 8 23:38:44.735624 waagent[1438]: 2024-02-08T23:38:44.724524Z INFO Daemon Daemon Found device: None Feb 8 23:38:44.735624 waagent[1438]: 2024-02-08T23:38:44.726174Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Feb 8 23:38:44.735624 waagent[1438]: 2024-02-08T23:38:44.726906Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Feb 8 
23:38:44.735624 waagent[1438]: 2024-02-08T23:38:44.728525Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:38:44.735624 waagent[1438]: 2024-02-08T23:38:44.728946Z INFO Daemon Daemon Running default provisioning handler Feb 8 23:38:44.741101 waagent[1438]: 2024-02-08T23:38:44.740928Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Feb 8 23:38:44.747708 waagent[1438]: 2024-02-08T23:38:44.747600Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Feb 8 23:38:44.755517 waagent[1438]: 2024-02-08T23:38:44.748953Z INFO Daemon Daemon cloud-init is enabled: False Feb 8 23:38:44.755517 waagent[1438]: 2024-02-08T23:38:44.749640Z INFO Daemon Daemon Copying ovf-env.xml Feb 8 23:38:44.787712 waagent[1438]: 2024-02-08T23:38:44.787593Z INFO Daemon Daemon Successfully mounted dvd Feb 8 23:38:44.901125 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Feb 8 23:38:44.935109 waagent[1438]: 2024-02-08T23:38:44.934875Z INFO Daemon Daemon Detect protocol endpoint Feb 8 23:38:44.938439 waagent[1438]: 2024-02-08T23:38:44.938346Z INFO Daemon Daemon Clean protocol and wireserver endpoint Feb 8 23:38:44.941947 waagent[1438]: 2024-02-08T23:38:44.941865Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Feb 8 23:38:44.944931 waagent[1438]: 2024-02-08T23:38:44.944869Z INFO Daemon Daemon Test for route to 168.63.129.16 Feb 8 23:38:44.947545 waagent[1438]: 2024-02-08T23:38:44.947480Z INFO Daemon Daemon Route to 168.63.129.16 exists Feb 8 23:38:44.950183 waagent[1438]: 2024-02-08T23:38:44.950123Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Feb 8 23:38:45.064872 waagent[1438]: 2024-02-08T23:38:45.064789Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Feb 8 23:38:45.071869 waagent[1438]: 2024-02-08T23:38:45.066664Z INFO Daemon Daemon Wire protocol version:2012-11-30 Feb 8 23:38:45.071869 waagent[1438]: 2024-02-08T23:38:45.067244Z INFO Daemon Daemon Server preferred version:2015-04-05 Feb 8 23:38:45.312798 waagent[1438]: 2024-02-08T23:38:45.312595Z INFO Daemon Daemon Initializing goal state during protocol detection Feb 8 23:38:45.324975 waagent[1438]: 2024-02-08T23:38:45.324902Z INFO Daemon Daemon Forcing an update of the goal state.. Feb 8 23:38:45.327825 waagent[1438]: 2024-02-08T23:38:45.327762Z INFO Daemon Daemon Fetching goal state [incarnation 1] Feb 8 23:38:45.411391 waagent[1438]: 2024-02-08T23:38:45.411270Z INFO Daemon Daemon Found private key matching thumbprint 8F537A584C040D70F4BA20030D9975C1BFF109E7 Feb 8 23:38:45.422325 waagent[1438]: 2024-02-08T23:38:45.412849Z INFO Daemon Daemon Certificate with thumbprint 035A6247B8BE5A766554FF61CB2600A9E432E88D has no matching private key. 
Feb 8 23:38:45.422325 waagent[1438]: 2024-02-08T23:38:45.413890Z INFO Daemon Daemon Fetch goal state completed Feb 8 23:38:45.475971 waagent[1438]: 2024-02-08T23:38:45.475887Z INFO Daemon Daemon Fetched new vmSettings [correlation ID: 40d1ffcf-6d86-4761-bd7f-31149e239ac5 New eTag: 563450397089916450] Feb 8 23:38:45.484262 waagent[1438]: 2024-02-08T23:38:45.477827Z INFO Daemon Daemon Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:38:45.490893 waagent[1438]: 2024-02-08T23:38:45.490840Z INFO Daemon Daemon Starting provisioning Feb 8 23:38:45.497044 waagent[1438]: 2024-02-08T23:38:45.491996Z INFO Daemon Daemon Handle ovf-env.xml. Feb 8 23:38:45.497044 waagent[1438]: 2024-02-08T23:38:45.492898Z INFO Daemon Daemon Set hostname [ci-3510.3.2-a-3441531bae] Feb 8 23:38:45.513479 waagent[1438]: 2024-02-08T23:38:45.513380Z INFO Daemon Daemon Publish hostname [ci-3510.3.2-a-3441531bae] Feb 8 23:38:45.521046 waagent[1438]: 2024-02-08T23:38:45.514847Z INFO Daemon Daemon Examine /proc/net/route for primary interface Feb 8 23:38:45.521046 waagent[1438]: 2024-02-08T23:38:45.515687Z INFO Daemon Daemon Primary interface is [eth0] Feb 8 23:38:45.528974 systemd[1]: systemd-networkd-wait-online.service: Deactivated successfully. Feb 8 23:38:45.529227 systemd[1]: Stopped systemd-networkd-wait-online.service. Feb 8 23:38:45.529300 systemd[1]: Stopping systemd-networkd-wait-online.service... Feb 8 23:38:45.529632 systemd[1]: Stopping systemd-networkd.service... Feb 8 23:38:45.534056 systemd-networkd[1207]: eth0: DHCPv6 lease lost Feb 8 23:38:45.535590 systemd[1]: systemd-networkd.service: Deactivated successfully. Feb 8 23:38:45.535753 systemd[1]: Stopped systemd-networkd.service. Feb 8 23:38:45.538161 systemd[1]: Starting systemd-networkd.service... 
Feb 8 23:38:45.568614 systemd-networkd[1490]: enP18860s1: Link UP Feb 8 23:38:45.568624 systemd-networkd[1490]: enP18860s1: Gained carrier Feb 8 23:38:45.569999 systemd-networkd[1490]: eth0: Link UP Feb 8 23:38:45.570017 systemd-networkd[1490]: eth0: Gained carrier Feb 8 23:38:45.570447 systemd-networkd[1490]: lo: Link UP Feb 8 23:38:45.570456 systemd-networkd[1490]: lo: Gained carrier Feb 8 23:38:45.570759 systemd-networkd[1490]: eth0: Gained IPv6LL Feb 8 23:38:45.571299 systemd-networkd[1490]: Enumeration completed Feb 8 23:38:45.571393 systemd[1]: Started systemd-networkd.service. Feb 8 23:38:45.573617 systemd[1]: Starting systemd-networkd-wait-online.service... Feb 8 23:38:45.576346 systemd-networkd[1490]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Feb 8 23:38:45.580803 waagent[1438]: 2024-02-08T23:38:45.580664Z INFO Daemon Daemon Create user account if not exists Feb 8 23:38:45.590505 waagent[1438]: 2024-02-08T23:38:45.582524Z INFO Daemon Daemon User core already exists, skip useradd Feb 8 23:38:45.590505 waagent[1438]: 2024-02-08T23:38:45.583377Z INFO Daemon Daemon Configure sudoer Feb 8 23:38:45.590505 waagent[1438]: 2024-02-08T23:38:45.584728Z INFO Daemon Daemon Configure sshd Feb 8 23:38:45.590505 waagent[1438]: 2024-02-08T23:38:45.585332Z INFO Daemon Daemon Deploy ssh public key. Feb 8 23:38:45.620096 systemd-networkd[1490]: eth0: DHCPv4 address 10.200.8.12/24, gateway 10.200.8.1 acquired from 168.63.129.16 Feb 8 23:38:45.631136 waagent[1438]: 2024-02-08T23:38:45.626187Z INFO Daemon Daemon Decode custom data Feb 8 23:38:45.631136 waagent[1438]: 2024-02-08T23:38:45.627860Z INFO Daemon Daemon Save custom data Feb 8 23:38:45.631701 systemd[1]: Finished systemd-networkd-wait-online.service. 
Feb 8 23:38:46.876535 waagent[1438]: 2024-02-08T23:38:46.876432Z INFO Daemon Daemon Provisioning complete Feb 8 23:38:46.890337 waagent[1438]: 2024-02-08T23:38:46.890259Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Feb 8 23:38:46.896964 waagent[1438]: 2024-02-08T23:38:46.891639Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. Feb 8 23:38:46.896964 waagent[1438]: 2024-02-08T23:38:46.893525Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.6.0.2 is the most current agent Feb 8 23:38:47.158853 waagent[1499]: 2024-02-08T23:38:47.158687Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 is running as the goal state agent Feb 8 23:38:47.159817 waagent[1499]: 2024-02-08T23:38:47.159738Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:38:47.159971 waagent[1499]: 2024-02-08T23:38:47.159913Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:38:47.170778 waagent[1499]: 2024-02-08T23:38:47.170696Z INFO ExtHandler ExtHandler Forcing an update of the goal state.. Feb 8 23:38:47.170953 waagent[1499]: 2024-02-08T23:38:47.170895Z INFO ExtHandler ExtHandler Fetching goal state [incarnation 1] Feb 8 23:38:47.231546 waagent[1499]: 2024-02-08T23:38:47.231419Z INFO ExtHandler ExtHandler Found private key matching thumbprint 8F537A584C040D70F4BA20030D9975C1BFF109E7 Feb 8 23:38:47.231774 waagent[1499]: 2024-02-08T23:38:47.231710Z INFO ExtHandler ExtHandler Certificate with thumbprint 035A6247B8BE5A766554FF61CB2600A9E432E88D has no matching private key. 
Feb 8 23:38:47.232026 waagent[1499]: 2024-02-08T23:38:47.231960Z INFO ExtHandler ExtHandler Fetch goal state completed Feb 8 23:38:47.246128 waagent[1499]: 2024-02-08T23:38:47.246071Z INFO ExtHandler ExtHandler Fetched new vmSettings [correlation ID: 7685cd91-a655-49e6-8a58-eee0e39cdd37 New eTag: 563450397089916450] Feb 8 23:38:47.246684 waagent[1499]: 2024-02-08T23:38:47.246626Z INFO ExtHandler ExtHandler Status Blob type 'None' is not valid, assuming BlockBlob Feb 8 23:38:47.314330 waagent[1499]: 2024-02-08T23:38:47.314183Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:38:47.339812 waagent[1499]: 2024-02-08T23:38:47.339715Z INFO ExtHandler ExtHandler WALinuxAgent-2.6.0.2 running as process 1499 Feb 8 23:38:47.343274 waagent[1499]: 2024-02-08T23:38:47.343205Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk'] Feb 8 23:38:47.344521 waagent[1499]: 2024-02-08T23:38:47.344461Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Feb 8 23:38:47.419243 waagent[1499]: 2024-02-08T23:38:47.419129Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Feb 8 23:38:47.419593 waagent[1499]: 2024-02-08T23:38:47.419528Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup Feb 8 23:38:47.427404 waagent[1499]: 2024-02-08T23:38:47.427348Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. 
Adding it now Feb 8 23:38:47.427845 waagent[1499]: 2024-02-08T23:38:47.427786Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service' Feb 8 23:38:47.428886 waagent[1499]: 2024-02-08T23:38:47.428820Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [False], cgroups enabled [False], python supported: [True] Feb 8 23:38:47.430154 waagent[1499]: 2024-02-08T23:38:47.430095Z INFO ExtHandler ExtHandler Starting env monitor service. Feb 8 23:38:47.430765 waagent[1499]: 2024-02-08T23:38:47.430707Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. Feb 8 23:38:47.431332 waagent[1499]: 2024-02-08T23:38:47.431271Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Feb 8 23:38:47.431460 waagent[1499]: 2024-02-08T23:38:47.431382Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:38:47.431541 waagent[1499]: 2024-02-08T23:38:47.431489Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Feb 8 23:38:47.432075 waagent[1499]: 2024-02-08T23:38:47.431982Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:38:47.432291 waagent[1499]: 2024-02-08T23:38:47.432236Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:38:47.432824 waagent[1499]: 2024-02-08T23:38:47.432767Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Feb 8 23:38:47.432989 waagent[1499]: 2024-02-08T23:38:47.432922Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:38:47.433102 waagent[1499]: 2024-02-08T23:38:47.433040Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
Feb 8 23:38:47.434673 waagent[1499]: 2024-02-08T23:38:47.434611Z INFO EnvHandler ExtHandler Configure routes Feb 8 23:38:47.434996 waagent[1499]: 2024-02-08T23:38:47.434944Z INFO EnvHandler ExtHandler Gateway:None Feb 8 23:38:47.435208 waagent[1499]: 2024-02-08T23:38:47.435159Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Feb 8 23:38:47.436062 waagent[1499]: 2024-02-08T23:38:47.435981Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Feb 8 23:38:47.436269 waagent[1499]: 2024-02-08T23:38:47.436216Z INFO EnvHandler ExtHandler Routes:None Feb 8 23:38:47.438141 waagent[1499]: 2024-02-08T23:38:47.438075Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Feb 8 23:38:47.438141 waagent[1499]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Feb 8 23:38:47.438141 waagent[1499]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Feb 8 23:38:47.438141 waagent[1499]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Feb 8 23:38:47.438141 waagent[1499]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:38:47.438141 waagent[1499]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:38:47.438141 waagent[1499]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Feb 8 23:38:47.455501 waagent[1499]: 2024-02-08T23:38:47.455237Z INFO ExtHandler ExtHandler Checking for agent updates (family: Prod) Feb 8 23:38:47.458178 waagent[1499]: 2024-02-08T23:38:47.458120Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required Feb 8 23:38:47.459824 waagent[1499]: 2024-02-08T23:38:47.459763Z INFO ExtHandler ExtHandler [PERIODIC] Request failed using the direct channel. 
Error: 'NoneType' object has no attribute 'getheaders' Feb 8 23:38:47.490946 waagent[1499]: 2024-02-08T23:38:47.490845Z ERROR EnvHandler ExtHandler Failed to get the PID of the DHCP client: invalid literal for int() with base 10: 'MainPID=1490' Feb 8 23:38:47.505432 waagent[1499]: 2024-02-08T23:38:47.505350Z INFO ExtHandler ExtHandler Default channel changed to HostGA channel. Feb 8 23:38:47.599485 waagent[1499]: 2024-02-08T23:38:47.599370Z INFO MonitorHandler ExtHandler Network interfaces: Feb 8 23:38:47.599485 waagent[1499]: Executing ['ip', '-a', '-o', 'link']: Feb 8 23:38:47.599485 waagent[1499]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Feb 8 23:38:47.599485 waagent[1499]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9b:2d:15 brd ff:ff:ff:ff:ff:ff Feb 8 23:38:47.599485 waagent[1499]: 3: enP18860s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9b:2d:15 brd ff:ff:ff:ff:ff:ff\ altname enP18860p0s2 Feb 8 23:38:47.599485 waagent[1499]: Executing ['ip', '-4', '-a', '-o', 'address']: Feb 8 23:38:47.599485 waagent[1499]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Feb 8 23:38:47.599485 waagent[1499]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever Feb 8 23:38:47.599485 waagent[1499]: Executing ['ip', '-6', '-a', '-o', 'address']: Feb 8 23:38:47.599485 waagent[1499]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Feb 8 23:38:47.599485 waagent[1499]: 2: eth0 inet6 fe80::222:48ff:fe9b:2d15/64 scope link \ valid_lft forever preferred_lft forever Feb 8 23:38:47.845678 waagent[1499]: 2024-02-08T23:38:47.845538Z INFO ExtHandler ExtHandler Agent WALinuxAgent-2.6.0.2 discovered update WALinuxAgent-2.9.1.1 -- exiting Feb 8 23:38:47.897286 waagent[1438]: 
2024-02-08T23:38:47.897150Z INFO Daemon Daemon Agent WALinuxAgent-2.6.0.2 launched with command '/usr/share/oem/python/bin/python -u /usr/share/oem/bin/waagent -run-exthandlers' is successfully running Feb 8 23:38:47.902245 waagent[1438]: 2024-02-08T23:38:47.902180Z INFO Daemon Daemon Determined Agent WALinuxAgent-2.9.1.1 to be the latest agent Feb 8 23:38:48.902410 waagent[1538]: 2024-02-08T23:38:48.902303Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Feb 8 23:38:48.903202 waagent[1538]: 2024-02-08T23:38:48.903054Z INFO ExtHandler ExtHandler OS: flatcar 3510.3.2 Feb 8 23:38:48.903325 waagent[1538]: 2024-02-08T23:38:48.903224Z INFO ExtHandler ExtHandler Python: 3.9.16 Feb 8 23:38:48.912817 waagent[1538]: 2024-02-08T23:38:48.912713Z INFO ExtHandler ExtHandler Distro: flatcar-3510.3.2; OSUtil: CoreOSUtil; AgentService: waagent; Python: 3.9.16; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Feb 8 23:38:48.913234 waagent[1538]: 2024-02-08T23:38:48.913175Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Feb 8 23:38:48.913397 waagent[1538]: 2024-02-08T23:38:48.913347Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Feb 8 23:38:48.924962 waagent[1538]: 2024-02-08T23:38:48.924888Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Feb 8 23:38:48.933454 waagent[1538]: 2024-02-08T23:38:48.933390Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.143 Feb 8 23:38:48.934343 waagent[1538]: 2024-02-08T23:38:48.934284Z INFO ExtHandler Feb 8 23:38:48.934489 waagent[1538]: 2024-02-08T23:38:48.934439Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: e8a09ed7-66e5-459f-a702-45ded739ef68 eTag: 563450397089916450 source: Fabric] Feb 8 23:38:48.935193 waagent[1538]: 2024-02-08T23:38:48.935135Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
Feb 8 23:38:48.936257 waagent[1538]: 2024-02-08T23:38:48.936197Z INFO ExtHandler
Feb 8 23:38:48.936390 waagent[1538]: 2024-02-08T23:38:48.936338Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1]
Feb 8 23:38:48.943128 waagent[1538]: 2024-02-08T23:38:48.943076Z INFO ExtHandler ExtHandler Downloading artifacts profile blob
Feb 8 23:38:48.943549 waagent[1538]: 2024-02-08T23:38:48.943500Z WARNING ExtHandler ExtHandler Fetch failed: [HttpError] HTTPS is unavailable and required
Feb 8 23:38:48.964538 waagent[1538]: 2024-02-08T23:38:48.964484Z INFO ExtHandler ExtHandler Default channel changed to HostGAPlugin channel.
Feb 8 23:38:49.028591 waagent[1538]: 2024-02-08T23:38:49.028468Z INFO ExtHandler Downloaded certificate {'thumbprint': '8F537A584C040D70F4BA20030D9975C1BFF109E7', 'hasPrivateKey': True}
Feb 8 23:38:49.029515 waagent[1538]: 2024-02-08T23:38:49.029454Z INFO ExtHandler Downloaded certificate {'thumbprint': '035A6247B8BE5A766554FF61CB2600A9E432E88D', 'hasPrivateKey': False}
Feb 8 23:38:49.030450 waagent[1538]: 2024-02-08T23:38:49.030387Z INFO ExtHandler Fetch goal state completed
Feb 8 23:38:49.051871 waagent[1538]: 2024-02-08T23:38:49.051803Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1538
Feb 8 23:38:49.055069 waagent[1538]: 2024-02-08T23:38:49.054990Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3510.3.2', '', 'Flatcar Container Linux by Kinvolk']
Feb 8 23:38:49.056528 waagent[1538]: 2024-02-08T23:38:49.056470Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules
Feb 8 23:38:49.061100 waagent[1538]: 2024-02-08T23:38:49.061046Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service
Feb 8 23:38:49.061445 waagent[1538]: 2024-02-08T23:38:49.061388Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup
Feb 8 23:38:49.068984 waagent[1538]: 2024-02-08T23:38:49.068932Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now
Feb 8 23:38:49.069446 waagent[1538]: 2024-02-08T23:38:49.069386Z ERROR ExtHandler ExtHandler Unable to setup the persistent firewall rules: [Errno 30] Read-only file system: '/lib/systemd/system/waagent-network-setup.service'
Feb 8 23:38:49.075154 waagent[1538]: 2024-02-08T23:38:49.075060Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up.
Feb 8 23:38:49.079731 waagent[1538]: 2024-02-08T23:38:49.079672Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True]
Feb 8 23:38:49.081088 waagent[1538]: 2024-02-08T23:38:49.081028Z INFO ExtHandler ExtHandler Starting env monitor service.
Feb 8 23:38:49.081659 waagent[1538]: 2024-02-08T23:38:49.081603Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:38:49.082085 waagent[1538]: 2024-02-08T23:38:49.082028Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service.
Feb 8 23:38:49.082436 waagent[1538]: 2024-02-08T23:38:49.082383Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 8 23:38:49.082838 waagent[1538]: 2024-02-08T23:38:49.082783Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file
Feb 8 23:38:49.082990 waagent[1538]: 2024-02-08T23:38:49.082944Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16
Feb 8 23:38:49.083398 waagent[1538]: 2024-02-08T23:38:49.083342Z INFO EnvHandler ExtHandler Configure routes
Feb 8 23:38:49.084242 waagent[1538]: 2024-02-08T23:38:49.084185Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled.
Feb 8 23:38:49.084429 waagent[1538]: 2024-02-08T23:38:49.084346Z INFO EnvHandler ExtHandler Gateway:None
Feb 8 23:38:49.084772 waagent[1538]: 2024-02-08T23:38:49.084721Z INFO EnvHandler ExtHandler Routes:None
Feb 8 23:38:49.084991 waagent[1538]: 2024-02-08T23:38:49.084936Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread
Feb 8 23:38:49.085259 waagent[1538]: 2024-02-08T23:38:49.085208Z INFO ExtHandler ExtHandler Start Extension Telemetry service.
Feb 8 23:38:49.085415 waagent[1538]: 2024-02-08T23:38:49.085345Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route:
Feb 8 23:38:49.085415 waagent[1538]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT
Feb 8 23:38:49.085415 waagent[1538]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0
Feb 8 23:38:49.085415 waagent[1538]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0
Feb 8 23:38:49.085415 waagent[1538]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0
Feb 8 23:38:49.085415 waagent[1538]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 8 23:38:49.085415 waagent[1538]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0
Feb 8 23:38:49.092267 waagent[1538]: 2024-02-08T23:38:49.092139Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status.
Feb 8 23:38:49.094790 waagent[1538]: 2024-02-08T23:38:49.091697Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True
Feb 8 23:38:49.099140 waagent[1538]: 2024-02-08T23:38:49.099057Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread
Feb 8 23:38:49.117144 waagent[1538]: 2024-02-08T23:38:49.117060Z INFO MonitorHandler ExtHandler Network interfaces:
Feb 8 23:38:49.117144 waagent[1538]: Executing ['ip', '-a', '-o', 'link']:
Feb 8 23:38:49.117144 waagent[1538]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
Feb 8 23:38:49.117144 waagent[1538]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9b:2d:15 brd ff:ff:ff:ff:ff:ff
Feb 8 23:38:49.117144 waagent[1538]: 3: enP18860s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:22:48:9b:2d:15 brd ff:ff:ff:ff:ff:ff\ altname enP18860p0s2
Feb 8 23:38:49.117144 waagent[1538]: Executing ['ip', '-4', '-a', '-o', 'address']:
Feb 8 23:38:49.117144 waagent[1538]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever
Feb 8 23:38:49.117144 waagent[1538]: 2: eth0 inet 10.200.8.12/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever
Feb 8 23:38:49.117144 waagent[1538]: Executing ['ip', '-6', '-a', '-o', 'address']:
Feb 8 23:38:49.117144 waagent[1538]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever
Feb 8 23:38:49.117144 waagent[1538]: 2: eth0 inet6 fe80::222:48ff:fe9b:2d15/64 scope link \ valid_lft forever preferred_lft forever
Feb 8 23:38:49.117878 waagent[1538]: 2024-02-08T23:38:49.117819Z INFO ExtHandler ExtHandler No requested version specified, checking for all versions for agent update (family: Prod)
Feb 8 23:38:49.121303 waagent[1538]: 2024-02-08T23:38:49.121175Z INFO ExtHandler ExtHandler Downloading manifest
Feb 8 23:38:49.163639 waagent[1538]: 2024-02-08T23:38:49.163536Z INFO ExtHandler ExtHandler
Feb 8 23:38:49.163748 waagent[1538]: 2024-02-08T23:38:49.163682Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 27dc9c82-657c-42f4-84c5-6921aa122067 correlation 3deda191-bc9c-4061-a6a3-b006985dde5a created: 2024-02-08T23:37:15.721474Z]
Feb 8 23:38:49.164518 waagent[1538]: 2024-02-08T23:38:49.164457Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything.
Feb 8 23:38:49.166313 waagent[1538]: 2024-02-08T23:38:49.166258Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms]
Feb 8 23:38:49.196094 waagent[1538]: 2024-02-08T23:38:49.196033Z INFO ExtHandler ExtHandler Looking for existing remote access users.
Feb 8 23:38:49.220403 waagent[1538]: 2024-02-08T23:38:49.220279Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 9E57ECEB-D486-40C6-9136-57FFEB9A9AE7;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 1]
Feb 8 23:38:49.221793 waagent[1538]: 2024-02-08T23:38:49.221726Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules:
Feb 8 23:38:49.221793 waagent[1538]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 8 23:38:49.221793 waagent[1538]: pkts bytes target prot opt in out source destination
Feb 8 23:38:49.221793 waagent[1538]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 8 23:38:49.221793 waagent[1538]: pkts bytes target prot opt in out source destination
Feb 8 23:38:49.221793 waagent[1538]: Chain OUTPUT (policy ACCEPT 3 packets, 348 bytes)
Feb 8 23:38:49.221793 waagent[1538]: pkts bytes target prot opt in out source destination
Feb 8 23:38:49.221793 waagent[1538]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 8 23:38:49.221793 waagent[1538]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 8 23:38:49.221793 waagent[1538]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 8 23:38:49.229336 waagent[1538]: 2024-02-08T23:38:49.229276Z INFO EnvHandler ExtHandler Current Firewall rules:
Feb 8 23:38:49.229336 waagent[1538]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes)
Feb 8 23:38:49.229336 waagent[1538]: pkts bytes target prot opt in out source destination
Feb 8 23:38:49.229336 waagent[1538]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
Feb 8 23:38:49.229336 waagent[1538]: pkts bytes target prot opt in out source destination
Feb 8 23:38:49.229336 waagent[1538]: Chain OUTPUT (policy ACCEPT 4 packets, 400 bytes)
Feb 8 23:38:49.229336 waagent[1538]: pkts bytes target prot opt in out source destination
Feb 8 23:38:49.229336 waagent[1538]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53
Feb 8 23:38:49.229336 waagent[1538]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0
Feb 8 23:38:49.229336 waagent[1538]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW
Feb 8 23:38:49.229840 waagent[1538]: 2024-02-08T23:38:49.229783Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300
Feb 8 23:39:13.510678 kernel: hv_balloon: Max. dynamic memory size: 8192 MB
Feb 8 23:39:21.160414 update_engine[1332]: I0208 23:39:21.160326 1332 update_attempter.cc:509] Updating boot flags...
Feb 8 23:39:24.105427 systemd[1]: Created slice system-sshd.slice.
Feb 8 23:39:24.107273 systemd[1]: Started sshd@0-10.200.8.12:22-10.200.12.6:53652.service.
Feb 8 23:39:24.917421 sshd[1649]: Accepted publickey for core from 10.200.12.6 port 53652 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:39:24.919125 sshd[1649]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:39:24.924958 systemd[1]: Started session-3.scope.
Feb 8 23:39:24.925541 systemd-logind[1329]: New session 3 of user core.
Feb 8 23:39:25.453368 systemd[1]: Started sshd@1-10.200.8.12:22-10.200.12.6:53658.service.
Feb 8 23:39:26.071721 sshd[1657]: Accepted publickey for core from 10.200.12.6 port 53658 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:39:26.073387 sshd[1657]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:39:26.079060 systemd[1]: Started session-4.scope.
Feb 8 23:39:26.079777 systemd-logind[1329]: New session 4 of user core.
Feb 8 23:39:26.510905 sshd[1657]: pam_unix(sshd:session): session closed for user core
Feb 8 23:39:26.514466 systemd[1]: sshd@1-10.200.8.12:22-10.200.12.6:53658.service: Deactivated successfully.
Feb 8 23:39:26.515493 systemd[1]: session-4.scope: Deactivated successfully.
Feb 8 23:39:26.516268 systemd-logind[1329]: Session 4 logged out. Waiting for processes to exit.
Feb 8 23:39:26.517312 systemd-logind[1329]: Removed session 4.
Feb 8 23:39:26.614255 systemd[1]: Started sshd@2-10.200.8.12:22-10.200.12.6:53666.service.
Feb 8 23:39:27.227834 sshd[1663]: Accepted publickey for core from 10.200.12.6 port 53666 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:39:27.229484 sshd[1663]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:39:27.234907 systemd[1]: Started session-5.scope.
Feb 8 23:39:27.235505 systemd-logind[1329]: New session 5 of user core.
Feb 8 23:39:27.659930 sshd[1663]: pam_unix(sshd:session): session closed for user core
Feb 8 23:39:27.663076 systemd[1]: sshd@2-10.200.8.12:22-10.200.12.6:53666.service: Deactivated successfully.
Feb 8 23:39:27.663898 systemd[1]: session-5.scope: Deactivated successfully.
Feb 8 23:39:27.664533 systemd-logind[1329]: Session 5 logged out. Waiting for processes to exit.
Feb 8 23:39:27.665264 systemd-logind[1329]: Removed session 5.
Feb 8 23:39:27.764491 systemd[1]: Started sshd@3-10.200.8.12:22-10.200.12.6:57036.service.
Feb 8 23:39:28.390843 sshd[1669]: Accepted publickey for core from 10.200.12.6 port 57036 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:39:28.392489 sshd[1669]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:39:28.397892 systemd[1]: Started session-6.scope.
Feb 8 23:39:28.398492 systemd-logind[1329]: New session 6 of user core.
Feb 8 23:39:28.839826 sshd[1669]: pam_unix(sshd:session): session closed for user core
Feb 8 23:39:28.843040 systemd[1]: sshd@3-10.200.8.12:22-10.200.12.6:57036.service: Deactivated successfully.
Feb 8 23:39:28.844031 systemd[1]: session-6.scope: Deactivated successfully.
Feb 8 23:39:28.844650 systemd-logind[1329]: Session 6 logged out. Waiting for processes to exit.
Feb 8 23:39:28.845394 systemd-logind[1329]: Removed session 6.
Feb 8 23:39:28.945253 systemd[1]: Started sshd@4-10.200.8.12:22-10.200.12.6:57042.service.
Feb 8 23:39:29.564485 sshd[1675]: Accepted publickey for core from 10.200.12.6 port 57042 ssh2: RSA SHA256:Vtz0AtN2VqRQ3qA8CVu2zfXoNYd7gcLOgJjAg5IfjKo
Feb 8 23:39:29.566112 sshd[1675]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0)
Feb 8 23:39:29.570836 systemd[1]: Started session-7.scope.
Feb 8 23:39:29.571311 systemd-logind[1329]: New session 7 of user core.
Feb 8 23:39:30.198348 sudo[1678]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh
Feb 8 23:39:30.198613 sudo[1678]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500)
Feb 8 23:39:30.895861 systemd[1]: Reloading.
Feb 8 23:39:30.980292 /usr/lib/systemd/system-generators/torcx-generator[1707]: time="2024-02-08T23:39:30Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 8 23:39:30.989427 /usr/lib/systemd/system-generators/torcx-generator[1707]: time="2024-02-08T23:39:30Z" level=info msg="torcx already run"
Feb 8 23:39:31.065732 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 8 23:39:31.065752 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 8 23:39:31.083552 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 8 23:39:31.166059 systemd[1]: Started kubelet.service.
Feb 8 23:39:31.194686 systemd[1]: Starting coreos-metadata.service...
Feb 8 23:39:31.244727 kubelet[1769]: E0208 23:39:31.244661 1769 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Feb 8 23:39:31.246652 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Feb 8 23:39:31.246763 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Feb 8 23:39:31.254982 coreos-metadata[1777]: Feb 08 23:39:31.254 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1
Feb 8 23:39:31.257575 coreos-metadata[1777]: Feb 08 23:39:31.257 INFO Fetch successful
Feb 8 23:39:31.260225 coreos-metadata[1777]: Feb 08 23:39:31.260 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1
Feb 8 23:39:31.261910 coreos-metadata[1777]: Feb 08 23:39:31.261 INFO Fetch successful
Feb 8 23:39:31.262269 coreos-metadata[1777]: Feb 08 23:39:31.262 INFO Fetching http://168.63.129.16/machine/0db230fa-af6b-48c2-a76a-4725f5bdac0d/a809d0d3%2De839%2D46e7%2D90a5%2Dae7c97f1addc.%5Fci%2D3510.3.2%2Da%2D3441531bae?comp=config&type=sharedConfig&incarnation=1: Attempt #1
Feb 8 23:39:31.266692 coreos-metadata[1777]: Feb 08 23:39:31.266 INFO Fetch successful
Feb 8 23:39:31.298643 coreos-metadata[1777]: Feb 08 23:39:31.298 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1
Feb 8 23:39:31.309064 coreos-metadata[1777]: Feb 08 23:39:31.308 INFO Fetch successful
Feb 8 23:39:31.317469 systemd[1]: Finished coreos-metadata.service.
Feb 8 23:39:34.622961 systemd[1]: Stopped kubelet.service.
Feb 8 23:39:34.636666 systemd[1]: Reloading.
Feb 8 23:39:34.706497 /usr/lib/systemd/system-generators/torcx-generator[1833]: time="2024-02-08T23:39:34Z" level=debug msg="common configuration parsed" base_dir=/var/lib/torcx/ conf_dir=/etc/torcx/ run_dir=/run/torcx/ store_paths="[/usr/share/torcx/store /usr/share/oem/torcx/store/3510.3.2 /usr/share/oem/torcx/store /var/lib/torcx/store/3510.3.2 /var/lib/torcx/store]"
Feb 8 23:39:34.706534 /usr/lib/systemd/system-generators/torcx-generator[1833]: time="2024-02-08T23:39:34Z" level=info msg="torcx already run"
Feb 8 23:39:34.795254 systemd[1]: /usr/lib/systemd/system/locksmithd.service:8: Unit uses CPUShares=; please use CPUWeight= instead. Support for CPUShares= will be removed soon.
Feb 8 23:39:34.795276 systemd[1]: /usr/lib/systemd/system/locksmithd.service:9: Unit uses MemoryLimit=; please use MemoryMax= instead. Support for MemoryLimit= will be removed soon.
Feb 8 23:39:34.812981 systemd[1]: /run/systemd/system/docker.socket:8: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Feb 8 23:39:34.902088 systemd[1]: Started kubelet.service.
Feb 8 23:39:34.944924 kubelet[1896]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 8 23:39:34.944924 kubelet[1896]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Feb 8 23:39:34.944924 kubelet[1896]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Feb 8 23:39:34.945405 kubelet[1896]: I0208 23:39:34.944997 1896 server.go:203] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Feb 8 23:39:35.157529 kubelet[1896]: I0208 23:39:35.157061 1896 server.go:467] "Kubelet version" kubeletVersion="v1.28.1"
Feb 8 23:39:35.157529 kubelet[1896]: I0208 23:39:35.157087 1896 server.go:469] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Feb 8 23:39:35.157529 kubelet[1896]: I0208 23:39:35.157380 1896 server.go:895] "Client rotation is on, will bootstrap in background"
Feb 8 23:39:35.159618 kubelet[1896]: I0208 23:39:35.159589 1896 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Feb 8 23:39:35.165093 kubelet[1896]: I0208 23:39:35.165071 1896 server.go:725] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Feb 8 23:39:35.165343 kubelet[1896]: I0208 23:39:35.165326 1896 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Feb 8 23:39:35.165607 kubelet[1896]: I0208 23:39:35.165563 1896 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null}
Feb 8 23:39:35.165795 kubelet[1896]: I0208 23:39:35.165617 1896 topology_manager.go:138] "Creating topology manager with none policy"
Feb 8 23:39:35.165795 kubelet[1896]: I0208 23:39:35.165630 1896 container_manager_linux.go:301] "Creating device plugin manager"
Feb 8 23:39:35.165795 kubelet[1896]: I0208 23:39:35.165749 1896 state_mem.go:36] "Initialized new in-memory state store"
Feb 8 23:39:35.165958 kubelet[1896]: I0208 23:39:35.165844 1896 kubelet.go:393] "Attempting to sync node with API server"
Feb 8 23:39:35.165958 kubelet[1896]: I0208 23:39:35.165861 1896 kubelet.go:298] "Adding static pod path" path="/etc/kubernetes/manifests"
Feb 8 23:39:35.165958 kubelet[1896]: I0208 23:39:35.165908 1896 kubelet.go:309] "Adding apiserver pod source"
Feb 8 23:39:35.165958 kubelet[1896]: I0208 23:39:35.165929 1896 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Feb 8 23:39:35.166529 kubelet[1896]: I0208 23:39:35.166506 1896 kuberuntime_manager.go:257] "Container runtime initialized" containerRuntime="containerd" version="1.6.16" apiVersion="v1"
Feb 8 23:39:35.166694 kubelet[1896]: E0208 23:39:35.166677 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:35.166998 kubelet[1896]: W0208 23:39:35.166979 1896 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Feb 8 23:39:35.167219 kubelet[1896]: E0208 23:39:35.166575 1896 file.go:98] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:35.168358 kubelet[1896]: I0208 23:39:35.168341 1896 server.go:1232] "Started kubelet"
Feb 8 23:39:35.169529 kubelet[1896]: E0208 23:39:35.169504 1896 cri_stats_provider.go:448] "Failed to get the info of the filesystem with mountpoint" err="unable to find data in memory cache" mountpoint="/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"
Feb 8 23:39:35.169611 kubelet[1896]: E0208 23:39:35.169534 1896 kubelet.go:1431] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Feb 8 23:39:35.175259 kernel: SELinux: Context system_u:object_r:container_file_t:s0 is not valid (left unmapped).
Feb 8 23:39:35.175372 kubelet[1896]: I0208 23:39:35.170769 1896 ratelimit.go:65] "Setting rate limiting for podresources endpoint" qps=100 burstTokens=10
Feb 8 23:39:35.175372 kubelet[1896]: I0208 23:39:35.170981 1896 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Feb 8 23:39:35.175372 kubelet[1896]: I0208 23:39:35.171036 1896 server.go:162] "Starting to listen" address="0.0.0.0" port=10250
Feb 8 23:39:35.175372 kubelet[1896]: I0208 23:39:35.171605 1896 server.go:462] "Adding debug handlers to kubelet server"
Feb 8 23:39:35.175581 kubelet[1896]: I0208 23:39:35.175567 1896 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Feb 8 23:39:35.176328 kubelet[1896]: I0208 23:39:35.176308 1896 volume_manager.go:291] "Starting Kubelet Volume Manager"
Feb 8 23:39:35.178181 kubelet[1896]: I0208 23:39:35.178158 1896 desired_state_of_world_populator.go:151] "Desired state populator starts to run"
Feb 8 23:39:35.178258 kubelet[1896]: I0208 23:39:35.178220 1896 reconciler_new.go:29] "Reconciler: start to sync state"
Feb 8 23:39:35.178472 kubelet[1896]: E0208 23:39:35.178452 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:35.183089 kubelet[1896]: W0208 23:39:35.183060 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 8 23:39:35.183170 kubelet[1896]: E0208 23:39:35.183107 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 8 23:39:35.183170 kubelet[1896]: W0208 23:39:35.183144 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 8 23:39:35.183170 kubelet[1896]: E0208 23:39:35.183155 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:anonymous" cannot list resource "services" in API group "" at the cluster scope
Feb 8 23:39:35.183302 kubelet[1896]: E0208 23:39:35.183196 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf6c5b231", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"Starting", Message:"Starting kubelet.", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 168315953, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 168315953, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:39:35.183512 kubelet[1896]: E0208 23:39:35.183491 1896 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.12\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="200ms"
Feb 8 23:39:35.183619 kubelet[1896]: W0208 23:39:35.183554 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 8 23:39:35.183619 kubelet[1896]: E0208 23:39:35.183571 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 8 23:39:35.188541 kubelet[1896]: E0208 23:39:35.188474 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf6d81830", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"InvalidDiskCapacity", Message:"invalid capacity 0 on image filesystem", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 169521712, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 169521712, time.Local), Count:1, Type:"Warning", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:39:35.214794 kubelet[1896]: E0208 23:39:35.214722 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf9755066", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213379686, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213379686, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:39:35.215127 kubelet[1896]: I0208 23:39:35.214987 1896 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Feb 8 23:39:35.215294 kubelet[1896]: I0208 23:39:35.215279 1896 cpu_manager.go:214] "Starting CPU manager" policy="none"
Feb 8 23:39:35.215383 kubelet[1896]: I0208 23:39:35.215373 1896 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Feb 8 23:39:35.215460 kubelet[1896]: I0208 23:39:35.215451 1896 state_mem.go:36] "Initialized new in-memory state store"
Feb 8 23:39:35.216138 kubelet[1896]: I0208 23:39:35.216118 1896 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Feb 8 23:39:35.216138 kubelet[1896]: I0208 23:39:35.216142 1896 status_manager.go:217] "Starting to sync pod status with apiserver"
Feb 8 23:39:35.216261 kubelet[1896]: I0208 23:39:35.216160 1896 kubelet.go:2303] "Starting kubelet main sync loop"
Feb 8 23:39:35.216261 kubelet[1896]: E0208 23:39:35.216206 1896 kubelet.go:2327] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Feb 8 23:39:35.219794 kubelet[1896]: I0208 23:39:35.219768 1896 policy_none.go:49] "None policy: Start"
Feb 8 23:39:35.220576 kubelet[1896]: E0208 23:39:35.220481 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf9756968", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, 
Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213386088, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213386088, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:39:35.221317 kubelet[1896]: I0208 23:39:35.221303 1896 memory_manager.go:169] "Starting memorymanager" policy="None" Feb 8 23:39:35.221433 kubelet[1896]: I0208 23:39:35.221422 1896 state_mem.go:35] "Initializing new in-memory state store" Feb 8 23:39:35.222176 kubelet[1896]: E0208 23:39:35.222115 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf9757d56", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213391190, time.Local), LastTimestamp:time.Date(2024, 
time.February, 8, 23, 39, 35, 213391190, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!)
Feb 8 23:39:35.222480 kubelet[1896]: W0208 23:39:35.222456 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 8 23:39:35.222543 kubelet[1896]: E0208 23:39:35.222485 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User "system:anonymous" cannot list resource "runtimeclasses" in API group "node.k8s.io" at the cluster scope
Feb 8 23:39:35.228328 systemd[1]: Created slice kubepods.slice.
Feb 8 23:39:35.232223 systemd[1]: Created slice kubepods-burstable.slice.
Feb 8 23:39:35.234910 systemd[1]: Created slice kubepods-besteffort.slice.
Feb 8 23:39:35.241707 kubelet[1896]: I0208 23:39:35.241684 1896 manager.go:471] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Feb 8 23:39:35.241875 kubelet[1896]: I0208 23:39:35.241857 1896 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Feb 8 23:39:35.243059 kubelet[1896]: E0208 23:39:35.243042 1896 eviction_manager.go:258] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"10.200.8.12\" not found" Feb 8 23:39:35.244514 kubelet[1896]: E0208 23:39:35.244456 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bfb43262d", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeAllocatableEnforced", Message:"Updated Node Allocatable limit across pods", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 243646509, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 243646509, time.Local), Count:1, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events is forbidden: User "system:anonymous" cannot create resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:35.279415 kubelet[1896]: I0208 23:39:35.279392 1896 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.12" Feb 8 23:39:35.280562 kubelet[1896]: E0208 23:39:35.280539 1896 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.12" Feb 8 23:39:35.280873 kubelet[1896]: E0208 23:39:35.280781 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf9755066", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213379686, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 279356654, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events "10.200.8.12.17b2079bf9755066" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:35.281734 kubelet[1896]: E0208 23:39:35.281665 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf9756968", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213386088, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 279362556, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events "10.200.8.12.17b2079bf9756968" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:35.282556 kubelet[1896]: E0208 23:39:35.282503 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf9757d56", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213391190, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 279366557, time.Local), Count:2, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events "10.200.8.12.17b2079bf9757d56" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:35.385919 kubelet[1896]: E0208 23:39:35.385883 1896 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.12\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="400ms" Feb 8 23:39:35.482587 kubelet[1896]: I0208 23:39:35.482465 1896 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.12" Feb 8 23:39:35.484557 kubelet[1896]: E0208 23:39:35.484456 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf9755066", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213379686, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 482418086, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events "10.200.8.12.17b2079bf9755066" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:35.485124 kubelet[1896]: E0208 23:39:35.485019 1896 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.12" Feb 8 23:39:35.486669 kubelet[1896]: E0208 23:39:35.486513 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf9756968", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213386088, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 482428589, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events "10.200.8.12.17b2079bf9756968" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:35.488130 kubelet[1896]: E0208 23:39:35.488065 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf9757d56", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213391190, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 482433090, time.Local), Count:3, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events "10.200.8.12.17b2079bf9757d56" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:35.787322 kubelet[1896]: E0208 23:39:35.787203 1896 controller.go:146] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"10.200.8.12\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="800ms" Feb 8 23:39:35.886811 kubelet[1896]: I0208 23:39:35.886779 1896 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.12" Feb 8 23:39:35.888300 kubelet[1896]: E0208 23:39:35.888270 1896 kubelet_node_status.go:92] "Unable to register node with API server" err="nodes is forbidden: User \"system:anonymous\" cannot create resource \"nodes\" in API group \"\" at the cluster scope" node="10.200.8.12" Feb 8 23:39:35.888542 kubelet[1896]: E0208 23:39:35.888457 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf9755066", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientMemory", Message:"Node 10.200.8.12 status is now: NodeHasSufficientMemory", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213379686, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 886731811, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", 
Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events "10.200.8.12.17b2079bf9755066" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) Feb 8 23:39:35.889492 kubelet[1896]: E0208 23:39:35.889421 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf9756968", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasNoDiskPressure", Message:"Node 10.200.8.12 status is now: NodeHasNoDiskPressure", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213386088, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 886744715, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events "10.200.8.12.17b2079bf9756968" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:35.890315 kubelet[1896]: E0208 23:39:35.890242 1896 event.go:280] Server rejected event '&v1.Event{TypeMeta:v1.TypeMeta{Kind:"", APIVersion:""}, ObjectMeta:v1.ObjectMeta{Name:"10.200.8.12.17b2079bf9757d56", GenerateName:"", Namespace:"default", SelfLink:"", UID:"", ResourceVersion:"", Generation:0, CreationTimestamp:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string(nil), Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, InvolvedObject:v1.ObjectReference{Kind:"Node", Namespace:"", Name:"10.200.8.12", UID:"10.200.8.12", APIVersion:"", ResourceVersion:"", FieldPath:""}, Reason:"NodeHasSufficientPID", Message:"Node 10.200.8.12 status is now: NodeHasSufficientPID", Source:v1.EventSource{Component:"kubelet", Host:"10.200.8.12"}, FirstTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 213391190, time.Local), LastTimestamp:time.Date(2024, time.February, 8, 23, 39, 35, 886749716, time.Local), Count:4, Type:"Normal", EventTime:time.Date(1, time.January, 1, 0, 0, 0, 0, time.UTC), Series:(*v1.EventSeries)(nil), Action:"", Related:(*v1.ObjectReference)(nil), ReportingController:"kubelet", ReportingInstance:"10.200.8.12"}': 'events "10.200.8.12.17b2079bf9757d56" is forbidden: User "system:anonymous" cannot patch resource "events" in API group "" in the namespace "default"' (will not retry!) 
Feb 8 23:39:35.994322 kubelet[1896]: W0208 23:39:35.994274 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes "10.200.8.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 8 23:39:35.994322 kubelet[1896]: E0208 23:39:35.994324 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes "10.200.8.12" is forbidden: User "system:anonymous" cannot list resource "nodes" in API group "" at the cluster scope
Feb 8 23:39:36.067018 kubelet[1896]: W0208 23:39:36.066875 1896 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 8 23:39:36.067018 kubelet[1896]: E0208 23:39:36.066948 1896 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
Feb 8 23:39:36.159833 kubelet[1896]: I0208 23:39:36.159776 1896 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials"
Feb 8 23:39:36.167018 kubelet[1896]: E0208 23:39:36.166983 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:36.532343 kubelet[1896]: E0208 23:39:36.532273 1896 csi_plugin.go:295] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: nodes "10.200.8.12" not found
Feb 8 23:39:36.592848 kubelet[1896]: E0208 23:39:36.592749 1896 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes 
\"10.200.8.12\" not found" node="10.200.8.12"
Feb 8 23:39:36.689610 kubelet[1896]: I0208 23:39:36.689568 1896 kubelet_node_status.go:70] "Attempting to register node" node="10.200.8.12"
Feb 8 23:39:36.692941 kubelet[1896]: I0208 23:39:36.692912 1896 kubelet_node_status.go:73] "Successfully registered node" node="10.200.8.12"
Feb 8 23:39:36.749674 kubelet[1896]: E0208 23:39:36.749636 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:36.849973 kubelet[1896]: E0208 23:39:36.849910 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:36.950171 kubelet[1896]: E0208 23:39:36.950115 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:36.954831 sudo[1678]: pam_unix(sudo:session): session closed for user root
Feb 8 23:39:37.051066 kubelet[1896]: E0208 23:39:37.050952 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:37.071294 sshd[1675]: pam_unix(sshd:session): session closed for user core
Feb 8 23:39:37.074459 systemd[1]: sshd@4-10.200.8.12:22-10.200.12.6:57042.service: Deactivated successfully.
Feb 8 23:39:37.075372 systemd[1]: session-7.scope: Deactivated successfully.
Feb 8 23:39:37.076058 systemd-logind[1329]: Session 7 logged out. Waiting for processes to exit.
Feb 8 23:39:37.076867 systemd-logind[1329]: Removed session 7.
Feb 8 23:39:37.152133 kubelet[1896]: E0208 23:39:37.151890 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:37.167504 kubelet[1896]: E0208 23:39:37.167459 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:37.253227 kubelet[1896]: E0208 23:39:37.253124 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:37.353925 kubelet[1896]: E0208 23:39:37.353878 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:37.454865 kubelet[1896]: E0208 23:39:37.454747 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:37.555659 kubelet[1896]: E0208 23:39:37.555605 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:37.656607 kubelet[1896]: E0208 23:39:37.656558 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:37.757510 kubelet[1896]: E0208 23:39:37.757379 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:37.858509 kubelet[1896]: E0208 23:39:37.858457 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:37.959398 kubelet[1896]: E0208 23:39:37.959304 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:38.060378 kubelet[1896]: E0208 23:39:38.060245 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:38.161343 kubelet[1896]: E0208 23:39:38.161296 1896 
kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:38.168584 kubelet[1896]: E0208 23:39:38.168548 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:38.261969 kubelet[1896]: E0208 23:39:38.261888 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:38.362823 kubelet[1896]: E0208 23:39:38.362784 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:38.463803 kubelet[1896]: E0208 23:39:38.463723 1896 kubelet_node_status.go:458] "Error getting the current node from lister" err="node \"10.200.8.12\" not found"
Feb 8 23:39:38.564536 kubelet[1896]: I0208 23:39:38.564505 1896 kuberuntime_manager.go:1463] "Updating runtime config through cri with podcidr" CIDR="192.168.1.0/24"
Feb 8 23:39:38.565049 env[1342]: time="2024-02-08T23:39:38.564981220Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Feb 8 23:39:38.565504 kubelet[1896]: I0208 23:39:38.565269 1896 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.1.0/24"
Feb 8 23:39:39.169098 kubelet[1896]: I0208 23:39:39.169045 1896 apiserver.go:52] "Watching apiserver"
Feb 8 23:39:39.169575 kubelet[1896]: E0208 23:39:39.169063 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:39.171551 kubelet[1896]: I0208 23:39:39.171525 1896 topology_manager.go:215] "Topology Admit Handler" podUID="d5958cc2-b53c-4a0c-8987-0bda6989b50c" podNamespace="kube-system" podName="cilium-mgt7w"
Feb 8 23:39:39.171739 kubelet[1896]: I0208 23:39:39.171717 1896 topology_manager.go:215] "Topology Admit Handler" podUID="e8c3ddd9-5ce1-471d-9029-9f424b4d00d4" podNamespace="kube-system" podName="kube-proxy-jmtb9"
Feb 8 23:39:39.178733 kubelet[1896]: I0208 23:39:39.178708 1896 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
Feb 8 23:39:39.179249 systemd[1]: Created slice kubepods-besteffort-pode8c3ddd9_5ce1_471d_9029_9f424b4d00d4.slice.
Feb 8 23:39:39.190449 systemd[1]: Created slice kubepods-burstable-podd5958cc2_b53c_4a0c_8987_0bda6989b50c.slice.
Feb 8 23:39:39.202808 kubelet[1896]: I0208 23:39:39.202777 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-etc-cni-netd\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.202917 kubelet[1896]: I0208 23:39:39.202815 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5958cc2-b53c-4a0c-8987-0bda6989b50c-clustermesh-secrets\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.202917 kubelet[1896]: I0208 23:39:39.202843 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-host-proc-sys-kernel\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.202917 kubelet[1896]: I0208 23:39:39.202870 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8c3ddd9-5ce1-471d-9029-9f424b4d00d4-xtables-lock\") pod \"kube-proxy-jmtb9\" (UID: \"e8c3ddd9-5ce1-471d-9029-9f424b4d00d4\") " pod="kube-system/kube-proxy-jmtb9"
Feb 8 23:39:39.202917 kubelet[1896]: I0208 23:39:39.202896 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-hostproc\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.203117 kubelet[1896]: I0208 23:39:39.202925 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5958cc2-b53c-4a0c-8987-0bda6989b50c-hubble-tls\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.203117 kubelet[1896]: I0208 23:39:39.202953 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/e8c3ddd9-5ce1-471d-9029-9f424b4d00d4-kube-proxy\") pod \"kube-proxy-jmtb9\" (UID: \"e8c3ddd9-5ce1-471d-9029-9f424b4d00d4\") " pod="kube-system/kube-proxy-jmtb9"
Feb 8 23:39:39.203117 kubelet[1896]: I0208 23:39:39.202981 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cilium-cgroup\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.203117 kubelet[1896]: I0208 23:39:39.203018 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cni-path\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.203117 kubelet[1896]: I0208 23:39:39.203053 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-lib-modules\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.203117 kubelet[1896]: I0208 23:39:39.203081 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-xtables-lock\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.203420 kubelet[1896]: I0208 23:39:39.203109 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cilium-config-path\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.203420 kubelet[1896]: I0208 23:39:39.203137 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jdg9k\" (UniqueName: \"kubernetes.io/projected/d5958cc2-b53c-4a0c-8987-0bda6989b50c-kube-api-access-jdg9k\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.203420 kubelet[1896]: I0208 23:39:39.203168 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8c3ddd9-5ce1-471d-9029-9f424b4d00d4-lib-modules\") pod \"kube-proxy-jmtb9\" (UID: \"e8c3ddd9-5ce1-471d-9029-9f424b4d00d4\") " pod="kube-system/kube-proxy-jmtb9"
Feb 8 23:39:39.203420 kubelet[1896]: I0208 23:39:39.203196 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cilium-run\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.203420 kubelet[1896]: I0208 23:39:39.203234 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-bpf-maps\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.203420 kubelet[1896]: I0208 23:39:39.203268 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-host-proc-sys-net\") pod \"cilium-mgt7w\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " pod="kube-system/cilium-mgt7w"
Feb 8 23:39:39.203658 kubelet[1896]: I0208 23:39:39.203300 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k6tq4\" (UniqueName: \"kubernetes.io/projected/e8c3ddd9-5ce1-471d-9029-9f424b4d00d4-kube-api-access-k6tq4\") pod \"kube-proxy-jmtb9\" (UID: \"e8c3ddd9-5ce1-471d-9029-9f424b4d00d4\") " pod="kube-system/kube-proxy-jmtb9"
Feb 8 23:39:39.489345 env[1342]: time="2024-02-08T23:39:39.489203311Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmtb9,Uid:e8c3ddd9-5ce1-471d-9029-9f424b4d00d4,Namespace:kube-system,Attempt:0,}"
Feb 8 23:39:39.497001 env[1342]: time="2024-02-08T23:39:39.496966479Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mgt7w,Uid:d5958cc2-b53c-4a0c-8987-0bda6989b50c,Namespace:kube-system,Attempt:0,}"
Feb 8 23:39:40.169959 kubelet[1896]: E0208 23:39:40.169925 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:40.184198 env[1342]: time="2024-02-08T23:39:40.184152082Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:40.188987 env[1342]: time="2024-02-08T23:39:40.188951526Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:40.192593 env[1342]: time="2024-02-08T23:39:40.192559361Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:40.199387 env[1342]: time="2024-02-08T23:39:40.199357023Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:40.202251 env[1342]: time="2024-02-08T23:39:40.202219565Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:6270bb605e12e581514ada5fd5b3216f727db55dc87d5889c790e4c760683fee,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:40.205547 env[1342]: time="2024-02-08T23:39:40.205514119Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:40.213916 env[1342]: time="2024-02-08T23:39:40.213879888Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause:3.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:40.214441 env[1342]: time="2024-02-08T23:39:40.214414426Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/pause@sha256:3d380ca8864549e74af4b29c10f9cb0956236dfb01c40ca076fb6c37253234db,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:40.273852 env[1342]: time="2024-02-08T23:39:40.269435389Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:39:40.273852 env[1342]: time="2024-02-08T23:39:40.269487002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:39:40.273852 env[1342]: time="2024-02-08T23:39:40.269502606Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:39:40.273852 env[1342]: time="2024-02-08T23:39:40.269644143Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/6216863faf2f970548b8405df47b91ea324baf6be9a727d17d47d1ca11806951 pid=1947 runtime=io.containerd.runc.v2
Feb 8 23:39:40.273852 env[1342]: time="2024-02-08T23:39:40.269865300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:39:40.273852 env[1342]: time="2024-02-08T23:39:40.269889406Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:39:40.273852 env[1342]: time="2024-02-08T23:39:40.269897908Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:39:40.273852 env[1342]: time="2024-02-08T23:39:40.270050348Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff pid=1939 runtime=io.containerd.runc.v2
Feb 8 23:39:40.301325 systemd[1]: Started cri-containerd-848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff.scope.
Feb 8 23:39:40.322310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4162037081.mount: Deactivated successfully.
Feb 8 23:39:40.331670 systemd[1]: Started cri-containerd-6216863faf2f970548b8405df47b91ea324baf6be9a727d17d47d1ca11806951.scope.
Feb 8 23:39:40.356349 env[1342]: time="2024-02-08T23:39:40.356302006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-mgt7w,Uid:d5958cc2-b53c-4a0c-8987-0bda6989b50c,Namespace:kube-system,Attempt:0,} returns sandbox id \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\""
Feb 8 23:39:40.359275 env[1342]: time="2024-02-08T23:39:40.359241467Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\""
Feb 8 23:39:40.363678 env[1342]: time="2024-02-08T23:39:40.363649210Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-jmtb9,Uid:e8c3ddd9-5ce1-471d-9029-9f424b4d00d4,Namespace:kube-system,Attempt:0,} returns sandbox id \"6216863faf2f970548b8405df47b91ea324baf6be9a727d17d47d1ca11806951\""
Feb 8 23:39:41.170184 kubelet[1896]: E0208 23:39:41.170125 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:42.171000 kubelet[1896]: E0208 23:39:42.170933 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:43.171856 kubelet[1896]: E0208 23:39:43.171791 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:44.172110 kubelet[1896]: E0208 23:39:44.172041 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:45.172203 kubelet[1896]: E0208 23:39:45.172163 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:45.319860 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3954172389.mount: Deactivated successfully.
Feb 8 23:39:46.172841 kubelet[1896]: E0208 23:39:46.172780 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:47.173111 kubelet[1896]: E0208 23:39:47.173069 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:47.955464 env[1342]: time="2024-02-08T23:39:47.955423254Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:47.965349 env[1342]: time="2024-02-08T23:39:47.965242457Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:47.970315 env[1342]: time="2024-02-08T23:39:47.970279936Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/cilium@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:47.970755 env[1342]: time="2024-02-08T23:39:47.970722530Z" level=info msg="PullImage \"quay.io/cilium/cilium:v1.12.5@sha256:06ce2b0a0a472e73334a7504ee5c5d8b2e2d7b72ef728ad94e564740dd505be5\" returns image reference \"sha256:3e35b3e9f295e7748482d40ed499b0ff7961f1f128d479d8e6682b3245bba69b\""
Feb 8 23:39:47.972739 env[1342]: time="2024-02-08T23:39:47.972223652Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\""
Feb 8 23:39:47.973066 env[1342]: time="2024-02-08T23:39:47.973035326Z" level=info msg="CreateContainer within sandbox \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}"
Feb 8 23:39:48.014837 env[1342]: time="2024-02-08T23:39:48.014744181Z" level=info msg="CreateContainer within sandbox \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6\""
Feb 8 23:39:48.016301 env[1342]: time="2024-02-08T23:39:48.016273900Z" level=info msg="StartContainer for \"564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6\""
Feb 8 23:39:48.034366 systemd[1]: Started cri-containerd-564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6.scope.
Feb 8 23:39:48.074420 env[1342]: time="2024-02-08T23:39:48.074379715Z" level=info msg="StartContainer for \"564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6\" returns successfully"
Feb 8 23:39:48.079156 systemd[1]: cri-containerd-564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6.scope: Deactivated successfully.
Feb 8 23:39:48.174195 kubelet[1896]: E0208 23:39:48.174139 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:48.998199 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6-rootfs.mount: Deactivated successfully.
Feb 8 23:39:49.175216 kubelet[1896]: E0208 23:39:49.175169 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:50.176022 kubelet[1896]: E0208 23:39:50.175963 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:51.176908 kubelet[1896]: E0208 23:39:51.176871 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:51.784089 env[1342]: time="2024-02-08T23:39:51.784037896Z" level=info msg="shim disconnected" id=564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6
Feb 8 23:39:51.784089 env[1342]: time="2024-02-08T23:39:51.784083705Z" level=warning msg="cleaning up after shim disconnected" id=564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6 namespace=k8s.io
Feb 8 23:39:51.784625 env[1342]: time="2024-02-08T23:39:51.784111110Z" level=info msg="cleaning up dead shim"
Feb 8 23:39:51.793668 env[1342]: time="2024-02-08T23:39:51.793630843Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:39:51Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2066 runtime=io.containerd.runc.v2\n"
Feb 8 23:39:52.177699 kubelet[1896]: E0208 23:39:52.177643 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:52.257663 env[1342]: time="2024-02-08T23:39:52.257617918Z" level=info msg="CreateContainer within sandbox \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}"
Feb 8 23:39:52.322591 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707229226.mount: Deactivated successfully.
Feb 8 23:39:52.329639 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3870399504.mount: Deactivated successfully.
Feb 8 23:39:52.344069 env[1342]: time="2024-02-08T23:39:52.344028628Z" level=info msg="CreateContainer within sandbox \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137\""
Feb 8 23:39:52.344779 env[1342]: time="2024-02-08T23:39:52.344747763Z" level=info msg="StartContainer for \"271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137\""
Feb 8 23:39:52.366519 systemd[1]: Started cri-containerd-271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137.scope.
Feb 8 23:39:52.412055 env[1342]: time="2024-02-08T23:39:52.411993778Z" level=info msg="StartContainer for \"271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137\" returns successfully"
Feb 8 23:39:52.415794 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Feb 8 23:39:52.417357 systemd[1]: Stopped systemd-sysctl.service.
Feb 8 23:39:52.417517 systemd[1]: Stopping systemd-sysctl.service...
Feb 8 23:39:52.419799 systemd[1]: Starting systemd-sysctl.service...
Feb 8 23:39:52.424871 systemd[1]: cri-containerd-271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137.scope: Deactivated successfully.
Feb 8 23:39:52.432383 systemd[1]: Finished systemd-sysctl.service.
Feb 8 23:39:52.576196 env[1342]: time="2024-02-08T23:39:52.576141172Z" level=info msg="shim disconnected" id=271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137
Feb 8 23:39:52.576196 env[1342]: time="2024-02-08T23:39:52.576192382Z" level=warning msg="cleaning up after shim disconnected" id=271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137 namespace=k8s.io
Feb 8 23:39:52.576196 env[1342]: time="2024-02-08T23:39:52.576203484Z" level=info msg="cleaning up dead shim"
Feb 8 23:39:52.592808 env[1342]: time="2024-02-08T23:39:52.592769392Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:39:52Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2132 runtime=io.containerd.runc.v2\n"
Feb 8 23:39:53.146447 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1185318711.mount: Deactivated successfully.
Feb 8 23:39:53.178236 kubelet[1896]: E0208 23:39:53.178178 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:53.248239 env[1342]: time="2024-02-08T23:39:53.248189355Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:53.253017 env[1342]: time="2024-02-08T23:39:53.252965127Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:53.256243 env[1342]: time="2024-02-08T23:39:53.256210621Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/kube-proxy:v1.28.6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:53.258762 env[1342]: time="2024-02-08T23:39:53.258733182Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/kube-proxy@sha256:3898a1671ae42be1cd3c2e777549bc7b5b306b8da3a224b747365f6679fb902a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:39:53.259060 env[1342]: time="2024-02-08T23:39:53.259031136Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.28.6\" returns image reference \"sha256:342a759d88156b4f56ba522a1aed0e3d32d72542545346b40877f6583bebe05f\""
Feb 8 23:39:53.261058 env[1342]: time="2024-02-08T23:39:53.261026701Z" level=info msg="CreateContainer within sandbox \"6216863faf2f970548b8405df47b91ea324baf6be9a727d17d47d1ca11806951\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Feb 8 23:39:53.262671 env[1342]: time="2024-02-08T23:39:53.262641696Z" level=info msg="CreateContainer within sandbox \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}"
Feb 8 23:39:53.302568 env[1342]: time="2024-02-08T23:39:53.302530587Z" level=info msg="CreateContainer within sandbox \"6216863faf2f970548b8405df47b91ea324baf6be9a727d17d47d1ca11806951\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e40648a54dfac547d03cef2bcac35dc2ae3cc76b8f8716d928ad622ebaca606e\""
Feb 8 23:39:53.303096 env[1342]: time="2024-02-08T23:39:53.303064384Z" level=info msg="StartContainer for \"e40648a54dfac547d03cef2bcac35dc2ae3cc76b8f8716d928ad622ebaca606e\""
Feb 8 23:39:53.316081 env[1342]: time="2024-02-08T23:39:53.316042456Z" level=info msg="CreateContainer within sandbox \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8\""
Feb 8 23:39:53.316879 env[1342]: time="2024-02-08T23:39:53.316846703Z" level=info msg="StartContainer for \"453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8\""
Feb 8 23:39:53.328771 systemd[1]: Started cri-containerd-e40648a54dfac547d03cef2bcac35dc2ae3cc76b8f8716d928ad622ebaca606e.scope.
Feb 8 23:39:53.357765 systemd[1]: Started cri-containerd-453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8.scope.
Feb 8 23:39:53.386204 env[1342]: time="2024-02-08T23:39:53.386160772Z" level=info msg="StartContainer for \"e40648a54dfac547d03cef2bcac35dc2ae3cc76b8f8716d928ad622ebaca606e\" returns successfully"
Feb 8 23:39:53.404387 env[1342]: time="2024-02-08T23:39:53.404292186Z" level=info msg="StartContainer for \"453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8\" returns successfully"
Feb 8 23:39:53.406995 systemd[1]: cri-containerd-453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8.scope: Deactivated successfully.
Feb 8 23:39:53.949118 env[1342]: time="2024-02-08T23:39:53.949047752Z" level=info msg="shim disconnected" id=453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8
Feb 8 23:39:53.949118 env[1342]: time="2024-02-08T23:39:53.949119965Z" level=warning msg="cleaning up after shim disconnected" id=453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8 namespace=k8s.io
Feb 8 23:39:53.949118 env[1342]: time="2024-02-08T23:39:53.949135968Z" level=info msg="cleaning up dead shim"
Feb 8 23:39:53.957110 env[1342]: time="2024-02-08T23:39:53.957070118Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:39:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2257 runtime=io.containerd.runc.v2\n"
Feb 8 23:39:54.179034 kubelet[1896]: E0208 23:39:54.178916 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:54.266442 env[1342]: time="2024-02-08T23:39:54.265988664Z" level=info msg="CreateContainer within sandbox \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}"
Feb 8 23:39:54.283349 kubelet[1896]: I0208 23:39:54.283317 1896 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-jmtb9" podStartSLOduration=5.388530908 podCreationTimestamp="2024-02-08 23:39:36 +0000 UTC" firstStartedPulling="2024-02-08 23:39:40.364628364 +0000 UTC m=+5.458356751" lastFinishedPulling="2024-02-08 23:39:53.259359196 +0000 UTC m=+18.353087583" observedRunningTime="2024-02-08 23:39:54.269956071 +0000 UTC m=+19.363684558" watchObservedRunningTime="2024-02-08 23:39:54.28326174 +0000 UTC m=+19.376990127"
Feb 8 23:39:54.294530 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3630748372.mount: Deactivated successfully.
Feb 8 23:39:54.307444 env[1342]: time="2024-02-08T23:39:54.307405040Z" level=info msg="CreateContainer within sandbox \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206\""
Feb 8 23:39:54.308127 env[1342]: time="2024-02-08T23:39:54.308098364Z" level=info msg="StartContainer for \"61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206\""
Feb 8 23:39:54.333714 systemd[1]: Started cri-containerd-61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206.scope.
Feb 8 23:39:54.365961 systemd[1]: cri-containerd-61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206.scope: Deactivated successfully.
Feb 8 23:39:54.370278 env[1342]: time="2024-02-08T23:39:54.370234630Z" level=info msg="StartContainer for \"61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206\" returns successfully"
Feb 8 23:39:54.402767 env[1342]: time="2024-02-08T23:39:54.402718415Z" level=info msg="shim disconnected" id=61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206
Feb 8 23:39:54.402767 env[1342]: time="2024-02-08T23:39:54.402767124Z" level=warning msg="cleaning up after shim disconnected" id=61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206 namespace=k8s.io
Feb 8 23:39:54.403068 env[1342]: time="2024-02-08T23:39:54.402778226Z" level=info msg="cleaning up dead shim"
Feb 8 23:39:54.409696 env[1342]: time="2024-02-08T23:39:54.409663352Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:39:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=2409 runtime=io.containerd.runc.v2\n"
Feb 8 23:39:55.146443 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206-rootfs.mount: Deactivated successfully.
Feb 8 23:39:55.166806 kubelet[1896]: E0208 23:39:55.166775 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:55.180064 kubelet[1896]: E0208 23:39:55.180032 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:55.269570 env[1342]: time="2024-02-08T23:39:55.269528243Z" level=info msg="CreateContainer within sandbox \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}"
Feb 8 23:39:55.309865 env[1342]: time="2024-02-08T23:39:55.309813935Z" level=info msg="CreateContainer within sandbox \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\""
Feb 8 23:39:55.310449 env[1342]: time="2024-02-08T23:39:55.310378533Z" level=info msg="StartContainer for \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\""
Feb 8 23:39:55.335234 systemd[1]: Started cri-containerd-d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d.scope.
Feb 8 23:39:55.373839 env[1342]: time="2024-02-08T23:39:55.373788539Z" level=info msg="StartContainer for \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\" returns successfully"
Feb 8 23:39:55.468185 kubelet[1896]: I0208 23:39:55.468099 1896 kubelet_node_status.go:493] "Fast updating node status as it just became ready"
Feb 8 23:39:55.884113 kernel: Initializing XFRM netlink socket
Feb 8 23:39:56.181348 kubelet[1896]: E0208 23:39:56.181186 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:56.289689 kubelet[1896]: I0208 23:39:56.289635 1896 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-mgt7w" podStartSLOduration=12.676607401 podCreationTimestamp="2024-02-08 23:39:36 +0000 UTC" firstStartedPulling="2024-02-08 23:39:40.358243209 +0000 UTC m=+5.451971596" lastFinishedPulling="2024-02-08 23:39:47.97118683 +0000 UTC m=+13.064915217" observedRunningTime="2024-02-08 23:39:56.28711631 +0000 UTC m=+21.380844697" watchObservedRunningTime="2024-02-08 23:39:56.289551022 +0000 UTC m=+21.383279609"
Feb 8 23:39:57.181647 kubelet[1896]: E0208 23:39:57.181584 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:57.512304 systemd-networkd[1490]: cilium_host: Link UP
Feb 8 23:39:57.516539 systemd-networkd[1490]: cilium_net: Link UP
Feb 8 23:39:57.520223 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_net: link becomes ready
Feb 8 23:39:57.520300 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): cilium_host: link becomes ready
Feb 8 23:39:57.520441 systemd-networkd[1490]: cilium_net: Gained carrier
Feb 8 23:39:57.520826 systemd-networkd[1490]: cilium_host: Gained carrier
Feb 8 23:39:57.647102 systemd-networkd[1490]: cilium_net: Gained IPv6LL
Feb 8 23:39:57.695618 systemd-networkd[1490]: cilium_vxlan: Link UP
Feb 8 23:39:57.695627 systemd-networkd[1490]: cilium_vxlan: Gained carrier
Feb 8 23:39:57.954066 kernel: NET: Registered PF_ALG protocol family
Feb 8 23:39:58.071189 systemd-networkd[1490]: cilium_host: Gained IPv6LL
Feb 8 23:39:58.181785 kubelet[1896]: E0208 23:39:58.181747 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:58.726906 systemd-networkd[1490]: lxc_health: Link UP
Feb 8 23:39:58.735109 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready
Feb 8 23:39:58.735325 systemd-networkd[1490]: lxc_health: Gained carrier
Feb 8 23:39:58.973084 kubelet[1896]: I0208 23:39:58.973037 1896 topology_manager.go:215] "Topology Admit Handler" podUID="9f0b2911-9154-4076-9b10-ce7d942dd01c" podNamespace="default" podName="nginx-deployment-6d5f899847-mfxs5"
Feb 8 23:39:58.980169 systemd[1]: Created slice kubepods-besteffort-pod9f0b2911_9154_4076_9b10_ce7d942dd01c.slice.
Feb 8 23:39:59.040885 kubelet[1896]: I0208 23:39:59.040837 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jtj97\" (UniqueName: \"kubernetes.io/projected/9f0b2911-9154-4076-9b10-ce7d942dd01c-kube-api-access-jtj97\") pod \"nginx-deployment-6d5f899847-mfxs5\" (UID: \"9f0b2911-9154-4076-9b10-ce7d942dd01c\") " pod="default/nginx-deployment-6d5f899847-mfxs5"
Feb 8 23:39:59.182440 kubelet[1896]: E0208 23:39:59.182402 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:39:59.286746 env[1342]: time="2024-02-08T23:39:59.285791788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-mfxs5,Uid:9f0b2911-9154-4076-9b10-ce7d942dd01c,Namespace:default,Attempt:0,}"
Feb 8 23:39:59.348048 systemd-networkd[1490]: lxc7fdb25e2cb56: Link UP
Feb 8 23:39:59.359107 kernel: eth0: renamed from tmpb94f1
Feb 8 23:39:59.374091 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc7fdb25e2cb56: link becomes ready
Feb 8 23:39:59.375191 systemd-networkd[1490]: lxc7fdb25e2cb56: Gained carrier
Feb 8 23:39:59.479260 systemd-networkd[1490]: cilium_vxlan: Gained IPv6LL
Feb 8 23:40:00.183872 kubelet[1896]: E0208 23:40:00.183819 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:00.567298 systemd-networkd[1490]: lxc7fdb25e2cb56: Gained IPv6LL
Feb 8 23:40:00.567676 systemd-networkd[1490]: lxc_health: Gained IPv6LL
Feb 8 23:40:01.184833 kubelet[1896]: E0208 23:40:01.184787 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:02.151612 kubelet[1896]: I0208 23:40:02.151570 1896 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Feb 8 23:40:02.186255 kubelet[1896]: E0208 23:40:02.186220 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:02.768886 env[1342]: time="2024-02-08T23:40:02.768808895Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:40:02.769364 env[1342]: time="2024-02-08T23:40:02.768846901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:40:02.769364 env[1342]: time="2024-02-08T23:40:02.768860503Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:40:02.769364 env[1342]: time="2024-02-08T23:40:02.769061332Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b94f122e35952678080e0e4aef393275c9b270385bf743aea6ffd8a266ca80ed pid=2930 runtime=io.containerd.runc.v2
Feb 8 23:40:02.790631 systemd[1]: run-containerd-runc-k8s.io-b94f122e35952678080e0e4aef393275c9b270385bf743aea6ffd8a266ca80ed-runc.OBLiqV.mount: Deactivated successfully.
Feb 8 23:40:02.795678 systemd[1]: Started cri-containerd-b94f122e35952678080e0e4aef393275c9b270385bf743aea6ffd8a266ca80ed.scope.
Feb 8 23:40:02.834161 env[1342]: time="2024-02-08T23:40:02.834125304Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nginx-deployment-6d5f899847-mfxs5,Uid:9f0b2911-9154-4076-9b10-ce7d942dd01c,Namespace:default,Attempt:0,} returns sandbox id \"b94f122e35952678080e0e4aef393275c9b270385bf743aea6ffd8a266ca80ed\""
Feb 8 23:40:02.835988 env[1342]: time="2024-02-08T23:40:02.835946770Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 8 23:40:03.187237 kubelet[1896]: E0208 23:40:03.187152 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:04.188216 kubelet[1896]: E0208 23:40:04.188159 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:05.188726 kubelet[1896]: E0208 23:40:05.188691 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:06.189472 kubelet[1896]: E0208 23:40:06.189429 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:07.065711 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3784880857.mount: Deactivated successfully.
Feb 8 23:40:07.190189 kubelet[1896]: E0208 23:40:07.190131 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:08.190939 kubelet[1896]: E0208 23:40:08.190888 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:09.191670 kubelet[1896]: E0208 23:40:09.191618 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:10.192294 kubelet[1896]: E0208 23:40:10.192237 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:11.193440 kubelet[1896]: E0208 23:40:11.193337 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:12.194224 kubelet[1896]: E0208 23:40:12.194172 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:13.195353 kubelet[1896]: E0208 23:40:13.195301 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:14.195510 kubelet[1896]: E0208 23:40:14.195456 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:15.166632 kubelet[1896]: E0208 23:40:15.166583 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:15.195835 kubelet[1896]: E0208 23:40:15.195799 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:16.197092 kubelet[1896]: E0208 23:40:16.197001 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:17.197223 kubelet[1896]: E0208 23:40:17.197178 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:17.600381 env[1342]: time="2024-02-08T23:40:17.600269745Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:17.606095 env[1342]: time="2024-02-08T23:40:17.606056641Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:17.609869 env[1342]: time="2024-02-08T23:40:17.609839230Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:17.612740 env[1342]: time="2024-02-08T23:40:17.612658120Z" level=info msg="ImageCreate event &ImageCreate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:17.613794 env[1342]: time="2024-02-08T23:40:17.613763134Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 8 23:40:17.615632 env[1342]: time="2024-02-08T23:40:17.615604624Z" level=info msg="CreateContainer within sandbox \"b94f122e35952678080e0e4aef393275c9b270385bf743aea6ffd8a266ca80ed\" for container &ContainerMetadata{Name:nginx,Attempt:0,}"
Feb 8 23:40:17.646340 env[1342]: time="2024-02-08T23:40:17.646303885Z" level=info msg="CreateContainer within sandbox \"b94f122e35952678080e0e4aef393275c9b270385bf743aea6ffd8a266ca80ed\" for &ContainerMetadata{Name:nginx,Attempt:0,} returns container id \"9624f5aae5d34eb27e48c26dafd8694cb7f227864127ae023a7e98547ca4cf29\""
Feb 8 23:40:17.646742 env[1342]: time="2024-02-08T23:40:17.646677824Z" level=info msg="StartContainer for \"9624f5aae5d34eb27e48c26dafd8694cb7f227864127ae023a7e98547ca4cf29\""
Feb 8 23:40:17.669610 systemd[1]: run-containerd-runc-k8s.io-9624f5aae5d34eb27e48c26dafd8694cb7f227864127ae023a7e98547ca4cf29-runc.EDU3Eg.mount: Deactivated successfully.
Feb 8 23:40:17.671174 systemd[1]: Started cri-containerd-9624f5aae5d34eb27e48c26dafd8694cb7f227864127ae023a7e98547ca4cf29.scope.
Feb 8 23:40:17.702998 env[1342]: time="2024-02-08T23:40:17.702885112Z" level=info msg="StartContainer for \"9624f5aae5d34eb27e48c26dafd8694cb7f227864127ae023a7e98547ca4cf29\" returns successfully"
Feb 8 23:40:18.198238 kubelet[1896]: E0208 23:40:18.198182 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:18.327540 kubelet[1896]: I0208 23:40:18.327258 1896 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nginx-deployment-6d5f899847-mfxs5" podStartSLOduration=5.548466607 podCreationTimestamp="2024-02-08 23:39:58 +0000 UTC" firstStartedPulling="2024-02-08 23:40:02.835309677 +0000 UTC m=+27.929038064" lastFinishedPulling="2024-02-08 23:40:17.614062165 +0000 UTC m=+42.707790652" observedRunningTime="2024-02-08 23:40:18.327199993 +0000 UTC m=+43.420928380" watchObservedRunningTime="2024-02-08 23:40:18.327219195 +0000 UTC m=+43.420947582"
Feb 8 23:40:19.199078 kubelet[1896]: E0208 23:40:19.199033 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:20.199820 kubelet[1896]: E0208 23:40:20.199760 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:21.200755 kubelet[1896]: E0208 23:40:21.200696 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:22.201411 kubelet[1896]: E0208 23:40:22.201350 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:22.386187 kubelet[1896]: I0208 23:40:22.386147 1896 topology_manager.go:215] "Topology Admit Handler" podUID="9279f559-26fb-4fe7-a613-038666da8b11" podNamespace="default" podName="nfs-server-provisioner-0"
Feb 8 23:40:22.390966 systemd[1]: Created slice kubepods-besteffort-pod9279f559_26fb_4fe7_a613_038666da8b11.slice.
Feb 8 23:40:22.479894 kubelet[1896]: I0208 23:40:22.479750 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data\" (UniqueName: \"kubernetes.io/empty-dir/9279f559-26fb-4fe7-a613-038666da8b11-data\") pod \"nfs-server-provisioner-0\" (UID: \"9279f559-26fb-4fe7-a613-038666da8b11\") " pod="default/nfs-server-provisioner-0"
Feb 8 23:40:22.479894 kubelet[1896]: I0208 23:40:22.479818 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sblhv\" (UniqueName: \"kubernetes.io/projected/9279f559-26fb-4fe7-a613-038666da8b11-kube-api-access-sblhv\") pod \"nfs-server-provisioner-0\" (UID: \"9279f559-26fb-4fe7-a613-038666da8b11\") " pod="default/nfs-server-provisioner-0"
Feb 8 23:40:22.694248 env[1342]: time="2024-02-08T23:40:22.694187475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9279f559-26fb-4fe7-a613-038666da8b11,Namespace:default,Attempt:0,}"
Feb 8 23:40:22.751989 systemd-networkd[1490]: lxce4d5c24233f8: Link UP
Feb 8 23:40:22.759028 kernel: eth0: renamed from tmp75de3
Feb 8 23:40:22.770473 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 8 23:40:22.770551 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxce4d5c24233f8: link becomes ready
Feb 8 23:40:22.770583 systemd-networkd[1490]: lxce4d5c24233f8: Gained carrier
Feb 8 23:40:22.952647 env[1342]: time="2024-02-08T23:40:22.952569535Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:40:22.953028 env[1342]: time="2024-02-08T23:40:22.952618839Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:40:22.953028 env[1342]: time="2024-02-08T23:40:22.952632141Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:40:22.953028 env[1342]: time="2024-02-08T23:40:22.952768353Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/75de3e3495dc17d9d1dbdf867722559f121be4558cdb3f165b3c12b900672443 pid=3056 runtime=io.containerd.runc.v2
Feb 8 23:40:22.974850 systemd[1]: Started cri-containerd-75de3e3495dc17d9d1dbdf867722559f121be4558cdb3f165b3c12b900672443.scope.
Feb 8 23:40:23.013303 env[1342]: time="2024-02-08T23:40:23.012708990Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:nfs-server-provisioner-0,Uid:9279f559-26fb-4fe7-a613-038666da8b11,Namespace:default,Attempt:0,} returns sandbox id \"75de3e3495dc17d9d1dbdf867722559f121be4558cdb3f165b3c12b900672443\""
Feb 8 23:40:23.014436 env[1342]: time="2024-02-08T23:40:23.014405244Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\""
Feb 8 23:40:23.202229 kubelet[1896]: E0208 23:40:23.202180 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:24.202923 kubelet[1896]: E0208 23:40:24.202864 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:24.567304 systemd-networkd[1490]: lxce4d5c24233f8: Gained IPv6LL
Feb 8 23:40:25.203916 kubelet[1896]: E0208 23:40:25.203861 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:25.613361 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2906956143.mount: Deactivated successfully.
Feb 8 23:40:26.204815 kubelet[1896]: E0208 23:40:26.204772 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:27.205333 kubelet[1896]: E0208 23:40:27.205204 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:27.539353 env[1342]: time="2024-02-08T23:40:27.538962972Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:27.545480 env[1342]: time="2024-02-08T23:40:27.545438316Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:27.551195 env[1342]: time="2024-02-08T23:40:27.551156496Z" level=info msg="ImageUpdate event &ImageUpdate{Name:registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:27.555794 env[1342]: time="2024-02-08T23:40:27.555761483Z" level=info msg="ImageCreate event &ImageCreate{Name:registry.k8s.io/sig-storage/nfs-provisioner@sha256:c825f3d5e28bde099bd7a3daace28772d412c9157ad47fa752a9ad0baafc118d,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:27.556362 env[1342]: time="2024-02-08T23:40:27.556331630Z" level=info msg="PullImage \"registry.k8s.io/sig-storage/nfs-provisioner:v4.0.8\" returns image reference \"sha256:fd0b16f70b66b72bcb2f91d556fa33eba02729c44ffc5f2c16130e7f9fbed3c4\""
Feb 8 23:40:27.558734 env[1342]: time="2024-02-08T23:40:27.558703930Z" level=info msg="CreateContainer within sandbox \"75de3e3495dc17d9d1dbdf867722559f121be4558cdb3f165b3c12b900672443\" for container &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,}"
Feb 8 23:40:27.580992 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount510522781.mount: Deactivated successfully.
Feb 8 23:40:27.587977 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1322057081.mount: Deactivated successfully.
Feb 8 23:40:27.598718 env[1342]: time="2024-02-08T23:40:27.598680087Z" level=info msg="CreateContainer within sandbox \"75de3e3495dc17d9d1dbdf867722559f121be4558cdb3f165b3c12b900672443\" for &ContainerMetadata{Name:nfs-server-provisioner,Attempt:0,} returns container id \"45240ed5aec1741c1d455ada3d88c59cb099815d785c63dd1976b06bd75ecb79\""
Feb 8 23:40:27.599216 env[1342]: time="2024-02-08T23:40:27.599182529Z" level=info msg="StartContainer for \"45240ed5aec1741c1d455ada3d88c59cb099815d785c63dd1976b06bd75ecb79\""
Feb 8 23:40:27.618461 systemd[1]: Started cri-containerd-45240ed5aec1741c1d455ada3d88c59cb099815d785c63dd1976b06bd75ecb79.scope.
Feb 8 23:40:27.648677 env[1342]: time="2024-02-08T23:40:27.648631882Z" level=info msg="StartContainer for \"45240ed5aec1741c1d455ada3d88c59cb099815d785c63dd1976b06bd75ecb79\" returns successfully"
Feb 8 23:40:28.206382 kubelet[1896]: E0208 23:40:28.206330 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:28.350147 kubelet[1896]: I0208 23:40:28.350068 1896 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/nfs-server-provisioner-0" podStartSLOduration=1.80723507 podCreationTimestamp="2024-02-08 23:40:22 +0000 UTC" firstStartedPulling="2024-02-08 23:40:23.013907499 +0000 UTC m=+48.107635886" lastFinishedPulling="2024-02-08 23:40:27.556697361 +0000 UTC m=+52.650425848" observedRunningTime="2024-02-08 23:40:28.349932424 +0000 UTC m=+53.443660911" watchObservedRunningTime="2024-02-08 23:40:28.350025032 +0000 UTC m=+53.443753419"
Feb 8 23:40:29.206526 kubelet[1896]: E0208 23:40:29.206460 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:30.207087 kubelet[1896]: E0208 23:40:30.206994 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:31.208237 kubelet[1896]: E0208 23:40:31.208180 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:32.208897 kubelet[1896]: E0208 23:40:32.208843 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:33.209373 kubelet[1896]: E0208 23:40:33.209273 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:34.209980 kubelet[1896]: E0208 23:40:34.209915 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:35.166965 kubelet[1896]: E0208 23:40:35.166913 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:35.210618 kubelet[1896]: E0208 23:40:35.210565 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:36.211111 kubelet[1896]: E0208 23:40:36.211069 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:37.212170 kubelet[1896]: E0208 23:40:37.212109 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:37.430556 kubelet[1896]: I0208 23:40:37.430503 1896 topology_manager.go:215] "Topology Admit Handler" podUID="51df3f6d-7ea9-4890-ba68-9d42a92c503d" podNamespace="default" podName="test-pod-1"
Feb 8 23:40:37.436319 systemd[1]: Created slice kubepods-besteffort-pod51df3f6d_7ea9_4890_ba68_9d42a92c503d.slice.
Feb 8 23:40:37.571499 kubelet[1896]: I0208 23:40:37.571464 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-747733e8-6c39-4440-abe0-d9344a39e5a3\" (UniqueName: \"kubernetes.io/nfs/51df3f6d-7ea9-4890-ba68-9d42a92c503d-pvc-747733e8-6c39-4440-abe0-d9344a39e5a3\") pod \"test-pod-1\" (UID: \"51df3f6d-7ea9-4890-ba68-9d42a92c503d\") " pod="default/test-pod-1"
Feb 8 23:40:37.571698 kubelet[1896]: I0208 23:40:37.571533 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gf9ll\" (UniqueName: \"kubernetes.io/projected/51df3f6d-7ea9-4890-ba68-9d42a92c503d-kube-api-access-gf9ll\") pod \"test-pod-1\" (UID: \"51df3f6d-7ea9-4890-ba68-9d42a92c503d\") " pod="default/test-pod-1"
Feb 8 23:40:37.787092 kernel: FS-Cache: Loaded
Feb 8 23:40:37.913121 kernel: RPC: Registered named UNIX socket transport module.
Feb 8 23:40:37.913259 kernel: RPC: Registered udp transport module.
Feb 8 23:40:37.913288 kernel: RPC: Registered tcp transport module.
Feb 8 23:40:37.915402 kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Feb 8 23:40:38.095027 kernel: FS-Cache: Netfs 'nfs' registered for caching
Feb 8 23:40:38.212761 kubelet[1896]: E0208 23:40:38.212634 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:38.309828 kernel: NFS: Registering the id_resolver key type
Feb 8 23:40:38.310023 kernel: Key type id_resolver registered
Feb 8 23:40:38.310056 kernel: Key type id_legacy registered
Feb 8 23:40:38.619280 nfsidmap[3171]: nss_getpwnam: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-3441531bae'
Feb 8 23:40:38.689933 nfsidmap[3172]: nss_name_to_gid: name 'root@nfs-server-provisioner.default.svc.cluster.local' does not map into domain '3.2-a-3441531bae'
Feb 8 23:40:38.940774 env[1342]: time="2024-02-08T23:40:38.940468491Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:51df3f6d-7ea9-4890-ba68-9d42a92c503d,Namespace:default,Attempt:0,}"
Feb 8 23:40:39.000876 systemd-networkd[1490]: lxcca8a6bb33062: Link UP
Feb 8 23:40:39.011049 kernel: eth0: renamed from tmpb41e7
Feb 8 23:40:39.026099 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready
Feb 8 23:40:39.026168 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxcca8a6bb33062: link becomes ready
Feb 8 23:40:39.026670 systemd-networkd[1490]: lxcca8a6bb33062: Gained carrier
Feb 8 23:40:39.212979 kubelet[1896]: E0208 23:40:39.212860 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:39.273673 env[1342]: time="2024-02-08T23:40:39.273599769Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:40:39.273673 env[1342]: time="2024-02-08T23:40:39.273635171Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:40:39.273673 env[1342]: time="2024-02-08T23:40:39.273649172Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:40:39.274087 env[1342]: time="2024-02-08T23:40:39.274026098Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/b41e726124b89d5301e5b8fe2b2e6f8f040827457de65a2428bd59c993209e3e pid=3199 runtime=io.containerd.runc.v2
Feb 8 23:40:39.287639 systemd[1]: Started cri-containerd-b41e726124b89d5301e5b8fe2b2e6f8f040827457de65a2428bd59c993209e3e.scope.
Feb 8 23:40:39.328857 env[1342]: time="2024-02-08T23:40:39.328814617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:test-pod-1,Uid:51df3f6d-7ea9-4890-ba68-9d42a92c503d,Namespace:default,Attempt:0,} returns sandbox id \"b41e726124b89d5301e5b8fe2b2e6f8f040827457de65a2428bd59c993209e3e\""
Feb 8 23:40:39.330432 env[1342]: time="2024-02-08T23:40:39.330399924Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\""
Feb 8 23:40:39.924409 env[1342]: time="2024-02-08T23:40:39.924360637Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:39.929979 env[1342]: time="2024-02-08T23:40:39.929686899Z" level=info msg="ImageUpdate event &ImageUpdate{Name:sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:39.933165 env[1342]: time="2024-02-08T23:40:39.933120032Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx:latest,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:39.938157 env[1342]: time="2024-02-08T23:40:39.937891156Z" level=info msg="ImageUpdate event &ImageUpdate{Name:ghcr.io/flatcar/nginx@sha256:e34a272f01984c973b1e034e197c02f77dda18981038e3a54e957554ada4fec6,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:39.938418 env[1342]: time="2024-02-08T23:40:39.938390189Z" level=info msg="PullImage \"ghcr.io/flatcar/nginx:latest\" returns image reference \"sha256:3a8963c304a2f89d2bfa055e07403bae348b293c891b8ea01f7136642eaa277a\""
Feb 8 23:40:39.940568 env[1342]: time="2024-02-08T23:40:39.940538635Z" level=info msg="CreateContainer within sandbox \"b41e726124b89d5301e5b8fe2b2e6f8f040827457de65a2428bd59c993209e3e\" for container &ContainerMetadata{Name:test,Attempt:0,}"
Feb 8 23:40:39.972787 env[1342]: time="2024-02-08T23:40:39.972749521Z" level=info msg="CreateContainer within sandbox \"b41e726124b89d5301e5b8fe2b2e6f8f040827457de65a2428bd59c993209e3e\" for &ContainerMetadata{Name:test,Attempt:0,} returns container id \"22344753de72729d704e1e6c3c0c14c15d74f6ab4120c4d48204a273f234b493\""
Feb 8 23:40:39.973557 env[1342]: time="2024-02-08T23:40:39.973523174Z" level=info msg="StartContainer for \"22344753de72729d704e1e6c3c0c14c15d74f6ab4120c4d48204a273f234b493\""
Feb 8 23:40:39.993211 systemd[1]: Started cri-containerd-22344753de72729d704e1e6c3c0c14c15d74f6ab4120c4d48204a273f234b493.scope.
Feb 8 23:40:40.025357 env[1342]: time="2024-02-08T23:40:40.025315863Z" level=info msg="StartContainer for \"22344753de72729d704e1e6c3c0c14c15d74f6ab4120c4d48204a273f234b493\" returns successfully"
Feb 8 23:40:40.213282 kubelet[1896]: E0208 23:40:40.213105 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:40.695593 systemd-networkd[1490]: lxcca8a6bb33062: Gained IPv6LL
Feb 8 23:40:41.214251 kubelet[1896]: E0208 23:40:41.214146 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:42.214578 kubelet[1896]: E0208 23:40:42.214524 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:43.214704 kubelet[1896]: E0208 23:40:43.214650 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:44.215623 kubelet[1896]: E0208 23:40:44.215567 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:45.216071 kubelet[1896]: E0208 23:40:45.216033 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:46.180370 kubelet[1896]: I0208 23:40:46.180319 1896 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="default/test-pod-1" podStartSLOduration=22.571158955 podCreationTimestamp="2024-02-08 23:40:23 +0000 UTC" firstStartedPulling="2024-02-08 23:40:39.329937193 +0000 UTC m=+64.423665680" lastFinishedPulling="2024-02-08 23:40:39.939059735 +0000 UTC m=+65.032788122" observedRunningTime="2024-02-08 23:40:40.376886643 +0000 UTC m=+65.470615030" watchObservedRunningTime="2024-02-08 23:40:46.180281397 +0000 UTC m=+71.274009884"
Feb 8 23:40:46.201128 systemd[1]: run-containerd-runc-k8s.io-d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d-runc.OpygTP.mount: Deactivated successfully.
Feb 8 23:40:46.215917 env[1342]: time="2024-02-08T23:40:46.215848465Z" level=error msg="failed to reload cni configuration after receiving fs change event(\"/etc/cni/net.d/05-cilium.conf\": REMOVE)" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Feb 8 23:40:46.216915 kubelet[1896]: E0208 23:40:46.216882 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:46.221385 env[1342]: time="2024-02-08T23:40:46.221356801Z" level=info msg="StopContainer for \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\" with timeout 2 (s)"
Feb 8 23:40:46.221701 env[1342]: time="2024-02-08T23:40:46.221670720Z" level=info msg="Stop container \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\" with signal terminated"
Feb 8 23:40:46.229482 systemd-networkd[1490]: lxc_health: Link DOWN
Feb 8 23:40:46.229490 systemd-networkd[1490]: lxc_health: Lost carrier
Feb 8 23:40:46.250804 systemd[1]: cri-containerd-d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d.scope: Deactivated successfully.
Feb 8 23:40:46.251120 systemd[1]: cri-containerd-d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d.scope: Consumed 6.590s CPU time.
Feb 8 23:40:46.268907 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d-rootfs.mount: Deactivated successfully.
Feb 8 23:40:47.217660 kubelet[1896]: E0208 23:40:47.217615 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:48.218830 kubelet[1896]: E0208 23:40:48.218730 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:48.231635 env[1342]: time="2024-02-08T23:40:48.231543901Z" level=info msg="Kill container \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\""
Feb 8 23:40:48.786229 env[1342]: time="2024-02-08T23:40:48.786164355Z" level=info msg="shim disconnected" id=d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d
Feb 8 23:40:48.786229 env[1342]: time="2024-02-08T23:40:48.786225959Z" level=warning msg="cleaning up after shim disconnected" id=d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d namespace=k8s.io
Feb 8 23:40:48.786518 env[1342]: time="2024-02-08T23:40:48.786240660Z" level=info msg="cleaning up dead shim"
Feb 8 23:40:48.795908 env[1342]: time="2024-02-08T23:40:48.795851329Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3328 runtime=io.containerd.runc.v2\n"
Feb 8 23:40:48.801831 env[1342]: time="2024-02-08T23:40:48.801790481Z" level=info msg="StopContainer for \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\" returns successfully"
Feb 8 23:40:48.802486 env[1342]: time="2024-02-08T23:40:48.802456420Z" level=info msg="StopPodSandbox for \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\""
Feb 8 23:40:48.802592 env[1342]: time="2024-02-08T23:40:48.802516424Z" level=info msg="Container to stop \"564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:40:48.802592 env[1342]: time="2024-02-08T23:40:48.802537825Z" level=info msg="Container to stop \"453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:40:48.802592 env[1342]: time="2024-02-08T23:40:48.802554926Z" level=info msg="Container to stop \"271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:40:48.802592 env[1342]: time="2024-02-08T23:40:48.802570927Z" level=info msg="Container to stop \"61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:40:48.802592 env[1342]: time="2024-02-08T23:40:48.802585328Z" level=info msg="Container to stop \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Feb 8 23:40:48.805676 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff-shm.mount: Deactivated successfully.
Feb 8 23:40:48.811433 systemd[1]: cri-containerd-848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff.scope: Deactivated successfully.
Feb 8 23:40:48.830690 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff-rootfs.mount: Deactivated successfully.
Feb 8 23:40:48.839964 env[1342]: time="2024-02-08T23:40:48.839922340Z" level=info msg="shim disconnected" id=848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff
Feb 8 23:40:48.840105 env[1342]: time="2024-02-08T23:40:48.839970442Z" level=warning msg="cleaning up after shim disconnected" id=848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff namespace=k8s.io
Feb 8 23:40:48.840105 env[1342]: time="2024-02-08T23:40:48.839982343Z" level=info msg="cleaning up dead shim"
Feb 8 23:40:48.847239 env[1342]: time="2024-02-08T23:40:48.847209371Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:48Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3360 runtime=io.containerd.runc.v2\n"
Feb 8 23:40:48.847504 env[1342]: time="2024-02-08T23:40:48.847477087Z" level=info msg="TearDown network for sandbox \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\" successfully"
Feb 8 23:40:48.847587 env[1342]: time="2024-02-08T23:40:48.847505489Z" level=info msg="StopPodSandbox for \"848c65358597c58c318b803bd2bb67a66a7651c295e99c426dad8495267c47ff\" returns successfully"
Feb 8 23:40:48.949788 kubelet[1896]: I0208 23:40:48.949711 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cilium-config-path\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951040 kubelet[1896]: I0208 23:40:48.950297 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-bpf-maps\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951040 kubelet[1896]: I0208 23:40:48.950341 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-etc-cni-netd\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951040 kubelet[1896]: I0208 23:40:48.950375 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-host-proc-sys-kernel\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951040 kubelet[1896]: I0208 23:40:48.950406 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-hostproc\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951040 kubelet[1896]: I0208 23:40:48.950441 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-lib-modules\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951040 kubelet[1896]: I0208 23:40:48.950471 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-xtables-lock\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951437 kubelet[1896]: I0208 23:40:48.950510 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jdg9k\" (UniqueName: \"kubernetes.io/projected/d5958cc2-b53c-4a0c-8987-0bda6989b50c-kube-api-access-jdg9k\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951437 kubelet[1896]: I0208 23:40:48.950556 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-host-proc-sys-net\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951437 kubelet[1896]: I0208 23:40:48.950594 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cilium-run\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951437 kubelet[1896]: I0208 23:40:48.950633 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5958cc2-b53c-4a0c-8987-0bda6989b50c-clustermesh-secrets\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951437 kubelet[1896]: I0208 23:40:48.950672 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5958cc2-b53c-4a0c-8987-0bda6989b50c-hubble-tls\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951437 kubelet[1896]: I0208 23:40:48.950707 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cilium-cgroup\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID: \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") "
Feb 8 23:40:48.951763 kubelet[1896]: I0208 23:40:48.950739 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cni-path\") pod \"d5958cc2-b53c-4a0c-8987-0bda6989b50c\" (UID:
\"d5958cc2-b53c-4a0c-8987-0bda6989b50c\") " Feb 8 23:40:48.951763 kubelet[1896]: I0208 23:40:48.950796 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cni-path" (OuterVolumeSpecName: "cni-path") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:48.951763 kubelet[1896]: I0208 23:40:48.950847 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:48.951763 kubelet[1896]: I0208 23:40:48.950874 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:48.951763 kubelet[1896]: I0208 23:40:48.950900 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "host-proc-sys-kernel". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:48.952069 kubelet[1896]: I0208 23:40:48.950927 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-hostproc" (OuterVolumeSpecName: "hostproc") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:48.952069 kubelet[1896]: I0208 23:40:48.950949 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:48.952069 kubelet[1896]: I0208 23:40:48.950974 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:48.952975 kubelet[1896]: I0208 23:40:48.952945 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "host-proc-sys-net". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:48.953467 kubelet[1896]: I0208 23:40:48.953103 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:48.954468 kubelet[1896]: I0208 23:40:48.954434 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue "" Feb 8 23:40:48.954655 kubelet[1896]: I0208 23:40:48.954631 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue "" Feb 8 23:40:48.958097 kubelet[1896]: I0208 23:40:48.957976 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5958cc2-b53c-4a0c-8987-0bda6989b50c-kube-api-access-jdg9k" (OuterVolumeSpecName: "kube-api-access-jdg9k") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "kube-api-access-jdg9k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:40:48.961796 kubelet[1896]: I0208 23:40:48.961770 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5958cc2-b53c-4a0c-8987-0bda6989b50c-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Feb 8 23:40:48.962105 kubelet[1896]: I0208 23:40:48.962084 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5958cc2-b53c-4a0c-8987-0bda6989b50c-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "d5958cc2-b53c-4a0c-8987-0bda6989b50c" (UID: "d5958cc2-b53c-4a0c-8987-0bda6989b50c"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Feb 8 23:40:48.962337 systemd[1]: var-lib-kubelet-pods-d5958cc2\x2db53c\x2d4a0c\x2d8987\x2d0bda6989b50c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djdg9k.mount: Deactivated successfully. Feb 8 23:40:48.964681 systemd[1]: var-lib-kubelet-pods-d5958cc2\x2db53c\x2d4a0c\x2d8987\x2d0bda6989b50c-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully. Feb 8 23:40:48.964791 systemd[1]: var-lib-kubelet-pods-d5958cc2\x2db53c\x2d4a0c\x2d8987\x2d0bda6989b50c-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully. 
Feb 8 23:40:49.051692 kubelet[1896]: I0208 23:40:49.051399 1896 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-jdg9k\" (UniqueName: \"kubernetes.io/projected/d5958cc2-b53c-4a0c-8987-0bda6989b50c-kube-api-access-jdg9k\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.051692 kubelet[1896]: I0208 23:40:49.051441 1896 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-bpf-maps\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.051692 kubelet[1896]: I0208 23:40:49.051458 1896 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-etc-cni-netd\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.051692 kubelet[1896]: I0208 23:40:49.051473 1896 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-host-proc-sys-kernel\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.051692 kubelet[1896]: I0208 23:40:49.051489 1896 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-hostproc\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.051692 kubelet[1896]: I0208 23:40:49.051505 1896 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-lib-modules\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.051692 kubelet[1896]: I0208 23:40:49.051521 1896 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-xtables-lock\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.051692 kubelet[1896]: I0208 23:40:49.051536 1896 reconciler_common.go:300] "Volume detached 
for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-host-proc-sys-net\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.052255 kubelet[1896]: I0208 23:40:49.051555 1896 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cilium-run\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.052255 kubelet[1896]: I0208 23:40:49.051575 1896 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/d5958cc2-b53c-4a0c-8987-0bda6989b50c-clustermesh-secrets\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.052255 kubelet[1896]: I0208 23:40:49.051589 1896 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/d5958cc2-b53c-4a0c-8987-0bda6989b50c-hubble-tls\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.052255 kubelet[1896]: I0208 23:40:49.051606 1896 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cilium-cgroup\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.052255 kubelet[1896]: I0208 23:40:49.051620 1896 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cni-path\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.052255 kubelet[1896]: I0208 23:40:49.051640 1896 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/d5958cc2-b53c-4a0c-8987-0bda6989b50c-cilium-config-path\") on node \"10.200.8.12\" DevicePath \"\"" Feb 8 23:40:49.219127 kubelet[1896]: E0208 23:40:49.219090 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:49.223116 
systemd[1]: Removed slice kubepods-burstable-podd5958cc2_b53c_4a0c_8987_0bda6989b50c.slice. Feb 8 23:40:49.223264 systemd[1]: kubepods-burstable-podd5958cc2_b53c_4a0c_8987_0bda6989b50c.slice: Consumed 6.705s CPU time. Feb 8 23:40:49.390149 kubelet[1896]: I0208 23:40:49.390121 1896 scope.go:117] "RemoveContainer" containerID="d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d" Feb 8 23:40:49.392454 env[1342]: time="2024-02-08T23:40:49.392376347Z" level=info msg="RemoveContainer for \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\"" Feb 8 23:40:49.408323 env[1342]: time="2024-02-08T23:40:49.408288177Z" level=info msg="RemoveContainer for \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\" returns successfully" Feb 8 23:40:49.408503 kubelet[1896]: I0208 23:40:49.408481 1896 scope.go:117] "RemoveContainer" containerID="61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206" Feb 8 23:40:49.410084 env[1342]: time="2024-02-08T23:40:49.410053380Z" level=info msg="RemoveContainer for \"61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206\"" Feb 8 23:40:49.415959 env[1342]: time="2024-02-08T23:40:49.415926623Z" level=info msg="RemoveContainer for \"61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206\" returns successfully" Feb 8 23:40:49.416159 kubelet[1896]: I0208 23:40:49.416130 1896 scope.go:117] "RemoveContainer" containerID="453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8" Feb 8 23:40:49.417029 env[1342]: time="2024-02-08T23:40:49.416991185Z" level=info msg="RemoveContainer for \"453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8\"" Feb 8 23:40:49.423479 env[1342]: time="2024-02-08T23:40:49.423449462Z" level=info msg="RemoveContainer for \"453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8\" returns successfully" Feb 8 23:40:49.423597 kubelet[1896]: I0208 23:40:49.423583 1896 scope.go:117] "RemoveContainer" 
containerID="271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137" Feb 8 23:40:49.424553 env[1342]: time="2024-02-08T23:40:49.424524525Z" level=info msg="RemoveContainer for \"271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137\"" Feb 8 23:40:49.430980 env[1342]: time="2024-02-08T23:40:49.430949301Z" level=info msg="RemoveContainer for \"271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137\" returns successfully" Feb 8 23:40:49.431120 kubelet[1896]: I0208 23:40:49.431101 1896 scope.go:117] "RemoveContainer" containerID="564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6" Feb 8 23:40:49.431993 env[1342]: time="2024-02-08T23:40:49.431967960Z" level=info msg="RemoveContainer for \"564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6\"" Feb 8 23:40:49.437816 env[1342]: time="2024-02-08T23:40:49.437780700Z" level=info msg="RemoveContainer for \"564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6\" returns successfully" Feb 8 23:40:49.437979 kubelet[1896]: I0208 23:40:49.437960 1896 scope.go:117] "RemoveContainer" containerID="d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d" Feb 8 23:40:49.438270 env[1342]: time="2024-02-08T23:40:49.438178823Z" level=error msg="ContainerStatus for \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\": not found" Feb 8 23:40:49.438448 kubelet[1896]: E0208 23:40:49.438429 1896 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\": not found" containerID="d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d" Feb 8 23:40:49.438559 kubelet[1896]: I0208 23:40:49.438544 1896 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d"} err="failed to get container status \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\": rpc error: code = NotFound desc = an error occurred when try to find container \"d351b8166780a80860fef0bac3e632281275c18693d11994e95cf97bba6b507d\": not found" Feb 8 23:40:49.438632 kubelet[1896]: I0208 23:40:49.438564 1896 scope.go:117] "RemoveContainer" containerID="61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206" Feb 8 23:40:49.438782 env[1342]: time="2024-02-08T23:40:49.438730255Z" level=error msg="ContainerStatus for \"61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206\": not found" Feb 8 23:40:49.438897 kubelet[1896]: E0208 23:40:49.438878 1896 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206\": not found" containerID="61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206" Feb 8 23:40:49.438976 kubelet[1896]: I0208 23:40:49.438914 1896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206"} err="failed to get container status \"61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206\": rpc error: code = NotFound desc = an error occurred when try to find container \"61fe5fad6efe08bc1de72e05bafb30bb6ca88d6d3fbdfeec97d1beded5219206\": not found" Feb 8 23:40:49.438976 kubelet[1896]: I0208 23:40:49.438932 1896 scope.go:117] "RemoveContainer" 
containerID="453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8" Feb 8 23:40:49.439234 env[1342]: time="2024-02-08T23:40:49.439189482Z" level=error msg="ContainerStatus for \"453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8\": not found" Feb 8 23:40:49.439413 kubelet[1896]: E0208 23:40:49.439395 1896 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8\": not found" containerID="453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8" Feb 8 23:40:49.439488 kubelet[1896]: I0208 23:40:49.439445 1896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8"} err="failed to get container status \"453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8\": rpc error: code = NotFound desc = an error occurred when try to find container \"453d2fb77d3c47f3e88dd5084c22ab02c6fa51afb2563746b35778ace2498dc8\": not found" Feb 8 23:40:49.439488 kubelet[1896]: I0208 23:40:49.439460 1896 scope.go:117] "RemoveContainer" containerID="271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137" Feb 8 23:40:49.439722 env[1342]: time="2024-02-08T23:40:49.439673010Z" level=error msg="ContainerStatus for \"271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137\": not found" Feb 8 23:40:49.439895 kubelet[1896]: E0208 23:40:49.439879 1896 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = an error occurred when try to find container \"271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137\": not found" containerID="271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137" Feb 8 23:40:49.439976 kubelet[1896]: I0208 23:40:49.439909 1896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137"} err="failed to get container status \"271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137\": rpc error: code = NotFound desc = an error occurred when try to find container \"271121e156f1251a77ded212d4e91c60d81513a37029a546e378c9fdad162137\": not found" Feb 8 23:40:49.439976 kubelet[1896]: I0208 23:40:49.439922 1896 scope.go:117] "RemoveContainer" containerID="564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6" Feb 8 23:40:49.440160 env[1342]: time="2024-02-08T23:40:49.440104535Z" level=error msg="ContainerStatus for \"564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6\": not found" Feb 8 23:40:49.440287 kubelet[1896]: E0208 23:40:49.440268 1896 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6\": not found" containerID="564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6" Feb 8 23:40:49.440366 kubelet[1896]: I0208 23:40:49.440308 1896 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6"} err="failed to get container status \"564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6\": rpc error: code = NotFound desc = an error 
occurred when try to find container \"564f3e7d526603fa4bea3f6c7a8483597068480f6f9703df74eb2f4f8a93f8b6\": not found" Feb 8 23:40:49.800826 kubelet[1896]: I0208 23:40:49.800019 1896 topology_manager.go:215] "Topology Admit Handler" podUID="586aa1ae-dc58-46ff-895a-a2ea1cab1d19" podNamespace="kube-system" podName="cilium-operator-6bc8ccdb58-bfk7h" Feb 8 23:40:49.800826 kubelet[1896]: E0208 23:40:49.800075 1896 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5958cc2-b53c-4a0c-8987-0bda6989b50c" containerName="apply-sysctl-overwrites" Feb 8 23:40:49.800826 kubelet[1896]: E0208 23:40:49.800088 1896 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5958cc2-b53c-4a0c-8987-0bda6989b50c" containerName="cilium-agent" Feb 8 23:40:49.800826 kubelet[1896]: E0208 23:40:49.800098 1896 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5958cc2-b53c-4a0c-8987-0bda6989b50c" containerName="mount-cgroup" Feb 8 23:40:49.800826 kubelet[1896]: E0208 23:40:49.800106 1896 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5958cc2-b53c-4a0c-8987-0bda6989b50c" containerName="mount-bpf-fs" Feb 8 23:40:49.800826 kubelet[1896]: E0208 23:40:49.800115 1896 cpu_manager.go:395] "RemoveStaleState: removing container" podUID="d5958cc2-b53c-4a0c-8987-0bda6989b50c" containerName="clean-cilium-state" Feb 8 23:40:49.800826 kubelet[1896]: I0208 23:40:49.800138 1896 memory_manager.go:346] "RemoveStaleState removing state" podUID="d5958cc2-b53c-4a0c-8987-0bda6989b50c" containerName="cilium-agent" Feb 8 23:40:49.805827 systemd[1]: Created slice kubepods-besteffort-pod586aa1ae_dc58_46ff_895a_a2ea1cab1d19.slice. Feb 8 23:40:49.828423 kubelet[1896]: I0208 23:40:49.828397 1896 topology_manager.go:215] "Topology Admit Handler" podUID="751817cc-8367-4ff0-bab7-70f13afa8190" podNamespace="kube-system" podName="cilium-8f6xx" Feb 8 23:40:49.833028 systemd[1]: Created slice kubepods-burstable-pod751817cc_8367_4ff0_bab7_70f13afa8190.slice. 
Feb 8 23:40:49.836911 kubelet[1896]: W0208 23:40:49.836839 1896 reflector.go:535] object-"kube-system"/"cilium-clustermesh": failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.200.8.12" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.8.12' and this object Feb 8 23:40:49.836911 kubelet[1896]: E0208 23:40:49.836876 1896 reflector.go:147] object-"kube-system"/"cilium-clustermesh": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-clustermesh" is forbidden: User "system:node:10.200.8.12" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.8.12' and this object Feb 8 23:40:49.836911 kubelet[1896]: W0208 23:40:49.836847 1896 reflector.go:535] object-"kube-system"/"cilium-ipsec-keys": failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.200.8.12" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.8.12' and this object Feb 8 23:40:49.836911 kubelet[1896]: E0208 23:40:49.836892 1896 reflector.go:147] object-"kube-system"/"cilium-ipsec-keys": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "cilium-ipsec-keys" is forbidden: User "system:node:10.200.8.12" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.8.12' and this object Feb 8 23:40:49.837400 kubelet[1896]: W0208 23:40:49.837373 1896 reflector.go:535] object-"kube-system"/"hubble-server-certs": failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.200.8.12" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.8.12' and this object Feb 8 23:40:49.837491 kubelet[1896]: E0208 23:40:49.837396 1896 reflector.go:147] 
object-"kube-system"/"hubble-server-certs": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets "hubble-server-certs" is forbidden: User "system:node:10.200.8.12" cannot list resource "secrets" in API group "" in the namespace "kube-system": no relationship found between node '10.200.8.12' and this object
Feb 8 23:40:49.956453 kubelet[1896]: I0208 23:40:49.956390 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-xtables-lock\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.956453 kubelet[1896]: I0208 23:40:49.956457 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-host-proc-sys-net\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.956733 kubelet[1896]: I0208 23:40:49.956493 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-bpf-maps\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.956733 kubelet[1896]: I0208 23:40:49.956521 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-hostproc\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.956733 kubelet[1896]: I0208 23:40:49.956557 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-cni-path\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.956733 kubelet[1896]: I0208 23:40:49.956588 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-etc-cni-netd\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.956733 kubelet[1896]: I0208 23:40:49.956623 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrzgc\" (UniqueName: \"kubernetes.io/projected/751817cc-8367-4ff0-bab7-70f13afa8190-kube-api-access-lrzgc\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.957044 kubelet[1896]: I0208 23:40:49.956657 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/586aa1ae-dc58-46ff-895a-a2ea1cab1d19-cilium-config-path\") pod \"cilium-operator-6bc8ccdb58-bfk7h\" (UID: \"586aa1ae-dc58-46ff-895a-a2ea1cab1d19\") " pod="kube-system/cilium-operator-6bc8ccdb58-bfk7h"
Feb 8 23:40:49.957044 kubelet[1896]: I0208 23:40:49.956689 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-config-path\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.957044 kubelet[1896]: I0208 23:40:49.956724 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-host-proc-sys-kernel\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.957044 kubelet[1896]: I0208 23:40:49.956760 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/751817cc-8367-4ff0-bab7-70f13afa8190-clustermesh-secrets\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.957044 kubelet[1896]: I0208 23:40:49.956800 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5lvxg\" (UniqueName: \"kubernetes.io/projected/586aa1ae-dc58-46ff-895a-a2ea1cab1d19-kube-api-access-5lvxg\") pod \"cilium-operator-6bc8ccdb58-bfk7h\" (UID: \"586aa1ae-dc58-46ff-895a-a2ea1cab1d19\") " pod="kube-system/cilium-operator-6bc8ccdb58-bfk7h"
Feb 8 23:40:49.957331 kubelet[1896]: I0208 23:40:49.956838 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-ipsec-secrets\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.957331 kubelet[1896]: I0208 23:40:49.956873 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/751817cc-8367-4ff0-bab7-70f13afa8190-hubble-tls\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.957331 kubelet[1896]: I0208 23:40:49.956907 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-run\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.957331 kubelet[1896]: I0208 23:40:49.956943 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-cgroup\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:49.957331 kubelet[1896]: I0208 23:40:49.956980 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-lib-modules\") pod \"cilium-8f6xx\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") " pod="kube-system/cilium-8f6xx"
Feb 8 23:40:50.219954 kubelet[1896]: E0208 23:40:50.219904 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:50.266077 kubelet[1896]: E0208 23:40:50.266046 1896 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
Feb 8 23:40:50.409917 env[1342]: time="2024-02-08T23:40:50.409865778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-bfk7h,Uid:586aa1ae-dc58-46ff-895a-a2ea1cab1d19,Namespace:kube-system,Attempt:0,}"
Feb 8 23:40:50.442783 env[1342]: time="2024-02-08T23:40:50.442714971Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Feb 8 23:40:50.442952 env[1342]: time="2024-02-08T23:40:50.442752074Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Feb 8 23:40:50.442952 env[1342]: time="2024-02-08T23:40:50.442765974Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Feb 8 23:40:50.443102 env[1342]: time="2024-02-08T23:40:50.442983687Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/03ad23f0bbf2b088eca578bf4fb7484a6af59688cead3a6cf18dff04df2bf59c pid=3386 runtime=io.containerd.runc.v2
Feb 8 23:40:50.463252 systemd[1]: Started cri-containerd-03ad23f0bbf2b088eca578bf4fb7484a6af59688cead3a6cf18dff04df2bf59c.scope.
Feb 8 23:40:50.503792 env[1342]: time="2024-02-08T23:40:50.503674085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-operator-6bc8ccdb58-bfk7h,Uid:586aa1ae-dc58-46ff-895a-a2ea1cab1d19,Namespace:kube-system,Attempt:0,} returns sandbox id \"03ad23f0bbf2b088eca578bf4fb7484a6af59688cead3a6cf18dff04df2bf59c\""
Feb 8 23:40:50.505403 env[1342]: time="2024-02-08T23:40:50.505356782Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\""
Feb 8 23:40:51.047385 kubelet[1896]: E0208 23:40:51.047349 1896 pod_workers.go:1300] "Error syncing pod, skipping" err="unmounted volumes=[clustermesh-secrets hubble-tls], unattached volumes=[], failed to process volumes=[]: context canceled" pod="kube-system/cilium-8f6xx" podUID="751817cc-8367-4ff0-bab7-70f13afa8190"
Feb 8 23:40:51.068122 kubelet[1896]: E0208 23:40:51.068088 1896 projected.go:267] Couldn't get secret kube-system/hubble-server-certs: failed to sync secret cache: timed out waiting for the condition
Feb 8 23:40:51.068290 kubelet[1896]: E0208 23:40:51.068143 1896 projected.go:198] Error preparing data for projected volume hubble-tls for pod kube-system/cilium-8f6xx: failed to sync secret cache: timed out waiting for the condition
Feb 8 23:40:51.068290 kubelet[1896]: E0208 23:40:51.068219 1896 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/751817cc-8367-4ff0-bab7-70f13afa8190-hubble-tls podName:751817cc-8367-4ff0-bab7-70f13afa8190 nodeName:}" failed. No retries permitted until 2024-02-08 23:40:51.568194472 +0000 UTC m=+76.661922859 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "hubble-tls" (UniqueName: "kubernetes.io/projected/751817cc-8367-4ff0-bab7-70f13afa8190-hubble-tls") pod "cilium-8f6xx" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190") : failed to sync secret cache: timed out waiting for the condition
Feb 8 23:40:51.156210 systemd[1]: run-containerd-runc-k8s.io-03ad23f0bbf2b088eca578bf4fb7484a6af59688cead3a6cf18dff04df2bf59c-runc.CfZiJO.mount: Deactivated successfully.
Feb 8 23:40:51.219399 kubelet[1896]: I0208 23:40:51.219357 1896 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="d5958cc2-b53c-4a0c-8987-0bda6989b50c" path="/var/lib/kubelet/pods/d5958cc2-b53c-4a0c-8987-0bda6989b50c/volumes"
Feb 8 23:40:51.219994 kubelet[1896]: E0208 23:40:51.219971 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:51.571229 kubelet[1896]: I0208 23:40:51.571176 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-config-path\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571229 kubelet[1896]: I0208 23:40:51.571235 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-bpf-maps\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571475 kubelet[1896]: I0208 23:40:51.571272 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrzgc\" (UniqueName: \"kubernetes.io/projected/751817cc-8367-4ff0-bab7-70f13afa8190-kube-api-access-lrzgc\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571475 kubelet[1896]: I0208 23:40:51.571302 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/751817cc-8367-4ff0-bab7-70f13afa8190-clustermesh-secrets\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571475 kubelet[1896]: I0208 23:40:51.571334 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-lib-modules\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571475 kubelet[1896]: I0208 23:40:51.571364 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-host-proc-sys-net\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571475 kubelet[1896]: I0208 23:40:51.571390 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-cni-path\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571475 kubelet[1896]: I0208 23:40:51.571419 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-etc-cni-netd\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571783 kubelet[1896]: I0208 23:40:51.571452 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-hostproc\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571783 kubelet[1896]: I0208 23:40:51.571486 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-host-proc-sys-kernel\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571783 kubelet[1896]: I0208 23:40:51.571519 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-run\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571783 kubelet[1896]: I0208 23:40:51.571553 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-xtables-lock\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571783 kubelet[1896]: I0208 23:40:51.571597 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-ipsec-secrets\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.571783 kubelet[1896]: I0208 23:40:51.571665 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-cgroup\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.574206 kubelet[1896]: I0208 23:40:51.572075 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-cni-path" (OuterVolumeSpecName: "cni-path") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "cni-path". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:40:51.575369 kubelet[1896]: I0208 23:40:51.575330 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-config-path" (OuterVolumeSpecName: "cilium-config-path") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "cilium-config-path". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Feb 8 23:40:51.582358 systemd[1]: var-lib-kubelet-pods-751817cc\x2d8367\x2d4ff0\x2dbab7\x2d70f13afa8190-volumes-kubernetes.io\x7esecret-cilium\x2dipsec\x2dsecrets.mount: Deactivated successfully.
Feb 8 23:40:51.586471 kubelet[1896]: I0208 23:40:51.575926 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-etc-cni-netd" (OuterVolumeSpecName: "etc-cni-netd") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "etc-cni-netd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:40:51.586570 kubelet[1896]: I0208 23:40:51.575954 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-hostproc" (OuterVolumeSpecName: "hostproc") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "hostproc". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:40:51.586675 kubelet[1896]: I0208 23:40:51.575972 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-host-proc-sys-kernel" (OuterVolumeSpecName: "host-proc-sys-kernel") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "host-proc-sys-kernel". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:40:51.586767 kubelet[1896]: I0208 23:40:51.575991 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-run" (OuterVolumeSpecName: "cilium-run") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "cilium-run". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:40:51.586850 kubelet[1896]: I0208 23:40:51.576037 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-xtables-lock" (OuterVolumeSpecName: "xtables-lock") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "xtables-lock". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:40:51.586930 kubelet[1896]: I0208 23:40:51.579192 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-ipsec-secrets" (OuterVolumeSpecName: "cilium-ipsec-secrets") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "cilium-ipsec-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 8 23:40:51.587137 kubelet[1896]: I0208 23:40:51.579216 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-cgroup" (OuterVolumeSpecName: "cilium-cgroup") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "cilium-cgroup". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:40:51.587252 kubelet[1896]: I0208 23:40:51.582150 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/751817cc-8367-4ff0-bab7-70f13afa8190-clustermesh-secrets" (OuterVolumeSpecName: "clustermesh-secrets") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "clustermesh-secrets". PluginName "kubernetes.io/secret", VolumeGidValue ""
Feb 8 23:40:51.587356 kubelet[1896]: I0208 23:40:51.582174 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-bpf-maps" (OuterVolumeSpecName: "bpf-maps") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "bpf-maps". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:40:51.587445 kubelet[1896]: I0208 23:40:51.586212 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751817cc-8367-4ff0-bab7-70f13afa8190-kube-api-access-lrzgc" (OuterVolumeSpecName: "kube-api-access-lrzgc") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "kube-api-access-lrzgc". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 8 23:40:51.587532 kubelet[1896]: I0208 23:40:51.586242 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:40:51.587624 kubelet[1896]: I0208 23:40:51.586264 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-host-proc-sys-net" (OuterVolumeSpecName: "host-proc-sys-net") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "host-proc-sys-net". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Feb 8 23:40:51.588182 systemd[1]: var-lib-kubelet-pods-751817cc\x2d8367\x2d4ff0\x2dbab7\x2d70f13afa8190-volumes-kubernetes.io\x7esecret-clustermesh\x2dsecrets.mount: Deactivated successfully.
Feb 8 23:40:51.590516 systemd[1]: var-lib-kubelet-pods-751817cc\x2d8367\x2d4ff0\x2dbab7\x2d70f13afa8190-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlrzgc.mount: Deactivated successfully.
Feb 8 23:40:51.672799 kubelet[1896]: I0208 23:40:51.672772 1896 reconciler_common.go:300] "Volume detached for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/751817cc-8367-4ff0-bab7-70f13afa8190-clustermesh-secrets\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.672799 kubelet[1896]: I0208 23:40:51.672799 1896 reconciler_common.go:300] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-lib-modules\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.672957 kubelet[1896]: I0208 23:40:51.672814 1896 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-host-proc-sys-net\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.672957 kubelet[1896]: I0208 23:40:51.672826 1896 reconciler_common.go:300] "Volume detached for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-cni-path\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.672957 kubelet[1896]: I0208 23:40:51.672838 1896 reconciler_common.go:300] "Volume detached for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-etc-cni-netd\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.672957 kubelet[1896]: I0208 23:40:51.672851 1896 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-lrzgc\" (UniqueName: \"kubernetes.io/projected/751817cc-8367-4ff0-bab7-70f13afa8190-kube-api-access-lrzgc\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.672957 kubelet[1896]: I0208 23:40:51.672863 1896 reconciler_common.go:300] "Volume detached for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-xtables-lock\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.672957 kubelet[1896]: I0208 23:40:51.672880 1896 reconciler_common.go:300] "Volume detached for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-hostproc\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.672957 kubelet[1896]: I0208 23:40:51.672893 1896 reconciler_common.go:300] "Volume detached for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-host-proc-sys-kernel\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.672957 kubelet[1896]: I0208 23:40:51.672906 1896 reconciler_common.go:300] "Volume detached for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-run\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.673189 kubelet[1896]: I0208 23:40:51.672919 1896 reconciler_common.go:300] "Volume detached for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-ipsec-secrets\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.673189 kubelet[1896]: I0208 23:40:51.672933 1896 reconciler_common.go:300] "Volume detached for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-cgroup\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.673189 kubelet[1896]: I0208 23:40:51.672946 1896 reconciler_common.go:300] "Volume detached for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/751817cc-8367-4ff0-bab7-70f13afa8190-bpf-maps\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.673189 kubelet[1896]: I0208 23:40:51.672959 1896 reconciler_common.go:300] "Volume detached for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/751817cc-8367-4ff0-bab7-70f13afa8190-cilium-config-path\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:51.773503 kubelet[1896]: I0208 23:40:51.773470 1896 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/751817cc-8367-4ff0-bab7-70f13afa8190-hubble-tls\") pod \"751817cc-8367-4ff0-bab7-70f13afa8190\" (UID: \"751817cc-8367-4ff0-bab7-70f13afa8190\") "
Feb 8 23:40:51.777857 systemd[1]: var-lib-kubelet-pods-751817cc\x2d8367\x2d4ff0\x2dbab7\x2d70f13afa8190-volumes-kubernetes.io\x7eprojected-hubble\x2dtls.mount: Deactivated successfully.
Feb 8 23:40:51.778830 kubelet[1896]: I0208 23:40:51.778626 1896 operation_generator.go:878] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/751817cc-8367-4ff0-bab7-70f13afa8190-hubble-tls" (OuterVolumeSpecName: "hubble-tls") pod "751817cc-8367-4ff0-bab7-70f13afa8190" (UID: "751817cc-8367-4ff0-bab7-70f13afa8190"). InnerVolumeSpecName "hubble-tls". PluginName "kubernetes.io/projected", VolumeGidValue ""
Feb 8 23:40:51.874108 kubelet[1896]: I0208 23:40:51.873893 1896 reconciler_common.go:300] "Volume detached for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/751817cc-8367-4ff0-bab7-70f13afa8190-hubble-tls\") on node \"10.200.8.12\" DevicePath \"\""
Feb 8 23:40:52.220845 kubelet[1896]: E0208 23:40:52.220737 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
Feb 8 23:40:52.401089 systemd[1]: Removed slice kubepods-burstable-pod751817cc_8367_4ff0_bab7_70f13afa8190.slice.
Feb 8 23:40:52.431596 kubelet[1896]: I0208 23:40:52.431564 1896 topology_manager.go:215] "Topology Admit Handler" podUID="3f100900-ef26-4f9e-b28b-5b696af14f9d" podNamespace="kube-system" podName="cilium-x4mzl"
Feb 8 23:40:52.438174 systemd[1]: Created slice kubepods-burstable-pod3f100900_ef26_4f9e_b28b_5b696af14f9d.slice.
Feb 8 23:40:52.577884 kubelet[1896]: I0208 23:40:52.577851 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bpf-maps\" (UniqueName: \"kubernetes.io/host-path/3f100900-ef26-4f9e-b28b-5b696af14f9d-bpf-maps\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.578139 kubelet[1896]: I0208 23:40:52.578126 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostproc\" (UniqueName: \"kubernetes.io/host-path/3f100900-ef26-4f9e-b28b-5b696af14f9d-hostproc\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.578275 kubelet[1896]: I0208 23:40:52.578265 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3f100900-ef26-4f9e-b28b-5b696af14f9d-xtables-lock\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.578487 kubelet[1896]: I0208 23:40:52.578475 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-run\" (UniqueName: \"kubernetes.io/host-path/3f100900-ef26-4f9e-b28b-5b696af14f9d-cilium-run\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.578634 kubelet[1896]: I0208 23:40:52.578622 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-cgroup\" (UniqueName: \"kubernetes.io/host-path/3f100900-ef26-4f9e-b28b-5b696af14f9d-cilium-cgroup\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.578762 kubelet[1896]: I0208 23:40:52.578754 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3f100900-ef26-4f9e-b28b-5b696af14f9d-lib-modules\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.578896 kubelet[1896]: I0208 23:40:52.578887 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hubble-tls\" (UniqueName: \"kubernetes.io/projected/3f100900-ef26-4f9e-b28b-5b696af14f9d-hubble-tls\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.579051 kubelet[1896]: I0208 23:40:52.579039 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"clustermesh-secrets\" (UniqueName: \"kubernetes.io/secret/3f100900-ef26-4f9e-b28b-5b696af14f9d-clustermesh-secrets\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.579197 kubelet[1896]: I0208 23:40:52.579187 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-config-path\" (UniqueName: \"kubernetes.io/configmap/3f100900-ef26-4f9e-b28b-5b696af14f9d-cilium-config-path\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.579340 kubelet[1896]: I0208 23:40:52.579331 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-kernel\" (UniqueName: \"kubernetes.io/host-path/3f100900-ef26-4f9e-b28b-5b696af14f9d-host-proc-sys-kernel\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.579468 kubelet[1896]: I0208 23:40:52.579459 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-path\" (UniqueName: \"kubernetes.io/host-path/3f100900-ef26-4f9e-b28b-5b696af14f9d-cni-path\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.579620 kubelet[1896]: I0208 23:40:52.579610 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cni-netd\" (UniqueName: \"kubernetes.io/host-path/3f100900-ef26-4f9e-b28b-5b696af14f9d-etc-cni-netd\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.579754 kubelet[1896]: I0208 23:40:52.579744 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cilium-ipsec-secrets\" (UniqueName: \"kubernetes.io/secret/3f100900-ef26-4f9e-b28b-5b696af14f9d-cilium-ipsec-secrets\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.579890 kubelet[1896]: I0208 23:40:52.579880 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-proc-sys-net\" (UniqueName: \"kubernetes.io/host-path/3f100900-ef26-4f9e-b28b-5b696af14f9d-host-proc-sys-net\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.580049 kubelet[1896]: I0208 23:40:52.580038 1896 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h8ffs\" (UniqueName: \"kubernetes.io/projected/3f100900-ef26-4f9e-b28b-5b696af14f9d-kube-api-access-h8ffs\") pod \"cilium-x4mzl\" (UID: \"3f100900-ef26-4f9e-b28b-5b696af14f9d\") " pod="kube-system/cilium-x4mzl"
Feb 8 23:40:52.600949 env[1342]: time="2024-02-08T23:40:52.600905685Z" level=info msg="ImageCreate event &ImageCreate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:52.606053 env[1342]: time="2024-02-08T23:40:52.606019372Z" level=info msg="ImageCreate event &ImageCreate{Name:sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:52.608976 env[1342]: time="2024-02-08T23:40:52.608946036Z" level=info msg="ImageUpdate event &ImageUpdate{Name:quay.io/cilium/operator-generic@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e,Labels:map[string]string{io.cri-containerd.image: managed,},XXX_unrecognized:[],}"
Feb 8 23:40:52.609416 env[1342]: time="2024-02-08T23:40:52.609383561Z" level=info msg="PullImage \"quay.io/cilium/operator-generic:v1.12.5@sha256:b296eb7f0f7656a5cc19724f40a8a7121b7fd725278b7d61dc91fe0b7ffd7c0e\" returns image reference \"sha256:ed355de9f59fe391dbe53f3c7c7a60baab3c3a9b7549aa54d10b87fff7dacf7c\""
Feb 8 23:40:52.611303 env[1342]: time="2024-02-08T23:40:52.611274767Z" level=info msg="CreateContainer within sandbox \"03ad23f0bbf2b088eca578bf4fb7484a6af59688cead3a6cf18dff04df2bf59c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:0,}"
Feb 8 23:40:52.633511 env[1342]: time="2024-02-08T23:40:52.633475113Z" level=info msg="CreateContainer within sandbox \"03ad23f0bbf2b088eca578bf4fb7484a6af59688cead3a6cf18dff04df2bf59c\" for &ContainerMetadata{Name:cilium-operator,Attempt:0,} returns container id \"3c9de6724d69dfddf8ff27e703136137f39148a60aaa3d4f7ae0904c4210e180\""
Feb 8 23:40:52.633875 env[1342]: time="2024-02-08T23:40:52.633848834Z" level=info msg="StartContainer for \"3c9de6724d69dfddf8ff27e703136137f39148a60aaa3d4f7ae0904c4210e180\""
Feb 8 23:40:52.657800 systemd[1]: Started cri-containerd-3c9de6724d69dfddf8ff27e703136137f39148a60aaa3d4f7ae0904c4210e180.scope.
Feb 8 23:40:52.698225 env[1342]: time="2024-02-08T23:40:52.698123441Z" level=info msg="StartContainer for \"3c9de6724d69dfddf8ff27e703136137f39148a60aaa3d4f7ae0904c4210e180\" returns successfully" Feb 8 23:40:52.749629 env[1342]: time="2024-02-08T23:40:52.749576929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x4mzl,Uid:3f100900-ef26-4f9e-b28b-5b696af14f9d,Namespace:kube-system,Attempt:0,}" Feb 8 23:40:52.783055 env[1342]: time="2024-02-08T23:40:52.782848797Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Feb 8 23:40:52.783055 env[1342]: time="2024-02-08T23:40:52.782892199Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Feb 8 23:40:52.783055 env[1342]: time="2024-02-08T23:40:52.782907200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Feb 8 23:40:52.783315 env[1342]: time="2024-02-08T23:40:52.783086710Z" level=info msg="starting signal loop" namespace=k8s.io path=/run/containerd/io.containerd.runtime.v2.task/k8s.io/22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a pid=3478 runtime=io.containerd.runc.v2 Feb 8 23:40:52.796135 systemd[1]: Started cri-containerd-22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a.scope. 
Feb 8 23:40:52.840087 env[1342]: time="2024-02-08T23:40:52.839962102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:cilium-x4mzl,Uid:3f100900-ef26-4f9e-b28b-5b696af14f9d,Namespace:kube-system,Attempt:0,} returns sandbox id \"22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a\"" Feb 8 23:40:52.843210 env[1342]: time="2024-02-08T23:40:52.843174182Z" level=info msg="CreateContainer within sandbox \"22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a\" for container &ContainerMetadata{Name:mount-cgroup,Attempt:0,}" Feb 8 23:40:52.874934 env[1342]: time="2024-02-08T23:40:52.874879562Z" level=info msg="CreateContainer within sandbox \"22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a\" for &ContainerMetadata{Name:mount-cgroup,Attempt:0,} returns container id \"25c38eb1f6cc5187ee45d99602909226cb08549d6a504e04d898839f9c11b80e\"" Feb 8 23:40:52.875547 env[1342]: time="2024-02-08T23:40:52.875519098Z" level=info msg="StartContainer for \"25c38eb1f6cc5187ee45d99602909226cb08549d6a504e04d898839f9c11b80e\"" Feb 8 23:40:52.891726 systemd[1]: Started cri-containerd-25c38eb1f6cc5187ee45d99602909226cb08549d6a504e04d898839f9c11b80e.scope. Feb 8 23:40:52.921146 env[1342]: time="2024-02-08T23:40:52.921088255Z" level=info msg="StartContainer for \"25c38eb1f6cc5187ee45d99602909226cb08549d6a504e04d898839f9c11b80e\" returns successfully" Feb 8 23:40:52.926539 systemd[1]: cri-containerd-25c38eb1f6cc5187ee45d99602909226cb08549d6a504e04d898839f9c11b80e.scope: Deactivated successfully. 
Feb 8 23:40:53.429217 kubelet[1896]: E0208 23:40:53.221108 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:53.429217 kubelet[1896]: I0208 23:40:53.409270 1896 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-operator-6bc8ccdb58-bfk7h" podStartSLOduration=2.304673951 podCreationTimestamp="2024-02-08 23:40:49 +0000 UTC" firstStartedPulling="2024-02-08 23:40:50.505092866 +0000 UTC m=+75.598821253" lastFinishedPulling="2024-02-08 23:40:52.609651676 +0000 UTC m=+77.703380063" observedRunningTime="2024-02-08 23:40:53.409158457 +0000 UTC m=+78.502886944" watchObservedRunningTime="2024-02-08 23:40:53.409232761 +0000 UTC m=+78.502961248" Feb 8 23:40:53.431288 kubelet[1896]: I0208 23:40:53.430450 1896 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="751817cc-8367-4ff0-bab7-70f13afa8190" path="/var/lib/kubelet/pods/751817cc-8367-4ff0-bab7-70f13afa8190/volumes" Feb 8 23:40:53.441540 env[1342]: time="2024-02-08T23:40:53.441490048Z" level=info msg="shim disconnected" id=25c38eb1f6cc5187ee45d99602909226cb08549d6a504e04d898839f9c11b80e Feb 8 23:40:53.441708 env[1342]: time="2024-02-08T23:40:53.441547351Z" level=warning msg="cleaning up after shim disconnected" id=25c38eb1f6cc5187ee45d99602909226cb08549d6a504e04d898839f9c11b80e namespace=k8s.io Feb 8 23:40:53.441708 env[1342]: time="2024-02-08T23:40:53.441561552Z" level=info msg="cleaning up dead shim" Feb 8 23:40:53.455517 env[1342]: time="2024-02-08T23:40:53.455479823Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:53Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3563 runtime=io.containerd.runc.v2\n" Feb 8 23:40:54.221479 kubelet[1896]: E0208 23:40:54.221415 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:54.406863 env[1342]: time="2024-02-08T23:40:54.406822954Z" 
level=info msg="CreateContainer within sandbox \"22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a\" for container &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,}" Feb 8 23:40:54.433327 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1226531992.mount: Deactivated successfully. Feb 8 23:40:54.443143 env[1342]: time="2024-02-08T23:40:54.443100239Z" level=info msg="CreateContainer within sandbox \"22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a\" for &ContainerMetadata{Name:apply-sysctl-overwrites,Attempt:0,} returns container id \"e063900dd703b553015b6ce595d06ada1c92ceec60ac41ddd9d90eceddd7deb7\"" Feb 8 23:40:54.443658 env[1342]: time="2024-02-08T23:40:54.443626268Z" level=info msg="StartContainer for \"e063900dd703b553015b6ce595d06ada1c92ceec60ac41ddd9d90eceddd7deb7\"" Feb 8 23:40:54.469326 systemd[1]: Started cri-containerd-e063900dd703b553015b6ce595d06ada1c92ceec60ac41ddd9d90eceddd7deb7.scope. Feb 8 23:40:54.502235 systemd[1]: cri-containerd-e063900dd703b553015b6ce595d06ada1c92ceec60ac41ddd9d90eceddd7deb7.scope: Deactivated successfully. 
Feb 8 23:40:54.503276 env[1342]: time="2024-02-08T23:40:54.502984615Z" level=info msg="StartContainer for \"e063900dd703b553015b6ce595d06ada1c92ceec60ac41ddd9d90eceddd7deb7\" returns successfully" Feb 8 23:40:54.530680 env[1342]: time="2024-02-08T23:40:54.530628628Z" level=info msg="shim disconnected" id=e063900dd703b553015b6ce595d06ada1c92ceec60ac41ddd9d90eceddd7deb7 Feb 8 23:40:54.530680 env[1342]: time="2024-02-08T23:40:54.530676330Z" level=warning msg="cleaning up after shim disconnected" id=e063900dd703b553015b6ce595d06ada1c92ceec60ac41ddd9d90eceddd7deb7 namespace=k8s.io Feb 8 23:40:54.530946 env[1342]: time="2024-02-08T23:40:54.530687831Z" level=info msg="cleaning up dead shim" Feb 8 23:40:54.538575 env[1342]: time="2024-02-08T23:40:54.538539361Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:54Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3628 runtime=io.containerd.runc.v2\n" Feb 8 23:40:55.166523 kubelet[1896]: E0208 23:40:55.166442 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:55.221875 kubelet[1896]: E0208 23:40:55.221797 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:55.267479 kubelet[1896]: E0208 23:40:55.267446 1896 kubelet.go:2855] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Feb 8 23:40:55.410994 env[1342]: time="2024-02-08T23:40:55.410928712Z" level=info msg="CreateContainer within sandbox \"22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a\" for container &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,}" Feb 8 23:40:55.427110 systemd[1]: run-containerd-runc-k8s.io-e063900dd703b553015b6ce595d06ada1c92ceec60ac41ddd9d90eceddd7deb7-runc.5pWRo6.mount: Deactivated successfully. 
Feb 8 23:40:55.427227 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e063900dd703b553015b6ce595d06ada1c92ceec60ac41ddd9d90eceddd7deb7-rootfs.mount: Deactivated successfully. Feb 8 23:40:55.436408 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3835508557.mount: Deactivated successfully. Feb 8 23:40:55.450986 env[1342]: time="2024-02-08T23:40:55.450949175Z" level=info msg="CreateContainer within sandbox \"22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a\" for &ContainerMetadata{Name:mount-bpf-fs,Attempt:0,} returns container id \"2fbafa0670dc37e273b1ba0150a723b2d68c3b205cfd8a439a5cc9e727fc2ed3\"" Feb 8 23:40:55.451529 env[1342]: time="2024-02-08T23:40:55.451497005Z" level=info msg="StartContainer for \"2fbafa0670dc37e273b1ba0150a723b2d68c3b205cfd8a439a5cc9e727fc2ed3\"" Feb 8 23:40:55.469677 systemd[1]: Started cri-containerd-2fbafa0670dc37e273b1ba0150a723b2d68c3b205cfd8a439a5cc9e727fc2ed3.scope. Feb 8 23:40:55.502559 systemd[1]: cri-containerd-2fbafa0670dc37e273b1ba0150a723b2d68c3b205cfd8a439a5cc9e727fc2ed3.scope: Deactivated successfully. 
Feb 8 23:40:55.505195 env[1342]: time="2024-02-08T23:40:55.505150604Z" level=info msg="StartContainer for \"2fbafa0670dc37e273b1ba0150a723b2d68c3b205cfd8a439a5cc9e727fc2ed3\" returns successfully" Feb 8 23:40:55.536543 env[1342]: time="2024-02-08T23:40:55.536491597Z" level=info msg="shim disconnected" id=2fbafa0670dc37e273b1ba0150a723b2d68c3b205cfd8a439a5cc9e727fc2ed3 Feb 8 23:40:55.536543 env[1342]: time="2024-02-08T23:40:55.536542200Z" level=warning msg="cleaning up after shim disconnected" id=2fbafa0670dc37e273b1ba0150a723b2d68c3b205cfd8a439a5cc9e727fc2ed3 namespace=k8s.io Feb 8 23:40:55.536824 env[1342]: time="2024-02-08T23:40:55.536553301Z" level=info msg="cleaning up dead shim" Feb 8 23:40:55.544530 env[1342]: time="2024-02-08T23:40:55.544493030Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:40:55Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3686 runtime=io.containerd.runc.v2\n" Feb 8 23:40:56.222041 kubelet[1896]: E0208 23:40:56.221958 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:56.415031 env[1342]: time="2024-02-08T23:40:56.414975996Z" level=info msg="CreateContainer within sandbox \"22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a\" for container &ContainerMetadata{Name:clean-cilium-state,Attempt:0,}" Feb 8 23:40:56.427183 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2fbafa0670dc37e273b1ba0150a723b2d68c3b205cfd8a439a5cc9e727fc2ed3-rootfs.mount: Deactivated successfully. Feb 8 23:40:56.445654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4073367391.mount: Deactivated successfully. 
Feb 8 23:40:56.459479 env[1342]: time="2024-02-08T23:40:56.459437669Z" level=info msg="CreateContainer within sandbox \"22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a\" for &ContainerMetadata{Name:clean-cilium-state,Attempt:0,} returns container id \"ff12e80274e3c75a5de9cc79eead79f151353d45005e664ccc89c17c33619aac\"" Feb 8 23:40:56.459933 env[1342]: time="2024-02-08T23:40:56.459904294Z" level=info msg="StartContainer for \"ff12e80274e3c75a5de9cc79eead79f151353d45005e664ccc89c17c33619aac\"" Feb 8 23:40:56.476507 systemd[1]: Started cri-containerd-ff12e80274e3c75a5de9cc79eead79f151353d45005e664ccc89c17c33619aac.scope. Feb 8 23:40:56.502130 systemd[1]: cri-containerd-ff12e80274e3c75a5de9cc79eead79f151353d45005e664ccc89c17c33619aac.scope: Deactivated successfully. Feb 8 23:40:56.504855 env[1342]: time="2024-02-08T23:40:56.504789990Z" level=warning msg="error from *cgroupsv2.Manager.EventChan" error="failed to add inotify watch for \"/sys/fs/cgroup/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f100900_ef26_4f9e_b28b_5b696af14f9d.slice/cri-containerd-ff12e80274e3c75a5de9cc79eead79f151353d45005e664ccc89c17c33619aac.scope/memory.events\": no such file or directory" Feb 8 23:40:56.508780 env[1342]: time="2024-02-08T23:40:56.508676198Z" level=info msg="StartContainer for \"ff12e80274e3c75a5de9cc79eead79f151353d45005e664ccc89c17c33619aac\" returns successfully" Feb 8 23:40:56.535386 env[1342]: time="2024-02-08T23:40:56.535340421Z" level=info msg="shim disconnected" id=ff12e80274e3c75a5de9cc79eead79f151353d45005e664ccc89c17c33619aac Feb 8 23:40:56.535620 env[1342]: time="2024-02-08T23:40:56.535397324Z" level=warning msg="cleaning up after shim disconnected" id=ff12e80274e3c75a5de9cc79eead79f151353d45005e664ccc89c17c33619aac namespace=k8s.io Feb 8 23:40:56.535620 env[1342]: time="2024-02-08T23:40:56.535412325Z" level=info msg="cleaning up dead shim" Feb 8 23:40:56.542072 env[1342]: time="2024-02-08T23:40:56.542040479Z" level=warning 
msg="cleanup warnings time=\"2024-02-08T23:40:56Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=3743 runtime=io.containerd.runc.v2\n" Feb 8 23:40:57.223043 kubelet[1896]: E0208 23:40:57.222996 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:57.420284 env[1342]: time="2024-02-08T23:40:57.420231796Z" level=info msg="CreateContainer within sandbox \"22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a\" for container &ContainerMetadata{Name:cilium-agent,Attempt:0,}" Feb 8 23:40:57.427212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ff12e80274e3c75a5de9cc79eead79f151353d45005e664ccc89c17c33619aac-rootfs.mount: Deactivated successfully. Feb 8 23:40:57.452529 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount233781762.mount: Deactivated successfully. Feb 8 23:40:57.467201 env[1342]: time="2024-02-08T23:40:57.467150371Z" level=info msg="CreateContainer within sandbox \"22663e88462d05144b16f3857d82536d6b2e8716cbf8e5bdbfba402c6e80fd7a\" for &ContainerMetadata{Name:cilium-agent,Attempt:0,} returns container id \"47f96b9318f3e39483fe5a59e75356bf0c970488f3b9994b9c9ca188480507c2\"" Feb 8 23:40:57.467803 env[1342]: time="2024-02-08T23:40:57.467765604Z" level=info msg="StartContainer for \"47f96b9318f3e39483fe5a59e75356bf0c970488f3b9994b9c9ca188480507c2\"" Feb 8 23:40:57.486194 systemd[1]: Started cri-containerd-47f96b9318f3e39483fe5a59e75356bf0c970488f3b9994b9c9ca188480507c2.scope. 
Feb 8 23:40:57.521610 env[1342]: time="2024-02-08T23:40:57.521557541Z" level=info msg="StartContainer for \"47f96b9318f3e39483fe5a59e75356bf0c970488f3b9994b9c9ca188480507c2\" returns successfully" Feb 8 23:40:57.846039 kernel: alg: No test for seqiv(rfc4106(gcm(aes))) (seqiv(rfc4106-gcm-aesni)) Feb 8 23:40:58.223522 kubelet[1896]: E0208 23:40:58.223308 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:40:58.444597 kubelet[1896]: I0208 23:40:58.444558 1896 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/cilium-x4mzl" podStartSLOduration=6.444520554 podCreationTimestamp="2024-02-08 23:40:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-02-08 23:40:58.444358746 +0000 UTC m=+83.538087233" watchObservedRunningTime="2024-02-08 23:40:58.444520554 +0000 UTC m=+83.538248941" Feb 8 23:40:58.580575 kubelet[1896]: I0208 23:40:58.580531 1896 setters.go:552] "Node became not ready" node="10.200.8.12" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2024-02-08T23:40:58Z","lastTransitionTime":"2024-02-08T23:40:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"} Feb 8 23:40:59.224372 kubelet[1896]: E0208 23:40:59.224339 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:00.225515 kubelet[1896]: E0208 23:41:00.225431 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:00.357608 systemd-networkd[1490]: lxc_health: Link UP Feb 8 23:41:00.373034 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): lxc_health: link becomes ready Feb 8 23:41:00.374672 systemd-networkd[1490]: 
lxc_health: Gained carrier Feb 8 23:41:01.225883 kubelet[1896]: E0208 23:41:01.225843 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:02.199254 systemd-networkd[1490]: lxc_health: Gained IPv6LL Feb 8 23:41:02.226394 kubelet[1896]: E0208 23:41:02.226361 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:03.227428 kubelet[1896]: E0208 23:41:03.227356 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:04.228211 kubelet[1896]: E0208 23:41:04.228160 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:04.950638 systemd[1]: run-containerd-runc-k8s.io-47f96b9318f3e39483fe5a59e75356bf0c970488f3b9994b9c9ca188480507c2-runc.yNXug5.mount: Deactivated successfully. Feb 8 23:41:05.229444 kubelet[1896]: E0208 23:41:05.229230 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:06.229738 kubelet[1896]: E0208 23:41:06.229679 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:07.230469 kubelet[1896]: E0208 23:41:07.230419 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:08.231453 kubelet[1896]: E0208 23:41:08.231380 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:09.231871 kubelet[1896]: E0208 23:41:09.231837 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:10.232613 kubelet[1896]: E0208 23:41:10.232556 1896 file_linux.go:61] "Unable to read config 
path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:11.233196 kubelet[1896]: E0208 23:41:11.233099 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:12.233598 kubelet[1896]: E0208 23:41:12.233536 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:13.234363 kubelet[1896]: E0208 23:41:13.234312 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:14.235196 kubelet[1896]: E0208 23:41:14.235087 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:15.166240 kubelet[1896]: E0208 23:41:15.166184 1896 file.go:104] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:15.235583 kubelet[1896]: E0208 23:41:15.235529 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:16.236532 kubelet[1896]: E0208 23:41:16.236474 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:17.237465 kubelet[1896]: E0208 23:41:17.237409 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:18.238521 kubelet[1896]: E0208 23:41:18.238474 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:19.239464 kubelet[1896]: E0208 23:41:19.239399 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:20.240114 kubelet[1896]: E0208 23:41:20.240057 1896 file_linux.go:61] "Unable to read config path" err="path does not 
exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:21.241172 kubelet[1896]: E0208 23:41:21.241133 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:22.242380 kubelet[1896]: E0208 23:41:22.242291 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:23.242487 kubelet[1896]: E0208 23:41:23.242432 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:24.242786 kubelet[1896]: E0208 23:41:24.242685 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:25.243617 kubelet[1896]: E0208 23:41:25.243581 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:26.244488 kubelet[1896]: E0208 23:41:26.244431 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:27.244835 kubelet[1896]: E0208 23:41:27.244779 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:28.245922 kubelet[1896]: E0208 23:41:28.245857 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:28.846312 systemd[1]: cri-containerd-3c9de6724d69dfddf8ff27e703136137f39148a60aaa3d4f7ae0904c4210e180.scope: Deactivated successfully. Feb 8 23:41:28.866244 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3c9de6724d69dfddf8ff27e703136137f39148a60aaa3d4f7ae0904c4210e180-rootfs.mount: Deactivated successfully. 
Feb 8 23:41:28.883711 env[1342]: time="2024-02-08T23:41:28.883656727Z" level=info msg="shim disconnected" id=3c9de6724d69dfddf8ff27e703136137f39148a60aaa3d4f7ae0904c4210e180 Feb 8 23:41:28.883711 env[1342]: time="2024-02-08T23:41:28.883711530Z" level=warning msg="cleaning up after shim disconnected" id=3c9de6724d69dfddf8ff27e703136137f39148a60aaa3d4f7ae0904c4210e180 namespace=k8s.io Feb 8 23:41:28.884263 env[1342]: time="2024-02-08T23:41:28.883723530Z" level=info msg="cleaning up dead shim" Feb 8 23:41:28.891773 env[1342]: time="2024-02-08T23:41:28.891728754Z" level=warning msg="cleanup warnings time=\"2024-02-08T23:41:28Z\" level=info msg=\"starting signal loop\" namespace=k8s.io pid=4436 runtime=io.containerd.runc.v2\n" Feb 8 23:41:28.996782 kubelet[1896]: E0208 23:41:28.996731 1896 kubelet_node_status.go:540] "Error updating node status, will retry" err="error getting node \"10.200.8.12\": Get \"https://10.200.8.36:6443/api/v1/nodes/10.200.8.12?resourceVersion=0&timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Feb 8 23:41:29.246780 kubelet[1896]: E0208 23:41:29.246656 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:29.425127 kubelet[1896]: E0208 23:41:29.425081 1896 controller.go:193] "Failed to update lease" err="Put \"https://10.200.8.36:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/10.200.8.12?timeout=10s\": context deadline exceeded" Feb 8 23:41:29.487791 kubelet[1896]: I0208 23:41:29.487745 1896 scope.go:117] "RemoveContainer" containerID="3c9de6724d69dfddf8ff27e703136137f39148a60aaa3d4f7ae0904c4210e180" Feb 8 23:41:29.490032 env[1342]: time="2024-02-08T23:41:29.489964648Z" level=info msg="CreateContainer within sandbox \"03ad23f0bbf2b088eca578bf4fb7484a6af59688cead3a6cf18dff04df2bf59c\" for container &ContainerMetadata{Name:cilium-operator,Attempt:1,}" Feb 8 23:41:29.512672 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount2570412045.mount: Deactivated successfully. Feb 8 23:41:29.519085 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2080594512.mount: Deactivated successfully. Feb 8 23:41:29.530969 env[1342]: time="2024-02-08T23:41:29.530929596Z" level=info msg="CreateContainer within sandbox \"03ad23f0bbf2b088eca578bf4fb7484a6af59688cead3a6cf18dff04df2bf59c\" for &ContainerMetadata{Name:cilium-operator,Attempt:1,} returns container id \"1a41aa32ec2c32ae046d5bec4361fbba1951aeb4ef2d8e2c28f877a74b26b45a\"" Feb 8 23:41:29.531380 env[1342]: time="2024-02-08T23:41:29.531354414Z" level=info msg="StartContainer for \"1a41aa32ec2c32ae046d5bec4361fbba1951aeb4ef2d8e2c28f877a74b26b45a\"" Feb 8 23:41:29.546590 systemd[1]: Started cri-containerd-1a41aa32ec2c32ae046d5bec4361fbba1951aeb4ef2d8e2c28f877a74b26b45a.scope. Feb 8 23:41:29.580630 env[1342]: time="2024-02-08T23:41:29.580589394Z" level=info msg="StartContainer for \"1a41aa32ec2c32ae046d5bec4361fbba1951aeb4ef2d8e2c28f877a74b26b45a\" returns successfully" Feb 8 23:41:30.247892 kubelet[1896]: E0208 23:41:30.247835 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:31.248036 kubelet[1896]: E0208 23:41:31.247960 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:31.971041 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:31.983851 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:31.996711 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.010123 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.024171 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.038241 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.038520 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.049149 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.049530 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.062302 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.062931 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.076291 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.078328 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.089525 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.089728 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.102919 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.109257 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.122689 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:32.135308 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 
Feb 8 23:41:32.141523 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
[previous hv_storvsc message repeated for tag#104 and tag#302 through Feb 8 23:41:32.238436; duplicate entries elided]
Feb 8 23:41:32.248912 kubelet[1896]: E0208 23:41:32.248812 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
[hv_storvsc message repeated for tag#104 and tag#302 through Feb 8 23:41:33.235158; duplicate entries elided]
Feb 8 23:41:33.249108 kubelet[1896]: E0208 23:41:33.249054 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests"
[hv_storvsc message repeated for tag#104 and tag#302 through Feb 8 23:41:33.651373; duplicate entries elided]
Feb 8 23:41:33.651827 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8
23:41:33.651988 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.656653 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.665922 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.666146 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.675445 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.675634 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.684969 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.685177 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.694731 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.694921 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.709448 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.709809 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.714786 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.714978 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.724390 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 8 23:41:33.724585 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.734028 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.734215 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.743887 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.744087 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.753453 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.753646 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.768057 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.768387 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.773355 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.773572 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.783256 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.783456 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.792660 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.792848 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.802549 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.802749 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.812048 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.812243 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.826456 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.826840 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.832030 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.832240 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.841493 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.841660 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.851316 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.851514 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.860771 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.860969 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.870291 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.870483 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.884742 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.885100 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.885271 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.894750 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.913533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.932544 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.932690 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.932819 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.932927 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.933060 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.933182 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.933313 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.933444 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 
23:41:33.937385 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.947590 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.947791 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.957694 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.957898 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.967325 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.967520 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.972030 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.981302 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.991107 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.991251 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.991394 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:33.995492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.005618 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.005818 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 8 23:41:34.014969 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.015170 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.024314 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.024505 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.033512 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.033700 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.042819 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.043046 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.052139 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.052337 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.062335 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.062536 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.072240 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.072436 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.086779 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 
cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.163131 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.163281 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.163436 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.163580 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#107 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.163717 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#108 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.163847 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.163981 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#110 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.164127 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.164256 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.164386 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#110 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.164518 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.164639 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#108 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.164765 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.164890 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#107 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.165032 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.165165 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.165287 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.165415 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.167705 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.177170 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#107 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.177384 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#108 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.181895 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.186648 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.195977 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#110 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.196187 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.210984 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#110 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.211183 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.211322 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.220158 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#108 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 
23:41:34.220347 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#107 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.229454 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.229651 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.238813 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.239033 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.243737 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.248392 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#107 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.249943 kubelet[1896]: E0208 23:41:34.249895 1896 file_linux.go:61] "Unable to read config path" err="path does not exist, ignoring" path="/etc/kubernetes/manifests" Feb 8 23:41:34.253217 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#108 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.262637 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.262897 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.271559 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#110 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.271766 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.281912 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#110 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.282182 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.286533 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#108 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.291505 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#107 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.300693 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.300893 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.310177 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.310376 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.324437 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.324647 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#110 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.324788 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.333684 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#108 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.333872 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#107 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.342863 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.343071 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.352191 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 
23:41:34.357574 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#110 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.357776 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.366767 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.366974 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#108 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.376230 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#107 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.376427 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.385740 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.385935 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.394892 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#110 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.395101 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#302 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.404275 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#109 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.404469 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#108 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.413325 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#107 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.413515 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#106 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Feb 8 23:41:34.422444 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#105 cmd 0x2a status: scsi 0x2 
srb 0x4 hv 0xc0000001 Feb 8 23:41:34.422652 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#104 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001