Jul 2 07:02:49.004603 kernel: Linux version 6.1.96-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 13.2.1_p20230826 p7) 13.2.1 20230826, GNU ld (Gentoo 2.40 p5) 2.40.0) #1 SMP PREEMPT_DYNAMIC Mon Jul 1 23:29:55 -00 2024 Jul 2 07:02:49.004642 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 07:02:49.004656 kernel: BIOS-provided physical RAM map: Jul 2 07:02:49.004666 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable Jul 2 07:02:49.004676 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved Jul 2 07:02:49.004699 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable Jul 2 07:02:49.004713 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ff70fff] type 20 Jul 2 07:02:49.004730 kernel: BIOS-e820: [mem 0x000000003ff71000-0x000000003ffc8fff] reserved Jul 2 07:02:49.004742 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data Jul 2 07:02:49.004752 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS Jul 2 07:02:49.004763 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable Jul 2 07:02:49.004774 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable Jul 2 07:02:49.004784 kernel: printk: bootconsole [earlyser0] enabled Jul 2 07:02:49.004794 kernel: NX (Execute Disable) protection: active Jul 2 07:02:49.004810 kernel: efi: EFI v2.70 by Microsoft Jul 2 07:02:49.004822 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c8a98 Jul 2 07:02:49.004835 kernel: SMBIOS 3.1.0 present. 
Jul 2 07:02:49.004847 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 Jul 2 07:02:49.004860 kernel: Hypervisor detected: Microsoft Hyper-V Jul 2 07:02:49.004871 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 Jul 2 07:02:49.004884 kernel: Hyper-V: Host Build 10.0.20348.1633-1-0 Jul 2 07:02:49.004897 kernel: Hyper-V: Nested features: 0x1e0101 Jul 2 07:02:49.004909 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 Jul 2 07:02:49.004921 kernel: Hyper-V: Using hypercall for remote TLB flush Jul 2 07:02:49.004936 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns Jul 2 07:02:49.004947 kernel: tsc: Marking TSC unstable due to running on Hyper-V Jul 2 07:02:49.004958 kernel: tsc: Detected 2593.905 MHz processor Jul 2 07:02:49.004970 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Jul 2 07:02:49.004982 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Jul 2 07:02:49.004995 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 Jul 2 07:02:49.005006 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Jul 2 07:02:49.005018 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved Jul 2 07:02:49.005031 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 Jul 2 07:02:49.005046 kernel: Using GB pages for direct mapping Jul 2 07:02:49.005058 kernel: Secure boot disabled Jul 2 07:02:49.005070 kernel: ACPI: Early table checksum verification disabled Jul 2 07:02:49.005083 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) Jul 2 07:02:49.005094 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:02:49.005108 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:02:49.005121 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) Jul 2 07:02:49.005141 kernel: ACPI: FACS 0x000000003FFFE000 000040 Jul 2 07:02:49.005162 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:02:49.005176 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:02:49.005189 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:02:49.005202 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:02:49.005216 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:02:49.005229 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:02:49.005247 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) Jul 2 07:02:49.005261 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] Jul 2 07:02:49.005274 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] Jul 2 07:02:49.005288 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] Jul 2 07:02:49.005304 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] Jul 2 07:02:49.005317 kernel: ACPI: Reserving SPCR table memory at [mem 0x3fff6000-0x3fff604f] Jul 2 07:02:49.005330 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] Jul 2 07:02:49.005343 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] Jul 2 07:02:49.005362 kernel: ACPI: 
Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] Jul 2 07:02:49.005377 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] Jul 2 07:02:49.005389 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] Jul 2 07:02:49.005402 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 Jul 2 07:02:49.005415 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 Jul 2 07:02:49.005429 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug Jul 2 07:02:49.005443 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug Jul 2 07:02:49.005456 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug Jul 2 07:02:49.005470 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug Jul 2 07:02:49.005486 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug Jul 2 07:02:49.005500 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug Jul 2 07:02:49.005513 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug Jul 2 07:02:49.005526 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug Jul 2 07:02:49.005540 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug Jul 2 07:02:49.005553 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug Jul 2 07:02:49.005565 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug Jul 2 07:02:49.005578 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug Jul 2 07:02:49.005591 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug Jul 2 07:02:49.005610 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug Jul 2 07:02:49.005623 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug Jul 2 07:02:49.005636 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug Jul 2 07:02:49.005649 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] Jul 2 07:02:49.005662 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] Jul 2 07:02:49.005676 kernel: Zone ranges: Jul 2 07:02:49.005698 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Jul 2 07:02:49.005709 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Jul 2 07:02:49.005720 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] Jul 2 07:02:49.005733 kernel: Movable zone start for each node Jul 2 07:02:49.005744 kernel: Early memory node ranges Jul 2 07:02:49.005754 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] Jul 2 07:02:49.005765 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] Jul 2 07:02:49.005776 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] Jul 2 07:02:49.005787 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] Jul 2 07:02:49.005799 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] Jul 2 07:02:49.005812 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Jul 2 07:02:49.005824 kernel: On node 0, zone DMA: 96 pages in unavailable ranges Jul 2 07:02:49.005840 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges Jul 2 07:02:49.005853 kernel: ACPI: PM-Timer IO Port: 0x408 Jul 2 07:02:49.005864 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) Jul 2 07:02:49.005876 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 Jul 2 07:02:49.005888 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Jul 
2 07:02:49.005901 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Jul 2 07:02:49.005913 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 Jul 2 07:02:49.005925 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs Jul 2 07:02:49.005953 kernel: [mem 0x40000000-0xffffffff] available for PCI devices Jul 2 07:02:49.005964 kernel: Booting paravirtualized kernel on Hyper-V Jul 2 07:02:49.005974 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Jul 2 07:02:49.005981 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Jul 2 07:02:49.005988 kernel: percpu: Embedded 57 pages/cpu s194792 r8192 d30488 u1048576 Jul 2 07:02:49.005995 kernel: pcpu-alloc: s194792 r8192 d30488 u1048576 alloc=1*2097152 Jul 2 07:02:49.006002 kernel: pcpu-alloc: [0] 0 1 Jul 2 07:02:49.006009 kernel: Hyper-V: PV spinlocks enabled Jul 2 07:02:49.006016 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Jul 2 07:02:49.006023 kernel: Fallback order for Node 0: 0 Jul 2 07:02:49.006031 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 Jul 2 07:02:49.006038 kernel: Policy zone: Normal Jul 2 07:02:49.006046 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 07:02:49.006054 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. Jul 2 07:02:49.006061 kernel: random: crng init done Jul 2 07:02:49.006071 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jul 2 07:02:49.006078 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Jul 2 07:02:49.006088 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jul 2 07:02:49.006097 kernel: software IO TLB: area num 2. Jul 2 07:02:49.006114 kernel: Memory: 8072996K/8387460K available (12293K kernel code, 2301K rwdata, 19992K rodata, 47156K init, 4308K bss, 314204K reserved, 0K cma-reserved) Jul 2 07:02:49.006126 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Jul 2 07:02:49.006134 kernel: ftrace: allocating 36081 entries in 141 pages Jul 2 07:02:49.006144 kernel: ftrace: allocated 141 pages with 4 groups Jul 2 07:02:49.006158 kernel: Dynamic Preempt: voluntary Jul 2 07:02:49.006167 kernel: rcu: Preemptible hierarchical RCU implementation. Jul 2 07:02:49.006175 kernel: rcu: RCU event tracing is enabled. Jul 2 07:02:49.006183 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Jul 2 07:02:49.006190 kernel: Trampoline variant of Tasks RCU enabled. Jul 2 07:02:49.006198 kernel: Rude variant of Tasks RCU enabled. Jul 2 07:02:49.006207 kernel: Tracing variant of Tasks RCU enabled. Jul 2 07:02:49.006217 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jul 2 07:02:49.006225 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Jul 2 07:02:49.006232 kernel: Using NULL legacy PIC Jul 2 07:02:49.006242 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 Jul 2 07:02:49.006253 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. 
Jul 2 07:02:49.006262 kernel: Console: colour dummy device 80x25 Jul 2 07:02:49.006271 kernel: printk: console [tty1] enabled Jul 2 07:02:49.006278 kernel: printk: console [ttyS0] enabled Jul 2 07:02:49.006288 kernel: printk: bootconsole [earlyser0] disabled Jul 2 07:02:49.006296 kernel: ACPI: Core revision 20220331 Jul 2 07:02:49.006303 kernel: Failed to register legacy timer interrupt Jul 2 07:02:49.006313 kernel: APIC: Switch to symmetric I/O mode setup Jul 2 07:02:49.006321 kernel: Hyper-V: enabling crash_kexec_post_notifiers Jul 2 07:02:49.006330 kernel: Hyper-V: Using IPI hypercalls Jul 2 07:02:49.006341 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) Jul 2 07:02:49.006348 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 Jul 2 07:02:49.006359 kernel: Last level dTLB entries: 4KB 64, 2MB 0, 4MB 0, 1GB 4 Jul 2 07:02:49.006366 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Jul 2 07:02:49.006374 kernel: Spectre V2 : Mitigation: Retpolines Jul 2 07:02:49.006384 kernel: Spectre V2 : Spectre v2 / SpectreRSB mitigation: Filling RSB on context switch Jul 2 07:02:49.006391 kernel: Spectre V2 : Spectre v2 / SpectreRSB : Filling RSB on VMEXIT Jul 2 07:02:49.006400 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! Jul 2 07:02:49.006409 kernel: RETBleed: Vulnerable Jul 2 07:02:49.006418 kernel: Speculative Store Bypass: Vulnerable Jul 2 07:02:49.006428 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 07:02:49.006436 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode Jul 2 07:02:49.006443 kernel: GDS: Unknown: Dependent on hypervisor status Jul 2 07:02:49.006453 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Jul 2 07:02:49.006460 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Jul 2 07:02:49.006469 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Jul 2 07:02:49.006478 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' Jul 2 07:02:49.006487 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' Jul 2 07:02:49.006495 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' Jul 2 07:02:49.006502 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Jul 2 07:02:49.006514 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 Jul 2 07:02:49.006521 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 Jul 2 07:02:49.006529 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 Jul 2 07:02:49.006539 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. Jul 2 07:02:49.006546 kernel: Freeing SMP alternatives memory: 32K Jul 2 07:02:49.006555 kernel: pid_max: default: 32768 minimum: 301 Jul 2 07:02:49.006563 kernel: LSM: Security Framework initializing Jul 2 07:02:49.006570 kernel: SELinux: Initializing. Jul 2 07:02:49.006580 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:02:49.006588 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) Jul 2 07:02:49.006596 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) Jul 2 07:02:49.006606 kernel: cblist_init_generic: Setting adjustable number of callback queues. 
Jul 2 07:02:49.006615 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jul 2 07:02:49.006625 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 07:02:49.006633 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jul 2 07:02:49.006640 kernel: cblist_init_generic: Setting adjustable number of callback queues. Jul 2 07:02:49.006650 kernel: cblist_init_generic: Setting shift to 1 and lim to 1. Jul 2 07:02:49.006658 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. Jul 2 07:02:49.006665 kernel: signal: max sigframe size: 3632 Jul 2 07:02:49.006675 kernel: rcu: Hierarchical SRCU implementation. Jul 2 07:02:49.006682 kernel: rcu: Max phase no-delay instances is 400. Jul 2 07:02:49.006743 kernel: NMI watchdog: Perf NMI watchdog permanently disabled Jul 2 07:02:49.006753 kernel: smp: Bringing up secondary CPUs ... Jul 2 07:02:49.006762 kernel: x86: Booting SMP configuration: Jul 2 07:02:49.006771 kernel: .... node #0, CPUs: #1 Jul 2 07:02:49.006779 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. Jul 2 07:02:49.006790 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. Jul 2 07:02:49.006797 kernel: smp: Brought up 1 node, 2 CPUs Jul 2 07:02:49.006805 kernel: smpboot: Max logical packages: 1 Jul 2 07:02:49.006815 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) Jul 2 07:02:49.006824 kernel: devtmpfs: initialized Jul 2 07:02:49.006835 kernel: x86/mm: Memory block size: 128MB Jul 2 07:02:49.006842 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) Jul 2 07:02:49.006850 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jul 2 07:02:49.006860 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Jul 2 07:02:49.006869 kernel: pinctrl core: initialized pinctrl subsystem Jul 2 07:02:49.006878 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jul 2 07:02:49.006886 kernel: audit: initializing netlink subsys (disabled) Jul 2 07:02:49.006895 kernel: audit: type=2000 audit(1719903767.030:1): state=initialized audit_enabled=0 res=1 Jul 2 07:02:49.006905 kernel: thermal_sys: Registered thermal governor 'step_wise' Jul 2 07:02:49.006913 kernel: thermal_sys: Registered thermal governor 'user_space' Jul 2 07:02:49.006923 kernel: cpuidle: using governor menu Jul 2 07:02:49.006931 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jul 2 07:02:49.006940 kernel: dca service started, version 1.12.1 Jul 2 07:02:49.006949 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] Jul 2 07:02:49.006956 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Jul 2 07:02:49.006967 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jul 2 07:02:49.006974 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Jul 2 07:02:49.006986 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jul 2 07:02:49.006994 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Jul 2 07:02:49.007002 kernel: ACPI: Added _OSI(Module Device) Jul 2 07:02:49.007012 kernel: ACPI: Added _OSI(Processor Device) Jul 2 07:02:49.007020 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) Jul 2 07:02:49.007028 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jul 2 07:02:49.007038 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jul 2 07:02:49.007045 kernel: ACPI: Interpreter enabled Jul 2 07:02:49.007054 kernel: ACPI: PM: (supports S0 S5) Jul 2 07:02:49.007065 kernel: ACPI: Using IOAPIC for interrupt routing Jul 2 07:02:49.007073 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Jul 2 07:02:49.007083 kernel: PCI: Ignoring E820 reservations for host bridge windows Jul 2 07:02:49.007090 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F Jul 2 07:02:49.007099 kernel: iommu: Default domain type: Translated Jul 2 07:02:49.007108 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Jul 2 07:02:49.007116 kernel: pps_core: LinuxPPS API ver. 1 registered Jul 2 07:02:49.007125 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jul 2 07:02:49.007135 kernel: PTP clock support registered Jul 2 07:02:49.007145 kernel: Registered efivars operations Jul 2 07:02:49.007154 kernel: PCI: Using ACPI for IRQ routing Jul 2 07:02:49.007161 kernel: PCI: System does not support PCI Jul 2 07:02:49.007171 kernel: vgaarb: loaded Jul 2 07:02:49.007179 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page Jul 2 07:02:49.007187 kernel: VFS: Disk quotas dquot_6.6.0 Jul 2 07:02:49.007197 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jul 2 07:02:49.007204 kernel: pnp: PnP ACPI init Jul 2 07:02:49.007212 kernel: pnp: PnP ACPI: found 3 devices Jul 2 07:02:49.007224 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Jul 2 07:02:49.007231 kernel: NET: Registered PF_INET protocol family Jul 2 07:02:49.007242 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jul 2 07:02:49.007250 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) Jul 2 07:02:49.007261 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jul 2 07:02:49.007268 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) Jul 2 07:02:49.007277 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jul 2 07:02:49.007286 kernel: TCP: Hash tables configured (established 65536 bind 65536) Jul 2 07:02:49.007293 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 07:02:49.007306 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) Jul 2 07:02:49.007313 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jul 2 07:02:49.007321 kernel: NET: Registered PF_XDP protocol family Jul 2 07:02:49.007331 kernel: PCI: CLS 0 bytes, default 64 Jul 2 07:02:49.007338 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Jul 2 07:02:49.007347 kernel: software IO TLB: mapped [mem 0x000000003b5c8000-0x000000003f5c8000] (64MB) Jul 2 07:02:49.007356 
kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer Jul 2 07:02:49.007364 kernel: Initialise system trusted keyrings Jul 2 07:02:49.007374 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 Jul 2 07:02:49.007383 kernel: Key type asymmetric registered Jul 2 07:02:49.007391 kernel: Asymmetric key parser 'x509' registered Jul 2 07:02:49.007401 kernel: alg: self-tests for CTR-KDF (hmac(sha256)) passed Jul 2 07:02:49.007409 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jul 2 07:02:49.007417 kernel: io scheduler mq-deadline registered Jul 2 07:02:49.007427 kernel: io scheduler kyber registered Jul 2 07:02:49.007434 kernel: io scheduler bfq registered Jul 2 07:02:49.007443 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Jul 2 07:02:49.007452 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jul 2 07:02:49.007461 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Jul 2 07:02:49.007471 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A Jul 2 07:02:49.007479 kernel: i8042: PNP: No PS/2 controller found. Jul 2 07:02:49.007603 kernel: rtc_cmos 00:02: registered as rtc0 Jul 2 07:02:49.007732 kernel: rtc_cmos 00:02: setting system clock to 2024-07-02T07:02:48 UTC (1719903768) Jul 2 07:02:49.007877 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram Jul 2 07:02:49.007894 kernel: fail to initialize ptp_kvm Jul 2 07:02:49.007912 kernel: intel_pstate: CPU model not supported Jul 2 07:02:49.007927 kernel: efifb: probing for efifb Jul 2 07:02:49.007941 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k Jul 2 07:02:49.007957 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 Jul 2 07:02:49.007973 kernel: efifb: scrolling: redraw Jul 2 07:02:49.007988 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 Jul 2 07:02:49.008003 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 07:02:49.008017 kernel: fb0: EFI VGA frame buffer device Jul 2 07:02:49.008033 kernel: pstore: Registered efi as persistent store backend Jul 2 07:02:49.008051 kernel: NET: Registered PF_INET6 protocol family Jul 2 07:02:49.008068 kernel: Segment Routing with IPv6 Jul 2 07:02:49.008083 kernel: In-situ OAM (IOAM) with IPv6 Jul 2 07:02:49.008097 kernel: NET: Registered PF_PACKET protocol family Jul 2 07:02:49.008111 kernel: Key type dns_resolver registered Jul 2 07:02:49.008125 kernel: IPI shorthand broadcast: enabled Jul 2 07:02:49.008138 kernel: sched_clock: Marking stable (847875500, 24955400)->(1069618800, -196787900) Jul 2 07:02:49.008152 kernel: registered taskstats version 1 Jul 2 07:02:49.008167 kernel: Loading compiled-in X.509 certificates Jul 2 07:02:49.008181 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.1.96-flatcar: ad4c54fcfdf0a10b17828c4377e868762dc43797' Jul 2 07:02:49.008199 kernel: Key type .fscrypt registered Jul 2 07:02:49.008214 kernel: Key type fscrypt-provisioning registered Jul 2 07:02:49.008229 kernel: pstore: Using crash dump compression: deflate Jul 2 07:02:49.008243 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jul 2 07:02:49.008257 kernel: ima: Allocated hash algorithm: sha1 Jul 2 07:02:49.008271 kernel: ima: No architecture policies found Jul 2 07:02:49.008284 kernel: clk: Disabling unused clocks Jul 2 07:02:49.008298 kernel: Freeing unused kernel image (initmem) memory: 47156K Jul 2 07:02:49.008317 kernel: Write protecting the kernel read-only data: 34816k Jul 2 07:02:49.008331 kernel: Freeing unused kernel image (text/rodata gap) memory: 2040K Jul 2 07:02:49.008345 kernel: Freeing unused kernel image (rodata/data gap) memory: 488K Jul 2 07:02:49.008359 kernel: Run /init as init process Jul 2 07:02:49.008373 kernel: with arguments: Jul 2 07:02:49.008383 kernel: /init Jul 2 07:02:49.008408 kernel: with environment: Jul 2 07:02:49.008419 kernel: HOME=/ Jul 2 07:02:49.008432 kernel: TERM=linux Jul 2 07:02:49.008444 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Jul 2 07:02:49.008457 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:02:49.008472 systemd[1]: Detected virtualization microsoft. Jul 2 07:02:49.008480 systemd[1]: Detected architecture x86-64. Jul 2 07:02:49.008491 systemd[1]: Running in initrd. Jul 2 07:02:49.008499 systemd[1]: No hostname configured, using default hostname. Jul 2 07:02:49.008508 systemd[1]: Hostname set to . Jul 2 07:02:49.008519 systemd[1]: Initializing machine ID from random generator. Jul 2 07:02:49.008530 systemd[1]: Queued start job for default target initrd.target. Jul 2 07:02:49.008540 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 07:02:49.008550 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 07:02:49.008558 systemd[1]: Reached target paths.target - Path Units. Jul 2 07:02:49.008567 systemd[1]: Reached target slices.target - Slice Units. Jul 2 07:02:49.008576 systemd[1]: Reached target swap.target - Swaps. Jul 2 07:02:49.008583 systemd[1]: Reached target timers.target - Timer Units. Jul 2 07:02:49.008597 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 07:02:49.008605 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 07:02:49.008615 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jul 2 07:02:49.008624 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jul 2 07:02:49.008632 systemd[1]: Listening on systemd-journald.socket - Journal Socket. Jul 2 07:02:49.008642 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 07:02:49.008650 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 07:02:49.008660 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 07:02:49.008672 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 07:02:49.008682 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 07:02:49.008704 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jul 2 07:02:49.008713 systemd[1]: Starting systemd-fsck-usr.service... Jul 2 07:02:49.008721 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 07:02:49.008732 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Jul 2 07:02:49.008740 systemd[1]: Starting systemd-vconsole-setup.service - Setup Virtual Console... Jul 2 07:02:49.008749 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 07:02:49.008759 kernel: audit: type=1130 audit(1719903768.997:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.008769 systemd[1]: Finished systemd-fsck-usr.service. Jul 2 07:02:49.008784 systemd-journald[178]: Journal started Jul 2 07:02:49.008828 systemd-journald[178]: Runtime Journal (/run/log/journal/f3383a8cb34847fcb7feaec05c9c6117) is 8.0M, max 158.8M, 150.8M free. Jul 2 07:02:48.997000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.016698 kernel: audit: type=1130 audit(1719903769.009:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.020910 systemd-modules-load[179]: Inserted module 'overlay' Jul 2 07:02:49.023614 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 07:02:49.026653 systemd[1]: Finished systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 07:02:49.025000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.048075 kernel: audit: type=1130 audit(1719903769.025:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.048124 kernel: audit: type=1130 audit(1719903769.028:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.048863 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jul 2 07:02:49.052489 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jul 2 07:02:49.063210 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 07:02:49.073877 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jul 2 07:02:49.077117 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 07:02:49.091995 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. 
Jul 2 07:02:49.092031 kernel: audit: type=1130 audit(1719903769.076:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.101410 kernel: Bridge firewalling registered Jul 2 07:02:49.101440 kernel: audit: type=1130 audit(1719903769.076:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.076000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.099137 systemd-modules-load[179]: Inserted module 'br_netfilter' Jul 2 07:02:49.105870 kernel: audit: type=1334 audit(1719903769.077:8): prog-id=6 op=LOAD Jul 2 07:02:49.077000 audit: BPF prog-id=6 op=LOAD Jul 2 07:02:49.104770 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 07:02:49.112479 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 07:02:49.116786 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jul 2 07:02:49.114000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.133739 kernel: audit: type=1130 audit(1719903769.114:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.156704 kernel: SCSI subsystem initialized Jul 2 07:02:49.158119 dracut-cmdline[201]: dracut-dracut-053 Jul 2 07:02:49.164832 dracut-cmdline[201]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=5c215d2523556d4992ba36684815e8e6fad1d468795f4ed0868a855d0b76a607 Jul 2 07:02:49.185700 systemd-resolved[195]: Positive Trust Anchors: Jul 2 07:02:49.195648 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Jul 2 07:02:49.195681 kernel: device-mapper: uevent: version 1.0.3 Jul 2 07:02:49.195714 kernel: device-mapper: ioctl: 4.47.0-ioctl (2022-07-28) initialised: dm-devel@redhat.com Jul 2 07:02:49.185711 systemd-resolved[195]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:02:49.185745 systemd-resolved[195]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:02:49.189143 systemd-resolved[195]: Defaulting to hostname 'linux'. Jul 2 07:02:49.196490 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 07:02:49.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.222087 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 07:02:49.233483 kernel: audit: type=1130 audit(1719903769.221:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.237750 systemd-modules-load[179]: Inserted module 'dm_multipath' Jul 2 07:02:49.238755 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 07:02:49.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.247882 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 07:02:49.265000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.262647 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 07:02:49.303709 kernel: Loading iSCSI transport class v2.0-870. Jul 2 07:02:49.316709 kernel: iscsi: registered transport (tcp) Jul 2 07:02:49.340225 kernel: iscsi: registered transport (qla4xxx) Jul 2 07:02:49.340280 kernel: QLogic iSCSI HBA Driver Jul 2 07:02:49.373910 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jul 2 07:02:49.379000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.385872 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jul 2 07:02:49.451720 kernel: raid6: avx512x4 gen() 18199 MB/s Jul 2 07:02:49.470703 kernel: raid6: avx512x2 gen() 18235 MB/s Jul 2 07:02:49.489700 kernel: raid6: avx512x1 gen() 18310 MB/s Jul 2 07:02:49.508703 kernel: raid6: avx2x4 gen() 18145 MB/s Jul 2 07:02:49.527700 kernel: raid6: avx2x2 gen() 18292 MB/s Jul 2 07:02:49.547930 kernel: raid6: avx2x1 gen() 13954 MB/s Jul 2 07:02:49.547956 kernel: raid6: using algorithm avx512x1 gen() 18310 MB/s Jul 2 07:02:49.569025 kernel: raid6: .... 
xor() 26645 MB/s, rmw enabled Jul 2 07:02:49.569060 kernel: raid6: using avx512x2 recovery algorithm Jul 2 07:02:49.574716 kernel: xor: automatically using best checksumming function avx Jul 2 07:02:49.714720 kernel: Btrfs loaded, crc32c=crc32c-intel, zoned=no, fsverity=no Jul 2 07:02:49.723634 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jul 2 07:02:49.726000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.726000 audit: BPF prog-id=7 op=LOAD Jul 2 07:02:49.726000 audit: BPF prog-id=8 op=LOAD Jul 2 07:02:49.730912 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 07:02:49.757115 systemd-udevd[383]: Using default interface naming scheme 'v252'. Jul 2 07:02:49.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.761738 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 07:02:49.766058 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jul 2 07:02:49.789011 dracut-pre-trigger[398]: rd.md=0: removing MD RAID activation Jul 2 07:02:49.818293 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 07:02:49.821000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.826915 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 07:02:49.862231 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 07:02:49.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:49.910706 kernel: cryptd: max_cpu_qlen set to 1000 Jul 2 07:02:49.927707 kernel: hv_vmbus: Vmbus version:5.2 Jul 2 07:02:49.949718 kernel: hv_vmbus: registering driver hyperv_keyboard Jul 2 07:02:49.952896 kernel: hid: raw HID events driver (C) Jiri Kosina Jul 2 07:02:49.959805 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 Jul 2 07:02:49.962715 kernel: hv_vmbus: registering driver hid_hyperv Jul 2 07:02:49.962748 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 Jul 2 07:02:49.962765 kernel: hid 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on Jul 2 07:02:49.967706 kernel: AVX2 version of gcm_enc/dec engaged. 
Jul 2 07:02:49.967739 kernel: AES CTR mode by8 optimization enabled Jul 2 07:02:49.974719 kernel: hv_vmbus: registering driver hv_storvsc Jul 2 07:02:49.984841 kernel: hv_vmbus: registering driver hv_netvsc Jul 2 07:02:49.984869 kernel: scsi host1: storvsc_host_t Jul 2 07:02:49.996284 kernel: scsi host0: storvsc_host_t Jul 2 07:02:50.001787 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 Jul 2 07:02:50.007709 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 Jul 2 07:02:50.025490 kernel: sr 0:0:0:2: [sr0] scsi-1 drive Jul 2 07:02:50.035126 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jul 2 07:02:50.035148 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 Jul 2 07:02:50.045704 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) Jul 2 07:02:50.058666 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks Jul 2 07:02:50.058885 kernel: sd 0:0:0:0: [sda] Write Protect is off Jul 2 07:02:50.059057 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 Jul 2 07:02:50.059217 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA Jul 2 07:02:50.059380 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:02:50.059397 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Jul 2 07:02:50.167034 kernel: hv_netvsc 000d3a69-55c9-000d-3a69-55c9000d3a69 eth0: VF slot 1 added Jul 2 07:02:50.176723 kernel: hv_vmbus: registering driver hv_pci Jul 2 07:02:50.176775 kernel: hv_pci 8ee59cac-5551-454c-925c-ea77c8ba4c02: PCI VMBus probing: Using version 0x10004 Jul 2 07:02:50.223375 kernel: hv_pci 8ee59cac-5551-454c-925c-ea77c8ba4c02: PCI host bridge to bus 5551:00 Jul 2 07:02:50.223543 kernel: pci_bus 5551:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] Jul 2 07:02:50.223730 kernel: pci_bus 5551:00: No busn resource found for root bus, will use [bus 00-ff] Jul 2 07:02:50.223893 kernel: pci 5551:00:02.0: [15b3:1016] type 00 class 0x020000 Jul 2 07:02:50.224077 kernel: pci 5551:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 2 07:02:50.224245 kernel: pci 5551:00:02.0: enabling Extended Tags Jul 2 07:02:50.224407 kernel: pci 5551:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at 5551:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) Jul 2 07:02:50.224560 kernel: pci_bus 5551:00: busn_res: [bus 00-ff] end is updated to 00 Jul 2 07:02:50.224721 kernel: pci 5551:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] Jul 2 07:02:50.388225 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. 
Jul 2 07:02:50.403254 kernel: mlx5_core 5551:00:02.0: enabling device (0000 -> 0002) Jul 2 07:02:50.671052 kernel: mlx5_core 5551:00:02.0: firmware version: 14.30.1284 Jul 2 07:02:50.671245 kernel: BTRFS: device label OEM devid 1 transid 13 /dev/sda6 scanned by (udev-worker) (439) Jul 2 07:02:50.671267 kernel: BTRFS: device fsid 1fca1e64-eeea-4360-9664-a9b6b3a60b6f devid 1 transid 35 /dev/sda3 scanned by (udev-worker) (447) Jul 2 07:02:50.671285 kernel: mlx5_core 5551:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0) Jul 2 07:02:50.671448 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:02:50.671467 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:02:50.671489 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:02:50.671505 kernel: mlx5_core 5551:00:02.0: Supported tc offload range - chains: 1, prios: 1 Jul 2 07:02:50.671663 kernel: hv_netvsc 000d3a69-55c9-000d-3a69-55c9000d3a69 eth0: VF registering: eth1 Jul 2 07:02:50.671831 kernel: mlx5_core 5551:00:02.0 eth1: joined to eth0 Jul 2 07:02:50.454602 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 2 07:02:50.564031 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. Jul 2 07:02:50.575392 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. Jul 2 07:02:50.581570 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. Jul 2 07:02:50.588869 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jul 2 07:02:50.714710 kernel: mlx5_core 5551:00:02.0 enP21841s1: renamed from eth1 Jul 2 07:02:51.614736 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jul 2 07:02:51.615173 disk-uuid[573]: The operation has completed successfully. Jul 2 07:02:51.693081 systemd[1]: disk-uuid.service: Deactivated successfully. Jul 2 07:02:51.696000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:51.696000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:51.693189 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jul 2 07:02:51.707055 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Jul 2 07:02:51.712266 sh[687]: Success Jul 2 07:02:51.741709 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" Jul 2 07:02:51.946125 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Jul 2 07:02:51.955102 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Jul 2 07:02:51.959524 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Jul 2 07:02:51.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:02:51.974705 kernel: BTRFS info (device dm-0): first mount of filesystem 1fca1e64-eeea-4360-9664-a9b6b3a60b6f Jul 2 07:02:51.974744 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:02:51.980572 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead Jul 2 07:02:51.983283 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jul 2 07:02:51.985682 kernel: BTRFS info (device dm-0): using free space tree Jul 2 07:02:52.292843 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Jul 2 07:02:52.298032 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jul 2 07:02:52.309879 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jul 2 07:02:52.313585 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jul 2 07:02:52.333199 kernel: BTRFS info (device sda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:02:52.333251 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:02:52.335918 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:02:52.369870 systemd[1]: mnt-oem.mount: Deactivated successfully. Jul 2 07:02:52.376712 kernel: BTRFS info (device sda6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:02:52.386000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:52.383588 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jul 2 07:02:52.390890 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jul 2 07:02:52.399925 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 07:02:52.402000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:52.402000 audit: BPF prog-id=9 op=LOAD Jul 2 07:02:52.404145 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 07:02:52.429716 systemd-networkd[869]: lo: Link UP Jul 2 07:02:52.431000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:52.429722 systemd-networkd[869]: lo: Gained carrier Jul 2 07:02:52.430238 systemd-networkd[869]: Enumeration completed Jul 2 07:02:52.430760 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 07:02:52.432155 systemd[1]: Reached target network.target - Network. Jul 2 07:02:52.440707 systemd[1]: Starting iscsiuio.service - iSCSI UserSpace I/O driver... Jul 2 07:02:52.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:52.445589 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 07:02:52.445593 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 07:02:52.448921 systemd[1]: Started iscsiuio.service - iSCSI UserSpace I/O driver. Jul 2 07:02:52.454098 systemd[1]: Starting iscsid.service - Open-iSCSI... Jul 2 07:02:52.466155 iscsid[874]: iscsid: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:02:52.466155 iscsid[874]: iscsid: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log Jul 2 07:02:52.466155 iscsid[874]: into or discover targets. Please create a file /etc/iscsi/initiatorname.iscsi that contains a sting with the format: InitiatorName=iqn.yyyy-mm.[:identifier]. Jul 2 07:02:52.466155 iscsid[874]: Example: InitiatorName=iqn.2001-04.com.redhat:fc6. Jul 2 07:02:52.466155 iscsid[874]: If using hardware iscsi like qla4xxx this message can be ignored. Jul 2 07:02:52.466155 iscsid[874]: iscsid: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi Jul 2 07:02:52.466155 iscsid[874]: iscsid: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf Jul 2 07:02:52.464783 systemd[1]: Started iscsid.service - Open-iSCSI. Jul 2 07:02:52.500000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:52.509135 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jul 2 07:02:52.520304 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jul 2 07:02:52.523385 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 07:02:52.523455 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 07:02:52.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:52.536866 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 07:02:52.545998 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jul 2 07:02:52.551647 kernel: mlx5_core 5551:00:02.0 enP21841s1: Link up Jul 2 07:02:52.555494 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jul 2 07:02:52.557000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:52.579008 kernel: hv_netvsc 000d3a69-55c9-000d-3a69-55c9000d3a69 eth0: Data path switched to VF: enP21841s1 Jul 2 07:02:52.579581 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:02:52.579726 systemd-networkd[869]: enP21841s1: Link UP Jul 2 07:02:52.581661 systemd-networkd[869]: eth0: Link UP Jul 2 07:02:52.583635 systemd-networkd[869]: eth0: Gained carrier Jul 2 07:02:52.583733 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. 
Jul 2 07:02:52.590850 systemd-networkd[869]: enP21841s1: Gained carrier Jul 2 07:02:52.626767 systemd-networkd[869]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 07:02:53.123718 ignition[858]: Ignition 2.15.0 Jul 2 07:02:53.123734 ignition[858]: Stage: fetch-offline Jul 2 07:02:53.128000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:53.125168 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 07:02:53.123779 ignition[858]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:02:53.132943 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jul 2 07:02:53.123791 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:02:53.123897 ignition[858]: parsed url from cmdline: "" Jul 2 07:02:53.123902 ignition[858]: no config URL provided Jul 2 07:02:53.123909 ignition[858]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:02:53.123919 ignition[858]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:02:53.123926 ignition[858]: failed to fetch config: resource requires networking Jul 2 07:02:53.124323 ignition[858]: Ignition finished successfully Jul 2 07:02:53.166280 ignition[893]: Ignition 2.15.0 Jul 2 07:02:53.166292 ignition[893]: Stage: fetch Jul 2 07:02:53.166404 ignition[893]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:02:53.166419 ignition[893]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:02:53.166521 ignition[893]: parsed url from cmdline: "" Jul 2 07:02:53.166525 ignition[893]: no config URL provided Jul 2 07:02:53.166532 ignition[893]: reading system config file "/usr/lib/ignition/user.ign" Jul 2 07:02:53.166542 ignition[893]: no config at "/usr/lib/ignition/user.ign" Jul 2 07:02:53.166565 ignition[893]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 Jul 2 07:02:53.267974 ignition[893]: GET result: OK Jul 2 07:02:53.268277 ignition[893]: config has been read from IMDS userdata Jul 2 07:02:53.268304 ignition[893]: parsing config with SHA512: ec5a6505ba242320a8bd451ab7ba2ac60085fefdd6f73ab44e4719dc7f2e8bfd5a3177ea0a84bd541012400b5ba791902f582e6e7580191ba206f8d3f7db8fc9 Jul 2 07:02:53.273712 unknown[893]: fetched base config from "system" Jul 2 07:02:53.275739 unknown[893]: fetched base config from "system" Jul 2 07:02:53.275762 unknown[893]: fetched user config from "azure" Jul 2 07:02:53.276745 ignition[893]: fetch: fetch complete Jul 2 07:02:53.276752 ignition[893]: fetch: fetch passed Jul 2 07:02:53.276821 ignition[893]: Ignition finished successfully Jul 2 07:02:53.287015 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jul 2 07:02:53.300066 kernel: kauditd_printk_skb: 21 callbacks suppressed Jul 2 07:02:53.300106 kernel: audit: type=1130 audit(1719903773.289:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:53.289000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:53.306884 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jul 2 07:02:53.325818 ignition[899]: Ignition 2.15.0 Jul 2 07:02:53.325882 ignition[899]: Stage: kargs Jul 2 07:02:53.326013 ignition[899]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:02:53.326026 ignition[899]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:02:53.334970 ignition[899]: kargs: kargs passed Jul 2 07:02:53.336655 ignition[899]: Ignition finished successfully Jul 2 07:02:53.339075 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jul 2 07:02:53.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:53.350708 kernel: audit: type=1130 audit(1719903773.340:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:53.356910 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jul 2 07:02:53.371955 ignition[905]: Ignition 2.15.0 Jul 2 07:02:53.373820 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jul 2 07:02:53.371965 ignition[905]: Stage: disks Jul 2 07:02:53.372092 ignition[905]: no configs at "/usr/lib/ignition/base.d" Jul 2 07:02:53.372107 ignition[905]: no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:02:53.373054 ignition[905]: disks: disks passed Jul 2 07:02:53.373101 ignition[905]: Ignition finished successfully Jul 2 07:02:53.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:53.387990 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jul 2 07:02:53.411755 kernel: audit: type=1130 audit(1719903773.387:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:53.396188 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 07:02:53.396218 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 07:02:53.396249 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 07:02:53.396710 systemd[1]: Reached target basic.target - Basic System. Jul 2 07:02:53.417849 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jul 2 07:02:53.466778 systemd-fsck[913]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks Jul 2 07:02:53.471281 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jul 2 07:02:53.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:53.486952 kernel: audit: type=1130 audit(1719903773.474:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:53.488857 systemd[1]: Mounting sysroot.mount - /sysroot... Jul 2 07:02:53.572786 kernel: EXT4-fs (sda9): mounted filesystem with ordered data mode. Quota mode: none. Jul 2 07:02:53.573389 systemd[1]: Mounted sysroot.mount - /sysroot. 
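The systemd-fsck summary above reports 14 of 7326000 inodes and 477710 of 7359488 blocks in use on ROOT. A quick worked reading of those numbers follows; the 4096-byte block size is an assumption (a common ext4 default), under which the in-use blocks come to roughly 1.8 GiB.

# Sketch: turn the fsck summary logged above into usage figures.
# BLOCK_SIZE is an assumption; the log does not state it.
used_inodes, total_inodes = 14, 7_326_000
used_blocks, total_blocks = 477_710, 7_359_488
BLOCK_SIZE = 4096  # bytes, typical ext4 default

print(f"inodes used: {used_inodes / total_inodes:.6%}")
print(f"blocks used: {used_blocks / total_blocks:.2%}")
print(f"data on disk: {used_blocks * BLOCK_SIZE / 2**30:.2f} GiB")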
Jul 2 07:02:53.575650 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jul 2 07:02:53.605794 systemd-networkd[869]: eth0: Gained IPv6LL Jul 2 07:02:53.613984 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 07:02:53.621703 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jul 2 07:02:53.627955 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jul 2 07:02:53.646725 kernel: BTRFS: device label OEM devid 1 transid 14 /dev/sda6 scanned by mount (922) Jul 2 07:02:53.646756 kernel: BTRFS info (device sda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:02:53.646774 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:02:53.646791 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:02:53.633464 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jul 2 07:02:53.633504 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 07:02:53.656855 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jul 2 07:02:53.661587 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jul 2 07:02:53.671018 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jul 2 07:02:54.315159 coreos-metadata[924]: Jul 02 07:02:54.315 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 07:02:54.322658 coreos-metadata[924]: Jul 02 07:02:54.322 INFO Fetch successful Jul 2 07:02:54.322658 coreos-metadata[924]: Jul 02 07:02:54.322 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 Jul 2 07:02:54.343797 coreos-metadata[924]: Jul 02 07:02:54.343 INFO Fetch successful Jul 2 07:02:54.356099 coreos-metadata[924]: Jul 02 07:02:54.356 INFO wrote hostname ci-3815.2.5-a-54ab6c74aa to /sysroot/etc/hostname Jul 2 07:02:54.360680 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 07:02:54.376668 kernel: audit: type=1130 audit(1719903774.362:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:54.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:54.380825 initrd-setup-root[950]: cut: /sysroot/etc/passwd: No such file or directory Jul 2 07:02:54.411380 initrd-setup-root[957]: cut: /sysroot/etc/group: No such file or directory Jul 2 07:02:54.416524 initrd-setup-root[964]: cut: /sysroot/etc/shadow: No such file or directory Jul 2 07:02:54.433385 initrd-setup-root[971]: cut: /sysroot/etc/gshadow: No such file or directory Jul 2 07:02:55.158956 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jul 2 07:02:55.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:55.171743 kernel: audit: type=1130 audit(1719903775.161:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:02:55.173892 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jul 2 07:02:55.177658 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jul 2 07:02:55.187771 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jul 2 07:02:55.193054 kernel: BTRFS info (device sda6): last unmount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:02:55.214309 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jul 2 07:02:55.218000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:55.227150 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jul 2 07:02:55.232754 kernel: audit: type=1130 audit(1719903775.218:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:55.232786 ignition[1038]: INFO : Ignition 2.15.0 Jul 2 07:02:55.232786 ignition[1038]: INFO : Stage: mount Jul 2 07:02:55.232786 ignition[1038]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:02:55.232786 ignition[1038]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:02:55.232786 ignition[1038]: INFO : mount: mount passed Jul 2 07:02:55.232786 ignition[1038]: INFO : Ignition finished successfully Jul 2 07:02:55.252569 kernel: audit: type=1130 audit(1719903775.232:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:55.232000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:55.259852 systemd[1]: Starting ignition-files.service - Ignition (files)... Jul 2 07:02:55.267925 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jul 2 07:02:55.279704 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by mount (1047) Jul 2 07:02:55.283702 kernel: BTRFS info (device sda6): first mount of filesystem f7c77bfb-d479-47f3-a34e-515c95184b74 Jul 2 07:02:55.283736 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Jul 2 07:02:55.288180 kernel: BTRFS info (device sda6): using free space tree Jul 2 07:02:55.291939 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
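The flatcar-metadata-hostname step logged a little above fetched the instance name from the IMDS compute/name endpoint and wrote it to /sysroot/etc/hostname. Below is a compact sketch of that flow; the Metadata: true header and the sysroot argument are assumptions rather than details shown in the log.

# Sketch: fetch the instance name and write it as the hostname under a
# sysroot, mirroring the flatcar-metadata-hostname step logged above.
import urllib.request
from pathlib import Path

IMDS_NAME = ("http://169.254.169.254/metadata/instance/compute/name"
             "?api-version=2017-08-01&format=text")

def write_hostname(sysroot="/sysroot") -> str:
    req = urllib.request.Request(IMDS_NAME, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        name = resp.read().decode().strip()
    Path(sysroot, "etc", "hostname").write_text(name + "\n")
    return name

if __name__ == "__main__":
    print("wrote hostname:", write_hostname())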
Jul 2 07:02:55.310403 ignition[1065]: INFO : Ignition 2.15.0 Jul 2 07:02:55.312669 ignition[1065]: INFO : Stage: files Jul 2 07:02:55.312669 ignition[1065]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:02:55.312669 ignition[1065]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:02:55.312669 ignition[1065]: DEBUG : files: compiled without relabeling support, skipping Jul 2 07:02:55.312669 ignition[1065]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jul 2 07:02:55.312669 ignition[1065]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jul 2 07:02:55.393432 ignition[1065]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jul 2 07:02:55.398045 ignition[1065]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jul 2 07:02:55.398045 ignition[1065]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jul 2 07:02:55.394051 unknown[1065]: wrote ssh authorized keys file for user: core Jul 2 07:02:55.429602 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:02:55.435103 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Jul 2 07:02:55.494652 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jul 2 07:02:55.603368 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Jul 2 07:02:55.609587 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jul 2 07:02:55.609587 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jul 2 07:02:55.609587 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:02:55.623044 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jul 2 07:02:55.623044 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:02:55.631861 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jul 2 07:02:55.636378 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:02:55.640918 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jul 2 07:02:55.645793 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:02:55.645793 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jul 2 07:02:55.645793 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:02:55.645793 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link 
"/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:02:55.645793 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:02:55.645793 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.29.2-x86-64.raw: attempt #1 Jul 2 07:02:56.227981 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jul 2 07:02:56.629548 ignition[1065]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.29.2-x86-64.raw" Jul 2 07:02:56.629548 ignition[1065]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jul 2 07:02:56.643085 ignition[1065]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:02:56.648728 ignition[1065]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jul 2 07:02:56.648728 ignition[1065]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jul 2 07:02:56.665662 kernel: audit: type=1130 audit(1719903776.655:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.665751 ignition[1065]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jul 2 07:02:56.665751 ignition[1065]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jul 2 07:02:56.665751 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:02:56.665751 ignition[1065]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jul 2 07:02:56.665751 ignition[1065]: INFO : files: files passed Jul 2 07:02:56.665751 ignition[1065]: INFO : Ignition finished successfully Jul 2 07:02:56.650273 systemd[1]: Finished ignition-files.service - Ignition (files). Jul 2 07:02:56.676825 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jul 2 07:02:56.694737 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jul 2 07:02:56.710406 kernel: audit: type=1130 audit(1719903776.700:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.700000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.697800 systemd[1]: ignition-quench.service: Deactivated successfully. 
Jul 2 07:02:56.697918 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jul 2 07:02:56.721758 initrd-setup-root-after-ignition[1091]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:02:56.725843 initrd-setup-root-after-ignition[1091]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:02:56.729897 initrd-setup-root-after-ignition[1095]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jul 2 07:02:56.734395 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 07:02:56.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.737180 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jul 2 07:02:56.751875 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jul 2 07:02:56.776313 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jul 2 07:02:56.776422 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jul 2 07:02:56.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.781000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.782158 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jul 2 07:02:56.788216 systemd[1]: Reached target initrd.target - Initrd Default Target. Jul 2 07:02:56.796511 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jul 2 07:02:56.809072 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jul 2 07:02:56.822860 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 07:02:56.825000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.831993 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jul 2 07:02:56.844527 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jul 2 07:02:56.850497 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 07:02:56.856116 systemd[1]: Stopped target timers.target - Timer Units. Jul 2 07:02:56.860971 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jul 2 07:02:56.861142 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jul 2 07:02:56.869000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.870030 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jul 2 07:02:56.875189 systemd[1]: Stopped target basic.target - Basic System. Jul 2 07:02:56.880055 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jul 2 07:02:56.885597 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jul 2 07:02:56.891384 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jul 2 07:02:56.896930 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jul 2 07:02:56.899544 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jul 2 07:02:56.904796 systemd[1]: Stopped target sysinit.target - System Initialization. Jul 2 07:02:56.910409 systemd[1]: Stopped target local-fs.target - Local File Systems. Jul 2 07:02:56.918206 systemd[1]: Stopped target local-fs-pre.target - Preparation for Local File Systems. Jul 2 07:02:56.921119 systemd[1]: Stopped target swap.target - Swaps. Jul 2 07:02:56.928156 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jul 2 07:02:56.928327 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jul 2 07:02:56.935000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.936314 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jul 2 07:02:56.941794 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jul 2 07:02:56.944529 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jul 2 07:02:56.946000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.947510 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jul 2 07:02:56.947637 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jul 2 07:02:56.955000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.959114 systemd[1]: ignition-files.service: Deactivated successfully. Jul 2 07:02:56.959287 systemd[1]: Stopped ignition-files.service - Ignition (files). Jul 2 07:02:56.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.966597 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jul 2 07:02:56.969327 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jul 2 07:02:56.971000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:56.981383 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jul 2 07:02:56.987843 systemd[1]: Stopping iscsiuio.service - iSCSI UserSpace I/O driver... Jul 2 07:02:56.990309 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jul 2 07:02:56.990502 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 07:02:57.001000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:02:57.004261 ignition[1109]: INFO : Ignition 2.15.0 Jul 2 07:02:57.004261 ignition[1109]: INFO : Stage: umount Jul 2 07:02:57.010738 ignition[1109]: INFO : no configs at "/usr/lib/ignition/base.d" Jul 2 07:02:57.010738 ignition[1109]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" Jul 2 07:02:57.010738 ignition[1109]: INFO : umount: umount passed Jul 2 07:02:57.010738 ignition[1109]: INFO : Ignition finished successfully Jul 2 07:02:57.020000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.012798 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jul 2 07:02:57.015496 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jul 2 07:02:57.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.015651 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 07:02:57.021509 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jul 2 07:02:57.021618 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jul 2 07:02:57.032670 systemd[1]: iscsiuio.service: Deactivated successfully. Jul 2 07:02:57.032963 systemd[1]: Stopped iscsiuio.service - iSCSI UserSpace I/O driver. Jul 2 07:02:57.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=iscsiuio comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.051083 systemd[1]: ignition-mount.service: Deactivated successfully. Jul 2 07:02:57.051200 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jul 2 07:02:57.055000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.056672 systemd[1]: ignition-disks.service: Deactivated successfully. Jul 2 07:02:57.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.065000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.056783 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jul 2 07:02:57.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.061259 systemd[1]: ignition-kargs.service: Deactivated successfully. Jul 2 07:02:57.061303 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jul 2 07:02:57.080000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.066237 systemd[1]: ignition-fetch.service: Deactivated successfully. Jul 2 07:02:57.066286 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Jul 2 07:02:57.068813 systemd[1]: Stopped target network.target - Network. Jul 2 07:02:57.073739 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jul 2 07:02:57.073791 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jul 2 07:02:57.081314 systemd[1]: Stopped target paths.target - Path Units. Jul 2 07:02:57.083586 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jul 2 07:02:57.086126 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 07:02:57.106711 systemd[1]: Stopped target slices.target - Slice Units. Jul 2 07:02:57.108989 systemd[1]: Stopped target sockets.target - Socket Units. Jul 2 07:02:57.115906 systemd[1]: iscsid.socket: Deactivated successfully. Jul 2 07:02:57.115948 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jul 2 07:02:57.122767 systemd[1]: iscsiuio.socket: Deactivated successfully. Jul 2 07:02:57.122819 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jul 2 07:02:57.130145 systemd[1]: ignition-setup.service: Deactivated successfully. Jul 2 07:02:57.134000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.130206 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jul 2 07:02:57.135580 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jul 2 07:02:57.143434 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jul 2 07:02:57.150365 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jul 2 07:02:57.150726 systemd-networkd[869]: eth0: DHCPv6 lease lost Jul 2 07:02:57.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.151704 systemd[1]: systemd-resolved.service: Deactivated successfully. Jul 2 07:02:57.161000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.151811 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jul 2 07:02:57.167000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.167000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.157774 systemd[1]: systemd-networkd.service: Deactivated successfully. Jul 2 07:02:57.172000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.157866 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jul 2 07:02:57.173000 audit: BPF prog-id=6 op=UNLOAD Jul 2 07:02:57.173000 audit: BPF prog-id=9 op=UNLOAD Jul 2 07:02:57.163012 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jul 2 07:02:57.163092 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jul 2 07:02:57.168842 systemd[1]: sysroot-boot.service: Deactivated successfully. Jul 2 07:02:57.168920 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jul 2 07:02:57.174566 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jul 2 07:02:57.174598 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jul 2 07:02:57.179324 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jul 2 07:02:57.179380 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jul 2 07:02:57.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.210851 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jul 2 07:02:57.212000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.213134 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jul 2 07:02:57.213196 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jul 2 07:02:57.213407 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jul 2 07:02:57.213443 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jul 2 07:02:57.230000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.231559 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jul 2 07:02:57.231616 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jul 2 07:02:57.239000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.240089 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jul 2 07:02:57.240142 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 07:02:57.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.248914 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 07:02:57.252450 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Jul 2 07:02:57.252527 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Jul 2 07:02:57.264269 systemd[1]: systemd-udevd.service: Deactivated successfully. Jul 2 07:02:57.264436 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 07:02:57.266000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:02:57.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.274000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.267644 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jul 2 07:02:57.267683 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jul 2 07:02:57.277000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.273262 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jul 2 07:02:57.273301 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 07:02:57.274303 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jul 2 07:02:57.274342 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jul 2 07:02:57.274767 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jul 2 07:02:57.274801 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jul 2 07:02:57.275169 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jul 2 07:02:57.275201 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jul 2 07:02:57.276372 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jul 2 07:02:57.276543 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jul 2 07:02:57.276588 systemd[1]: Stopped systemd-vconsole-setup.service - Setup Virtual Console. Jul 2 07:02:57.285171 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jul 2 07:02:57.285261 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jul 2 07:02:57.334045 kernel: hv_netvsc 000d3a69-55c9-000d-3a69-55c9000d3a69 eth0: Data path switched from VF: enP21841s1 Jul 2 07:02:57.334000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.334000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.348919 systemd[1]: network-cleanup.service: Deactivated successfully. Jul 2 07:02:57.349030 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jul 2 07:02:57.353000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:02:57.354311 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jul 2 07:02:57.362910 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jul 2 07:02:57.370104 systemd[1]: Switching root. Jul 2 07:02:57.391775 iscsid[874]: iscsid shutting down. Jul 2 07:02:57.393865 systemd-journald[178]: Received SIGTERM from PID 1 (systemd). Jul 2 07:02:57.393933 systemd-journald[178]: Journal stopped Jul 2 07:03:01.304474 kernel: SELinux: Permission cmd in class io_uring not defined in policy. 
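Much of the teardown sequence above consists of audit records of the form SERVICE_START/SERVICE_STOP with key=value fields, plus the kauditd form audit(<epoch seconds>.<millis>:<serial>). Below is a rough sketch of pulling the interesting fields out of one such record; the record text reuses a line from the log, and the regular expressions are only a loose match fitted to this particular output.

# Sketch: parse one of the audit SERVICE_* records that fill the
# shutdown sequence above. Regexes are loose, fitted to this format.
import re
from datetime import datetime, timezone

record = ("audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
          "subj=kernel msg='unit=systemd-networkd comm=\"systemd\" "
          "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'")

event = re.search(r"\b(SERVICE_START|SERVICE_STOP)\b", record).group(1)
unit = re.search(r"unit=(\S+)", record).group(1)
result = re.search(r"res=(\w+)", record).group(1)
print(event, unit, result)  # SERVICE_STOP systemd-networkd success

# The kauditd form carries its own timestamp, e.g. audit(1719903775.218:38):
epoch, serial = 1719903775.218, 38
print(datetime.fromtimestamp(epoch, tz=timezone.utc).isoformat(), "serial", serial)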
Jul 2 07:03:01.304508 kernel: SELinux: the above unknown classes and permissions will be allowed Jul 2 07:03:01.304525 kernel: SELinux: policy capability network_peer_controls=1 Jul 2 07:03:01.304538 kernel: SELinux: policy capability open_perms=1 Jul 2 07:03:01.304552 kernel: SELinux: policy capability extended_socket_class=1 Jul 2 07:03:01.304564 kernel: SELinux: policy capability always_check_network=0 Jul 2 07:03:01.304581 kernel: SELinux: policy capability cgroup_seclabel=1 Jul 2 07:03:01.304598 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jul 2 07:03:01.304612 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jul 2 07:03:01.304626 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jul 2 07:03:01.304640 kernel: kauditd_printk_skb: 41 callbacks suppressed Jul 2 07:03:01.304654 kernel: audit: type=1403 audit(1719903778.444:83): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:03:01.304672 systemd[1]: Successfully loaded SELinux policy in 244.928ms. Jul 2 07:03:01.304707 systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 9.364ms. Jul 2 07:03:01.304730 systemd[1]: systemd 252 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) Jul 2 07:03:01.304746 systemd[1]: Detected virtualization microsoft. Jul 2 07:03:01.304762 systemd[1]: Detected architecture x86-64. Jul 2 07:03:01.304777 systemd[1]: Detected first boot. Jul 2 07:03:01.304793 systemd[1]: Hostname set to <ci-3815.2.5-a-54ab6c74aa>. Jul 2 07:03:01.304812 systemd[1]: Initializing machine ID from random generator. Jul 2 07:03:01.304828 kernel: audit: type=1334 audit(1719903778.912:84): prog-id=10 op=LOAD Jul 2 07:03:01.304842 kernel: audit: type=1334 audit(1719903778.912:85): prog-id=10 op=UNLOAD Jul 2 07:03:01.304858 kernel: audit: type=1334 audit(1719903778.912:86): prog-id=11 op=LOAD Jul 2 07:03:01.304872 kernel: audit: type=1334 audit(1719903778.912:87): prog-id=11 op=UNLOAD Jul 2 07:03:01.304887 systemd[1]: Populated /etc with preset unit settings. Jul 2 07:03:01.304903 kernel: audit: type=1334 audit(1719903780.837:88): prog-id=12 op=LOAD Jul 2 07:03:01.304918 systemd[1]: iscsid.service: Deactivated successfully. Jul 2 07:03:01.304936 kernel: audit: type=1334 audit(1719903780.837:89): prog-id=3 op=UNLOAD Jul 2 07:03:01.304952 systemd[1]: Stopped iscsid.service - Open-iSCSI. Jul 2 07:03:01.304967 kernel: audit: type=1334 audit(1719903780.837:90): prog-id=13 op=LOAD Jul 2 07:03:01.304982 kernel: audit: type=1334 audit(1719903780.837:91): prog-id=14 op=LOAD Jul 2 07:03:01.304996 kernel: audit: type=1334 audit(1719903780.837:92): prog-id=4 op=UNLOAD Jul 2 07:03:01.305012 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jul 2 07:03:01.305027 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jul 2 07:03:01.305048 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jul 2 07:03:01.305067 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jul 2 07:03:01.305081 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jul 2 07:03:01.308081 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jul 2 07:03:01.308107 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
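The systemd 252 banner in the block above lists compile-time options as +FLAG/-FLAG tokens followed by default-hierarchy=unified. A small sketch that splits the flag portion of that banner (copied from the log) into enabled and disabled sets:

# Sketch: split the systemd feature banner logged above into enabled (+)
# and disabled (-) build options; the string is copied from the log.
features = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT "
            "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
            "+IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT "
            "-QRENCODE -TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK "
            "-XKBCOMMON +UTMP +SYSVINIT")

enabled = {tok[1:] for tok in features.split() if tok.startswith("+")}
disabled = {tok[1:] for tok in features.split() if tok.startswith("-")}
print(len(enabled), "enabled;", "SELINUX enabled:", "SELINUX" in enabled)
print(len(disabled), "disabled;", "TPM2 disabled:", "TPM2" in disabled)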
Jul 2 07:03:01.308125 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jul 2 07:03:01.308149 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jul 2 07:03:01.308185 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jul 2 07:03:01.308200 systemd[1]: Created slice user.slice - User and Session Slice. Jul 2 07:03:01.308219 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jul 2 07:03:01.308234 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jul 2 07:03:01.308249 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jul 2 07:03:01.308264 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jul 2 07:03:01.308279 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jul 2 07:03:01.308295 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jul 2 07:03:01.308313 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jul 2 07:03:01.308328 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jul 2 07:03:01.308350 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jul 2 07:03:01.308365 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jul 2 07:03:01.308379 systemd[1]: Reached target slices.target - Slice Units. Jul 2 07:03:01.308394 systemd[1]: Reached target swap.target - Swaps. Jul 2 07:03:01.308407 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jul 2 07:03:01.308420 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jul 2 07:03:01.308431 systemd[1]: Listening on systemd-initctl.socket - initctl Compatibility Named Pipe. Jul 2 07:03:01.308446 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jul 2 07:03:01.308458 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jul 2 07:03:01.308469 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jul 2 07:03:01.308482 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jul 2 07:03:01.308495 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jul 2 07:03:01.308508 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jul 2 07:03:01.308521 systemd[1]: Mounting media.mount - External Media Directory... Jul 2 07:03:01.308537 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:03:01.308548 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jul 2 07:03:01.308561 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jul 2 07:03:01.308572 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jul 2 07:03:01.308584 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jul 2 07:03:01.308597 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 07:03:01.308610 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jul 2 07:03:01.308622 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... 
Jul 2 07:03:01.308634 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 07:03:01.308645 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 07:03:01.308660 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 07:03:01.308670 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jul 2 07:03:01.308683 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 07:03:01.308730 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jul 2 07:03:01.308746 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jul 2 07:03:01.308759 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jul 2 07:03:01.308771 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jul 2 07:03:01.308782 systemd[1]: Stopped systemd-fsck-usr.service. Jul 2 07:03:01.308795 systemd[1]: Stopped systemd-journald.service - Journal Service. Jul 2 07:03:01.308806 systemd[1]: Starting systemd-journald.service - Journal Service... Jul 2 07:03:01.308819 kernel: fuse: init (API version 7.37) Jul 2 07:03:01.308830 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jul 2 07:03:01.308842 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jul 2 07:03:01.308857 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jul 2 07:03:01.308868 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jul 2 07:03:01.308880 systemd[1]: verity-setup.service: Deactivated successfully. Jul 2 07:03:01.308891 systemd[1]: Stopped verity-setup.service. Jul 2 07:03:01.308905 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:03:01.308918 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jul 2 07:03:01.308928 kernel: loop: module loaded Jul 2 07:03:01.308941 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jul 2 07:03:01.308951 systemd[1]: Mounted media.mount - External Media Directory. Jul 2 07:03:01.308971 systemd-journald[1237]: Journal started Jul 2 07:03:01.309020 systemd-journald[1237]: Runtime Journal (/run/log/journal/3193ebcca13e4b3da246de91ea6a73a7) is 8.0M, max 158.8M, 150.8M free. 
Jul 2 07:02:58.444000 audit: MAC_POLICY_LOAD auid=4294967295 ses=4294967295 lsm=selinux res=1 Jul 2 07:02:58.912000 audit: BPF prog-id=10 op=LOAD Jul 2 07:02:58.912000 audit: BPF prog-id=10 op=UNLOAD Jul 2 07:02:58.912000 audit: BPF prog-id=11 op=LOAD Jul 2 07:02:58.912000 audit: BPF prog-id=11 op=UNLOAD Jul 2 07:03:00.837000 audit: BPF prog-id=12 op=LOAD Jul 2 07:03:00.837000 audit: BPF prog-id=3 op=UNLOAD Jul 2 07:03:00.837000 audit: BPF prog-id=13 op=LOAD Jul 2 07:03:00.837000 audit: BPF prog-id=14 op=LOAD Jul 2 07:03:00.837000 audit: BPF prog-id=4 op=UNLOAD Jul 2 07:03:00.837000 audit: BPF prog-id=5 op=UNLOAD Jul 2 07:03:00.837000 audit: BPF prog-id=15 op=LOAD Jul 2 07:03:00.838000 audit: BPF prog-id=12 op=UNLOAD Jul 2 07:03:00.838000 audit: BPF prog-id=16 op=LOAD Jul 2 07:03:00.838000 audit: BPF prog-id=17 op=LOAD Jul 2 07:03:00.838000 audit: BPF prog-id=13 op=UNLOAD Jul 2 07:03:00.838000 audit: BPF prog-id=14 op=UNLOAD Jul 2 07:03:00.839000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:00.850000 audit: BPF prog-id=15 op=UNLOAD Jul 2 07:03:00.857000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=iscsid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:00.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:00.868000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=initrd-switch-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.192000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.200000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.205000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.205000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.207000 audit: BPF prog-id=18 op=LOAD Jul 2 07:03:01.207000 audit: BPF prog-id=19 op=LOAD Jul 2 07:03:01.207000 audit: BPF prog-id=20 op=LOAD Jul 2 07:03:01.207000 audit: BPF prog-id=16 op=UNLOAD Jul 2 07:03:01.207000 audit: BPF prog-id=17 op=UNLOAD Jul 2 07:03:01.279000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=verity-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:03:01.300000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jul 2 07:03:01.300000 audit[1237]: SYSCALL arch=c000003e syscall=46 success=yes exit=60 a0=3 a1=7ffdba8c9820 a2=4000 a3=7ffdba8c98bc items=0 ppid=1 pid=1237 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:03:01.300000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jul 2 07:03:00.829198 systemd[1]: Queued start job for default target multi-user.target. Jul 2 07:03:00.829209 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jul 2 07:03:00.839634 systemd[1]: systemd-journald.service: Deactivated successfully. Jul 2 07:03:01.319374 systemd[1]: Started systemd-journald.service - Journal Service. Jul 2 07:03:01.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.345000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.349000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.352000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.352000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.356000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.356000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.362000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:03:01.362000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.366000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.366000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.320318 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jul 2 07:03:01.323312 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jul 2 07:03:01.326090 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jul 2 07:03:01.343517 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jul 2 07:03:01.346750 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jul 2 07:03:01.350071 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jul 2 07:03:01.350227 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jul 2 07:03:01.353541 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:03:01.353741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 07:03:01.357098 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:03:01.357272 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 07:03:01.360351 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jul 2 07:03:01.360505 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jul 2 07:03:01.363787 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:03:01.363955 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 07:03:01.367116 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jul 2 07:03:01.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.370432 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jul 2 07:03:01.376514 kernel: ACPI: bus type drm_connector registered Jul 2 07:03:01.375000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.376894 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:03:01.377050 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 07:03:01.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.379000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:03:01.379972 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jul 2 07:03:01.382000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.383646 systemd[1]: Reached target network-pre.target - Preparation for Network. Jul 2 07:03:01.390817 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jul 2 07:03:01.395380 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jul 2 07:03:01.398385 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jul 2 07:03:01.400358 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jul 2 07:03:01.404418 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jul 2 07:03:01.406941 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:03:01.408591 systemd[1]: Starting systemd-random-seed.service - Load/Save Random Seed... Jul 2 07:03:01.411167 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 07:03:01.412828 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jul 2 07:03:01.417439 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jul 2 07:03:01.424213 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jul 2 07:03:01.426000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.427525 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jul 2 07:03:01.438940 systemd-journald[1237]: Time spent on flushing to /var/log/journal/3193ebcca13e4b3da246de91ea6a73a7 is 18.134ms for 1090 entries. Jul 2 07:03:01.438940 systemd-journald[1237]: System Journal (/var/log/journal/3193ebcca13e4b3da246de91ea6a73a7) is 8.0M, max 2.6G, 2.6G free. Jul 2 07:03:01.520768 systemd-journald[1237]: Received client request to flush runtime journal. Jul 2 07:03:01.458000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.430665 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jul 2 07:03:01.521183 udevadm[1255]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jul 2 07:03:01.449155 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jul 2 07:03:01.456116 systemd[1]: Finished systemd-random-seed.service - Load/Save Random Seed. Jul 2 07:03:01.459343 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jul 2 07:03:01.522224 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. 
Jul 2 07:03:01.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.538852 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jul 2 07:03:01.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:01.716726 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jul 2 07:03:02.946018 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jul 2 07:03:02.948000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:02.949000 audit: BPF prog-id=21 op=LOAD Jul 2 07:03:02.949000 audit: BPF prog-id=22 op=LOAD Jul 2 07:03:02.949000 audit: BPF prog-id=7 op=UNLOAD Jul 2 07:03:02.949000 audit: BPF prog-id=8 op=UNLOAD Jul 2 07:03:02.952976 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jul 2 07:03:02.984382 systemd-udevd[1261]: Using default interface naming scheme 'v252'. Jul 2 07:03:03.142000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:03.144000 audit: BPF prog-id=23 op=LOAD Jul 2 07:03:03.139835 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jul 2 07:03:03.150895 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jul 2 07:03:03.193734 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jul 2 07:03:03.222739 kernel: BTRFS info: devid 1 device path /dev/mapper/usr changed to /dev/dm-0 scanned by (udev-worker) (1277) Jul 2 07:03:03.244000 audit: BPF prog-id=24 op=LOAD Jul 2 07:03:03.244000 audit: BPF prog-id=25 op=LOAD Jul 2 07:03:03.244000 audit: BPF prog-id=26 op=LOAD Jul 2 07:03:03.250926 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jul 2 07:03:03.315710 kernel: hv_vmbus: registering driver hyperv_fb Jul 2 07:03:03.321868 kernel: hv_vmbus: registering driver hv_balloon Jul 2 07:03:03.321955 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 Jul 2 07:03:03.325386 kernel: mousedev: PS/2 mouse device common for all mice Jul 2 07:03:03.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:03.332639 systemd[1]: Started systemd-userdbd.service - User Database Manager. 
Jul 2 07:03:03.348914 kernel: hyperv_fb: Synthvid Version major 3, minor 5 Jul 2 07:03:03.349021 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 Jul 2 07:03:03.353857 kernel: Console: switching to colour dummy device 80x25 Jul 2 07:03:03.360424 kernel: Console: switching to colour frame buffer device 128x48 Jul 2 07:03:03.360752 kernel: hv_utils: Registering HyperV Utility Driver Jul 2 07:03:03.360802 kernel: hv_vmbus: registering driver hv_utils Jul 2 07:03:03.360830 kernel: hv_utils: Shutdown IC version 3.2 Jul 2 07:03:03.360858 kernel: hv_utils: Heartbeat IC version 3.0 Jul 2 07:03:03.360895 kernel: hv_utils: TimeSync IC version 4.0 Jul 2 07:03:03.356776 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1264) Jul 2 07:03:03.466128 systemd-journald[1237]: Time jumped backwards, rotating. Jul 2 07:03:03.478036 systemd-networkd[1273]: lo: Link UP Jul 2 07:03:03.478048 systemd-networkd[1273]: lo: Gained carrier Jul 2 07:03:03.478620 systemd-networkd[1273]: Enumeration completed Jul 2 07:03:03.483548 systemd[1]: Started systemd-networkd.service - Network Configuration. Jul 2 07:03:03.496173 kernel: kauditd_printk_skb: 59 callbacks suppressed Jul 2 07:03:03.496258 kernel: audit: type=1130 audit(1719903783.486:150): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:03.486000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:03.489364 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. Jul 2 07:03:03.502790 systemd-networkd[1273]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 07:03:03.502805 systemd-networkd[1273]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jul 2 07:03:03.505042 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jul 2 07:03:03.556779 kernel: mlx5_core 5551:00:02.0 enP21841s1: Link up Jul 2 07:03:03.574777 kernel: hv_netvsc 000d3a69-55c9-000d-3a69-55c9000d3a69 eth0: Data path switched to VF: enP21841s1 Jul 2 07:03:03.575226 systemd-networkd[1273]: enP21841s1: Link UP Jul 2 07:03:03.575428 systemd-networkd[1273]: eth0: Link UP Jul 2 07:03:03.575438 systemd-networkd[1273]: eth0: Gained carrier Jul 2 07:03:03.575469 systemd-networkd[1273]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 07:03:03.579059 systemd-networkd[1273]: enP21841s1: Gained carrier Jul 2 07:03:03.607904 systemd-networkd[1273]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 07:03:03.650822 kernel: KVM: vmx: using Hyper-V Enlightened VMCS Jul 2 07:03:03.680141 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jul 2 07:03:03.682000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:03:03.690773 kernel: audit: type=1130 audit(1719903783.682:151): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-settle comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:03.693041 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jul 2 07:03:03.734289 lvm[1344]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:03:03.758805 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jul 2 07:03:03.758000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:03.761961 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jul 2 07:03:03.771740 kernel: audit: type=1130 audit(1719903783.758:152): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:03.778982 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jul 2 07:03:03.783637 lvm[1345]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jul 2 07:03:03.808919 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jul 2 07:03:03.811000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:03.812130 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jul 2 07:03:03.821533 kernel: audit: type=1130 audit(1719903783.811:153): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=lvm2-activation comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:03.821484 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jul 2 07:03:03.821521 systemd[1]: Reached target local-fs.target - Local File Systems. Jul 2 07:03:03.824387 systemd[1]: Reached target machines.target - Containers. Jul 2 07:03:03.833986 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jul 2 07:03:03.837156 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 07:03:03.837267 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:03:03.838925 systemd[1]: Starting systemd-boot-update.service - Automatic Boot Loader Update... Jul 2 07:03:03.843779 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jul 2 07:03:03.848944 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jul 2 07:03:03.854001 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... 
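An illustrative aside (not journal output): the DHCPv4 lease reported a few entries back (10.200.8.10/24, gateway 10.200.8.1, handed out via Azure's platform address 168.63.129.16) can be sanity-checked with Python's standard ipaddress module; the gateway should sit inside the leased subnet:

    import ipaddress

    # Values copied from the systemd-networkd DHCPv4 entry above.
    iface = ipaddress.ip_interface("10.200.8.10/24")
    gateway = ipaddress.ip_address("10.200.8.1")

    print(iface.network)             # 10.200.8.0/24
    print(gateway in iface.network)  # True -- the gateway is on-link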
Jul 2 07:03:03.859121 systemd[1]: boot.automount: Got automount request for /boot, triggered by 1347 (bootctl) Jul 2 07:03:03.861459 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM... Jul 2 07:03:03.886665 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jul 2 07:03:03.897554 kernel: loop0: detected capacity change from 0 to 211296 Jul 2 07:03:03.897686 kernel: audit: type=1130 audit(1719903783.886:154): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:03.886000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:05.442999 systemd-networkd[1273]: eth0: Gained IPv6LL Jul 2 07:03:05.448645 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jul 2 07:03:05.458580 kernel: audit: type=1130 audit(1719903785.448:155): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:05.448000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-networkd-wait-online comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:06.154000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:06.154188 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jul 2 07:03:06.155149 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jul 2 07:03:06.164789 kernel: audit: type=1130 audit(1719903786.154:156): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:06.169768 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jul 2 07:03:06.191775 kernel: loop1: detected capacity change from 0 to 139360 Jul 2 07:03:06.475728 systemd-fsck[1354]: fsck.fat 4.2 (2021-01-31) Jul 2 07:03:06.475728 systemd-fsck[1354]: /dev/sda1: 808 files, 120378/258078 clusters Jul 2 07:03:06.478161 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM.service - File System Check on /dev/disk/by-label/EFI-SYSTEM. Jul 2 07:03:06.480000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:06.493895 kernel: audit: type=1130 audit(1719903786.480:157): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-EFI\x2dSYSTEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:03:06.494001 systemd[1]: Mounting boot.mount - Boot partition... Jul 2 07:03:06.499774 kernel: loop2: detected capacity change from 0 to 80600 Jul 2 07:03:06.506960 systemd[1]: Mounted boot.mount - Boot partition. Jul 2 07:03:06.521276 systemd[1]: Finished systemd-boot-update.service - Automatic Boot Loader Update. Jul 2 07:03:06.523000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:06.532804 kernel: audit: type=1130 audit(1719903786.523:158): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-boot-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:06.979775 kernel: loop3: detected capacity change from 0 to 55568 Jul 2 07:03:07.400783 kernel: loop4: detected capacity change from 0 to 211296 Jul 2 07:03:07.408953 kernel: loop5: detected capacity change from 0 to 139360 Jul 2 07:03:07.419770 kernel: loop6: detected capacity change from 0 to 80600 Jul 2 07:03:07.427772 kernel: loop7: detected capacity change from 0 to 55568 Jul 2 07:03:07.431458 (sd-sysext)[1364]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. Jul 2 07:03:07.431961 (sd-sysext)[1364]: Merged extensions into '/usr'. Jul 2 07:03:07.433661 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jul 2 07:03:07.436000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:07.444768 kernel: audit: type=1130 audit(1719903787.436:159): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:07.446062 systemd[1]: Starting ensure-sysext.service... Jul 2 07:03:07.449801 systemd[1]: Starting systemd-tmpfiles-setup.service - Create Volatile Files and Directories... Jul 2 07:03:07.477081 systemd[1]: Reloading. Jul 2 07:03:07.515284 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/legacy.conf:13: Duplicate line for path "/run/lock", ignoring. Jul 2 07:03:07.528887 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jul 2 07:03:07.529865 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jul 2 07:03:07.543848 systemd-tmpfiles[1366]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jul 2 07:03:07.761566 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. 
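A similar aside on the fsck.fat summary a few entries back: 120378 of the 258078 clusters on /dev/sda1 (the EFI-SYSTEM partition being checked there) are in use, so that filesystem is a little under half full:

    # Cluster counts copied from the fsck.fat output for /dev/sda1 above.
    used, total = 120378, 258078
    print(f"{used}/{total} clusters = {used / total:.1%} in use")  # ~46.6% in use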
Jul 2 07:03:07.836000 audit: BPF prog-id=27 op=LOAD Jul 2 07:03:07.836000 audit: BPF prog-id=28 op=LOAD Jul 2 07:03:07.836000 audit: BPF prog-id=21 op=UNLOAD Jul 2 07:03:07.836000 audit: BPF prog-id=22 op=UNLOAD Jul 2 07:03:07.836000 audit: BPF prog-id=29 op=LOAD Jul 2 07:03:07.836000 audit: BPF prog-id=23 op=UNLOAD Jul 2 07:03:07.837000 audit: BPF prog-id=30 op=LOAD Jul 2 07:03:07.837000 audit: BPF prog-id=18 op=UNLOAD Jul 2 07:03:07.837000 audit: BPF prog-id=31 op=LOAD Jul 2 07:03:07.837000 audit: BPF prog-id=32 op=LOAD Jul 2 07:03:07.837000 audit: BPF prog-id=19 op=UNLOAD Jul 2 07:03:07.837000 audit: BPF prog-id=20 op=UNLOAD Jul 2 07:03:07.838000 audit: BPF prog-id=33 op=LOAD Jul 2 07:03:07.838000 audit: BPF prog-id=24 op=UNLOAD Jul 2 07:03:07.838000 audit: BPF prog-id=34 op=LOAD Jul 2 07:03:07.838000 audit: BPF prog-id=35 op=LOAD Jul 2 07:03:07.838000 audit: BPF prog-id=25 op=UNLOAD Jul 2 07:03:07.838000 audit: BPF prog-id=26 op=UNLOAD Jul 2 07:03:07.844466 systemd[1]: Finished systemd-tmpfiles-setup.service - Create Volatile Files and Directories. Jul 2 07:03:07.847000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:07.858175 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jul 2 07:03:07.882002 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jul 2 07:03:07.886091 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jul 2 07:03:07.889000 audit: BPF prog-id=36 op=LOAD Jul 2 07:03:07.891302 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jul 2 07:03:07.894000 audit: BPF prog-id=37 op=LOAD Jul 2 07:03:07.896780 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jul 2 07:03:07.901010 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jul 2 07:03:07.916589 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:03:07.916967 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 07:03:07.918894 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 07:03:07.920000 audit[1451]: SYSTEM_BOOT pid=1451 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jul 2 07:03:07.924127 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 07:03:07.937205 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 07:03:07.940269 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 07:03:07.940494 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:03:07.940692 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). 
Jul 2 07:03:07.942173 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:03:07.942405 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 07:03:07.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:07.944000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:07.954943 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jul 2 07:03:07.957000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:07.960000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:07.960000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:07.958506 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:03:07.958649 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jul 2 07:03:07.961863 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 07:03:07.964592 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:03:07.964801 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 07:03:07.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:07.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:07.968195 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:03:07.971159 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:03:07.971671 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 07:03:07.978198 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 07:03:07.983415 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jul 2 07:03:07.997203 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jul 2 07:03:08.000143 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jul 2 07:03:08.000366 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:03:08.000584 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:03:08.004142 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jul 2 07:03:08.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.008027 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:03:08.008205 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 07:03:08.012000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.012000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.018064 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:03:08.018481 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jul 2 07:03:08.024256 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jul 2 07:03:08.040315 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jul 2 07:03:08.043391 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jul 2 07:03:08.043608 systemd[1]: systemd-boot-system-token.service - Store a System Token in an EFI Variable was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/LoaderFeatures-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:03:08.043847 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Jul 2 07:03:08.045011 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jul 2 07:03:08.045197 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jul 2 07:03:08.049000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.049000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.050695 systemd[1]: modprobe@loop.service: Deactivated successfully. Jul 2 07:03:08.050920 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
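An illustrative sketch, not journal text: the audit SERVICE_START/SERVICE_STOP records that bracket each modprobe@*.service run above are space-separated key=value fields with a quoted msg='...' payload nested inside. A few lines of Python are enough to pull one apart; the record below is copied, slightly shortened, from an entry above, and the auid/ses value 4294967295 is (uint32)-1, audit's marker for "unset", meaning the event came from PID 1 rather than a login session:

    import re

    # A SERVICE_START record copied (slightly shortened) from the journal above.
    line = ("SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 "
            "subj=system_u:system_r:kernel_t:s0 "
            "msg='unit=modprobe@loop comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" "
            "hostname=? addr=? terminal=? res=success'")

    def parse_audit(record: str) -> dict:
        """Split an audit record into its key=value fields."""
        fields = {}
        # Pull out the quoted msg='...' payload first, since it contains spaces.
        m = re.search(r"msg='([^']*)'", record)
        if m:
            fields["msg"] = m.group(1)
            record = record[:m.start()] + record[m.end():]
        kind, _, rest = record.partition(" ")
        fields["type"] = kind
        for key, value in re.findall(r"(\w+)=(\S+)", rest):
            fields[key] = value
        return fields

    rec = parse_audit(line)
    print(rec["type"], rec["pid"], rec["auid"], rec["msg"])
    # SERVICE_START 1 4294967295 unit=modprobe@loop comm="systemd" ... res=success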
Jul 2 07:03:08.053000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.054237 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jul 2 07:03:08.054404 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jul 2 07:03:08.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.057823 systemd[1]: modprobe@drm.service: Deactivated successfully. Jul 2 07:03:08.057981 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jul 2 07:03:08.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.061368 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jul 2 07:03:08.061501 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jul 2 07:03:08.063862 systemd[1]: Finished ensure-sysext.service. Jul 2 07:03:08.065000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.081118 systemd-resolved[1449]: Positive Trust Anchors: Jul 2 07:03:08.081132 systemd-resolved[1449]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jul 2 07:03:08.081173 systemd-resolved[1449]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa corp home internal intranet lan local private test Jul 2 07:03:08.096866 systemd-resolved[1449]: Using system hostname 'ci-3815.2.5-a-54ab6c74aa'. Jul 2 07:03:08.098519 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jul 2 07:03:08.100000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jul 2 07:03:08.101838 systemd[1]: Reached target network.target - Network. Jul 2 07:03:08.104216 systemd[1]: Reached target network-online.target - Network is Online. Jul 2 07:03:08.107678 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jul 2 07:03:08.120107 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jul 2 07:03:08.122000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-timesyncd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jul 2 07:03:08.123220 systemd[1]: Reached target time-set.target - System Time Set. Jul 2 07:03:08.133000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jul 2 07:03:08.133000 audit[1473]: SYSCALL arch=c000003e syscall=44 success=yes exit=1056 a0=3 a1=7ffdde454300 a2=420 a3=0 items=0 ppid=1445 pid=1473 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/sbin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jul 2 07:03:08.134375 augenrules[1473]: No rules Jul 2 07:03:08.133000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jul 2 07:03:08.134938 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jul 2 07:03:08.394124 systemd-timesyncd[1450]: Contacted time server 89.234.64.77:123 (0.flatcar.pool.ntp.org). Jul 2 07:03:08.394448 systemd-timesyncd[1450]: Initial clock synchronization to Tue 2024-07-02 07:03:08.394092 UTC. Jul 2 07:03:08.816330 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jul 2 07:03:08.820003 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jul 2 07:03:10.700925 ldconfig[1346]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jul 2 07:03:10.833378 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jul 2 07:03:10.845568 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jul 2 07:03:10.856551 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jul 2 07:03:10.859933 systemd[1]: Reached target sysinit.target - System Initialization. Jul 2 07:03:10.862895 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jul 2 07:03:10.865778 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jul 2 07:03:10.868682 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jul 2 07:03:10.871837 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jul 2 07:03:10.874611 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jul 2 07:03:10.877653 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jul 2 07:03:10.877688 systemd[1]: Reached target paths.target - Path Units. Jul 2 07:03:10.880417 systemd[1]: Reached target timers.target - Timer Units. Jul 2 07:03:10.883562 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
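Another aside: the PROCTITLE record logged next to audit-rules.service above carries the triggering command line as hex-encoded, NUL-separated argv. Decoding it shows that augenrules ran auditctl to load /etc/audit/audit.rules, consistent with its "No rules" message:

    # proctitle value copied from the PROCTITLE audit record above.
    proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = [part.decode() for part in bytes.fromhex(proctitle).split(b"\x00")]
    print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']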
Jul 2 07:03:10.887975 systemd[1]: Starting docker.socket - Docker Socket for the API... Jul 2 07:03:10.895443 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jul 2 07:03:10.898509 systemd[1]: systemd-pcrphase-sysinit.service - TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:03:10.898970 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jul 2 07:03:10.901883 systemd[1]: Reached target sockets.target - Socket Units. Jul 2 07:03:10.904403 systemd[1]: Reached target basic.target - Basic System. Jul 2 07:03:10.906850 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jul 2 07:03:10.906880 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jul 2 07:03:10.916885 systemd[1]: Starting containerd.service - containerd container runtime... Jul 2 07:03:10.921699 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jul 2 07:03:10.925980 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jul 2 07:03:10.929906 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jul 2 07:03:10.934475 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jul 2 07:03:10.937172 jq[1487]: false Jul 2 07:03:10.937533 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jul 2 07:03:10.940306 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:03:10.945584 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jul 2 07:03:10.949734 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jul 2 07:03:10.954073 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jul 2 07:03:10.958439 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jul 2 07:03:10.962818 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jul 2 07:03:10.969770 systemd[1]: Starting systemd-logind.service - User Login Management... Jul 2 07:03:10.972673 systemd[1]: systemd-pcrphase.service - TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f). Jul 2 07:03:10.972781 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jul 2 07:03:10.973314 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jul 2 07:03:10.976949 systemd[1]: Starting update-engine.service - Update Engine... Jul 2 07:03:10.981328 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jul 2 07:03:10.987313 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jul 2 07:03:10.987612 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jul 2 07:03:10.991379 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jul 2 07:03:10.991681 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
Jul 2 07:03:11.007088 jq[1505]: true Jul 2 07:03:11.020774 jq[1513]: true Jul 2 07:03:11.036074 extend-filesystems[1488]: Found loop4 Jul 2 07:03:11.040868 extend-filesystems[1488]: Found loop5 Jul 2 07:03:11.042844 extend-filesystems[1488]: Found loop6 Jul 2 07:03:11.042844 extend-filesystems[1488]: Found loop7 Jul 2 07:03:11.042844 extend-filesystems[1488]: Found sda Jul 2 07:03:11.042844 extend-filesystems[1488]: Found sda1 Jul 2 07:03:11.042844 extend-filesystems[1488]: Found sda2 Jul 2 07:03:11.042844 extend-filesystems[1488]: Found sda3 Jul 2 07:03:11.042844 extend-filesystems[1488]: Found usr Jul 2 07:03:11.042844 extend-filesystems[1488]: Found sda4 Jul 2 07:03:11.042844 extend-filesystems[1488]: Found sda6 Jul 2 07:03:11.042844 extend-filesystems[1488]: Found sda7 Jul 2 07:03:11.042844 extend-filesystems[1488]: Found sda9 Jul 2 07:03:11.042844 extend-filesystems[1488]: Checking size of /dev/sda9 Jul 2 07:03:11.073003 update_engine[1501]: I0702 07:03:11.045794 1501 main.cc:92] Flatcar Update Engine starting Jul 2 07:03:11.046507 systemd[1]: motdgen.service: Deactivated successfully. Jul 2 07:03:11.048334 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jul 2 07:03:11.089457 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jul 2 07:03:11.094529 extend-filesystems[1488]: Old size kept for /dev/sda9 Jul 2 07:03:11.102027 extend-filesystems[1488]: Found sr0 Jul 2 07:03:11.096240 systemd[1]: extend-filesystems.service: Deactivated successfully. Jul 2 07:03:11.096387 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jul 2 07:03:11.137903 systemd-logind[1500]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Jul 2 07:03:11.139937 systemd-logind[1500]: New seat seat0. Jul 2 07:03:11.143214 dbus-daemon[1484]: [system] SELinux support is enabled Jul 2 07:03:11.143384 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jul 2 07:03:11.147771 update_engine[1501]: I0702 07:03:11.147663 1501 update_check_scheduler.cc:74] Next update check in 4m19s Jul 2 07:03:11.154789 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jul 2 07:03:11.154823 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jul 2 07:03:11.158003 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jul 2 07:03:11.158031 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jul 2 07:03:11.161299 systemd[1]: Started update-engine.service - Update Engine. Jul 2 07:03:11.161830 dbus-daemon[1484]: [system] Successfully activated service 'org.freedesktop.systemd1' Jul 2 07:03:11.163974 systemd[1]: Started systemd-logind.service - User Login Management. Jul 2 07:03:11.173213 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jul 2 07:03:11.178655 tar[1508]: linux-amd64/helm Jul 2 07:03:11.283411 bash[1534]: Updated "/home/core/.ssh/authorized_keys" Jul 2 07:03:11.284319 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jul 2 07:03:11.289158 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. 
Jul 2 07:03:11.325259 coreos-metadata[1483]: Jul 02 07:03:11.325 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 Jul 2 07:03:11.330428 coreos-metadata[1483]: Jul 02 07:03:11.330 INFO Fetch successful Jul 2 07:03:11.330428 coreos-metadata[1483]: Jul 02 07:03:11.330 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 Jul 2 07:03:11.335671 coreos-metadata[1483]: Jul 02 07:03:11.335 INFO Fetch successful Jul 2 07:03:11.339159 coreos-metadata[1483]: Jul 02 07:03:11.338 INFO Fetching http://168.63.129.16/machine/f852cc80-23f5-4e76-b469-b0586cab9ed0/1e275177%2D1007%2D4523%2D80da%2Ddc860fccfa7c.%5Fci%2D3815.2.5%2Da%2D54ab6c74aa?comp=config&type=sharedConfig&incarnation=1: Attempt #1 Jul 2 07:03:11.340865 coreos-metadata[1483]: Jul 02 07:03:11.340 INFO Fetch successful Jul 2 07:03:11.340968 coreos-metadata[1483]: Jul 02 07:03:11.340 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 Jul 2 07:03:11.353426 coreos-metadata[1483]: Jul 02 07:03:11.353 INFO Fetch successful Jul 2 07:03:11.374375 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jul 2 07:03:11.378275 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jul 2 07:03:11.508128 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1545) Jul 2 07:03:11.654562 locksmithd[1540]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jul 2 07:03:12.340787 containerd[1511]: time="2024-07-02T07:03:12.340671464Z" level=info msg="starting containerd" revision=99b8088b873ba42b788f29ccd0dc26ebb6952f1e version=v1.7.13 Jul 2 07:03:12.439255 containerd[1511]: time="2024-07-02T07:03:12.439198530Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jul 2 07:03:12.440527 containerd[1511]: time="2024-07-02T07:03:12.440500336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:03:12.444739 containerd[1511]: time="2024-07-02T07:03:12.444701256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.1.96-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:03:12.444876 containerd[1511]: time="2024-07-02T07:03:12.444859256Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:03:12.445226 containerd[1511]: time="2024-07-02T07:03:12.445203058Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:03:12.445313 containerd[1511]: time="2024-07-02T07:03:12.445299958Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jul 2 07:03:12.446093 containerd[1511]: time="2024-07-02T07:03:12.446071162Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1 Jul 2 07:03:12.446239 containerd[1511]: time="2024-07-02T07:03:12.446219963Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:03:12.446310 containerd[1511]: time="2024-07-02T07:03:12.446297163Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jul 2 07:03:12.446438 containerd[1511]: time="2024-07-02T07:03:12.446424464Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:03:12.446725 containerd[1511]: time="2024-07-02T07:03:12.446707065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jul 2 07:03:12.447697 containerd[1511]: time="2024-07-02T07:03:12.447673970Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured" Jul 2 07:03:12.447798 containerd[1511]: time="2024-07-02T07:03:12.447782570Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jul 2 07:03:12.448066 containerd[1511]: time="2024-07-02T07:03:12.448046371Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jul 2 07:03:12.448144 containerd[1511]: time="2024-07-02T07:03:12.448126772Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jul 2 07:03:12.448263 containerd[1511]: time="2024-07-02T07:03:12.448248572Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured" Jul 2 07:03:12.448328 containerd[1511]: time="2024-07-02T07:03:12.448316473Z" level=info msg="metadata content store policy set" policy=shared Jul 2 07:03:12.468551 containerd[1511]: time="2024-07-02T07:03:12.468513368Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jul 2 07:03:12.468743 containerd[1511]: time="2024-07-02T07:03:12.468685069Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jul 2 07:03:12.468743 containerd[1511]: time="2024-07-02T07:03:12.468710169Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.468878670Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.468903570Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.468918570Z" level=info msg="NRI interface is disabled by configuration." Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.468971570Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.469108871Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.469130171Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." 
type=io.containerd.sandbox.store.v1 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.469150371Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.469169471Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.469189171Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.469210671Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.469228471Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.469279672Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.469304472Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jul 2 07:03:12.469535 containerd[1511]: time="2024-07-02T07:03:12.469323772Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jul 2 07:03:12.470037 containerd[1511]: time="2024-07-02T07:03:12.469341972Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jul 2 07:03:12.470037 containerd[1511]: time="2024-07-02T07:03:12.469358672Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1 Jul 2 07:03:12.470037 containerd[1511]: time="2024-07-02T07:03:12.469468173Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jul 2 07:03:12.470503 containerd[1511]: time="2024-07-02T07:03:12.470483877Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jul 2 07:03:12.470593 containerd[1511]: time="2024-07-02T07:03:12.470579078Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.470672 containerd[1511]: time="2024-07-02T07:03:12.470659678Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jul 2 07:03:12.470768 containerd[1511]: time="2024-07-02T07:03:12.470739579Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jul 2 07:03:12.470910 containerd[1511]: time="2024-07-02T07:03:12.470896979Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.471045 containerd[1511]: time="2024-07-02T07:03:12.471030980Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.471119 containerd[1511]: time="2024-07-02T07:03:12.471107480Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.471181 containerd[1511]: time="2024-07-02T07:03:12.471169581Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." 
type=io.containerd.grpc.v1 Jul 2 07:03:12.471244 containerd[1511]: time="2024-07-02T07:03:12.471232481Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.471374 containerd[1511]: time="2024-07-02T07:03:12.471361682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.471442 containerd[1511]: time="2024-07-02T07:03:12.471430882Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.471505 containerd[1511]: time="2024-07-02T07:03:12.471493382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.471570 containerd[1511]: time="2024-07-02T07:03:12.471558083Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jul 2 07:03:12.471783 containerd[1511]: time="2024-07-02T07:03:12.471765183Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.471870 containerd[1511]: time="2024-07-02T07:03:12.471857084Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.471933 containerd[1511]: time="2024-07-02T07:03:12.471921984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.471995 containerd[1511]: time="2024-07-02T07:03:12.471984285Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.472059 containerd[1511]: time="2024-07-02T07:03:12.472047585Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.472129 containerd[1511]: time="2024-07-02T07:03:12.472117685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.472191 containerd[1511]: time="2024-07-02T07:03:12.472179985Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jul 2 07:03:12.472259 containerd[1511]: time="2024-07-02T07:03:12.472247486Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jul 2 07:03:12.472715 containerd[1511]: time="2024-07-02T07:03:12.472643288Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jul 2 07:03:12.473017 containerd[1511]: time="2024-07-02T07:03:12.473001789Z" level=info msg="Connect containerd service" Jul 2 07:03:12.473118 containerd[1511]: time="2024-07-02T07:03:12.473106390Z" level=info msg="using legacy CRI server" Jul 2 07:03:12.473176 containerd[1511]: time="2024-07-02T07:03:12.473164990Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jul 2 07:03:12.473268 containerd[1511]: time="2024-07-02T07:03:12.473254291Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jul 2 07:03:12.490116 tar[1508]: linux-amd64/LICENSE Jul 2 07:03:12.490545 tar[1508]: linux-amd64/README.md Jul 2 07:03:12.496058 containerd[1511]: time="2024-07-02T07:03:12.495712597Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: 
failed to load cni config" Jul 2 07:03:12.496058 containerd[1511]: time="2024-07-02T07:03:12.495790497Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jul 2 07:03:12.496058 containerd[1511]: time="2024-07-02T07:03:12.495817397Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1 Jul 2 07:03:12.496058 containerd[1511]: time="2024-07-02T07:03:12.495833097Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jul 2 07:03:12.496058 containerd[1511]: time="2024-07-02T07:03:12.495848397Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin" Jul 2 07:03:12.496279 containerd[1511]: time="2024-07-02T07:03:12.496246999Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jul 2 07:03:12.496329 containerd[1511]: time="2024-07-02T07:03:12.496308800Z" level=info msg=serving... address=/run/containerd/containerd.sock Jul 2 07:03:12.496400 containerd[1511]: time="2024-07-02T07:03:12.496370500Z" level=info msg="Start subscribing containerd event" Jul 2 07:03:12.496439 containerd[1511]: time="2024-07-02T07:03:12.496421300Z" level=info msg="Start recovering state" Jul 2 07:03:12.496517 containerd[1511]: time="2024-07-02T07:03:12.496501600Z" level=info msg="Start event monitor" Jul 2 07:03:12.496563 containerd[1511]: time="2024-07-02T07:03:12.496525501Z" level=info msg="Start snapshots syncer" Jul 2 07:03:12.496563 containerd[1511]: time="2024-07-02T07:03:12.496539101Z" level=info msg="Start cni network conf syncer for default" Jul 2 07:03:12.496563 containerd[1511]: time="2024-07-02T07:03:12.496549601Z" level=info msg="Start streaming server" Jul 2 07:03:12.496671 containerd[1511]: time="2024-07-02T07:03:12.496626901Z" level=info msg="containerd successfully booted in 0.162619s" Jul 2 07:03:12.499380 systemd[1]: Started containerd.service - containerd container runtime. Jul 2 07:03:12.507313 sshd_keygen[1515]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jul 2 07:03:12.516484 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jul 2 07:03:12.541624 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jul 2 07:03:12.551184 systemd[1]: Starting issuegen.service - Generate /run/issue... Jul 2 07:03:12.556032 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... Jul 2 07:03:12.567945 systemd[1]: issuegen.service: Deactivated successfully. Jul 2 07:03:12.568163 systemd[1]: Finished issuegen.service - Generate /run/issue. Jul 2 07:03:12.572844 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jul 2 07:03:12.577118 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. Jul 2 07:03:12.588591 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jul 2 07:03:12.600189 systemd[1]: Started getty@tty1.service - Getty on tty1. Jul 2 07:03:12.605245 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jul 2 07:03:12.608694 systemd[1]: Reached target getty.target - Login Prompts. Jul 2 07:03:12.625826 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:03:12.629391 systemd[1]: Reached target multi-user.target - Multi-User System. 
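At this point containerd's CRI plugin logs that /etc/cni/net.d is still empty, so pod networking cannot be set up yet; that is expected on first boot, before any CNI plugin configuration has been dropped into that directory by the cluster tooling. As an illustration only, the Python sketch below shows the general shape of a minimal bridge conflist that would satisfy this check — the file name, network name, and subnet are hypothetical and are not values this node will actually receive.

```python
import json

# Hypothetical example only: a minimal CNI conflist of the kind containerd's CRI plugin
# looks for under /etc/cni/net.d. Network name and subnet are illustrative assumptions.
conflist = {
    "cniVersion": "0.4.0",
    "name": "example-bridge-net",
    "plugins": [
        {
            "type": "bridge",          # relies on the standard CNI bridge plugin in /opt/cni/bin
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.244.0.0/24",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        },
        {"type": "portmap", "capabilities": {"portMappings": True}},
    ],
}

# Print instead of writing to /etc/cni/net.d/10-example.conflist so the sketch is safe to run.
print(json.dumps(conflist, indent=2))
```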
Jul 2 07:03:12.633865 systemd[1]: Starting systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP... Jul 2 07:03:12.645776 systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully. Jul 2 07:03:12.645965 systemd[1]: Finished systemd-update-utmp-runlevel.service - Record Runlevel Change in UTMP. Jul 2 07:03:12.649210 systemd[1]: Startup finished in 787ms (firmware) + 23.070s (loader) + 994ms (kernel) + 9.498s (initrd) + 14.579s (userspace) = 48.930s. Jul 2 07:03:13.028399 login[1609]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 07:03:13.030494 login[1610]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) Jul 2 07:03:13.040424 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jul 2 07:03:13.051876 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jul 2 07:03:13.055956 systemd-logind[1500]: New session 2 of user core. Jul 2 07:03:13.063802 systemd-logind[1500]: New session 1 of user core. Jul 2 07:03:13.070007 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jul 2 07:03:13.074268 systemd[1]: Starting user@500.service - User Manager for UID 500... Jul 2 07:03:13.078454 (systemd)[1621]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:03:13.252296 systemd[1621]: Queued start job for default target default.target. Jul 2 07:03:13.257539 systemd[1621]: Reached target paths.target - Paths. Jul 2 07:03:13.257562 systemd[1621]: Reached target sockets.target - Sockets. Jul 2 07:03:13.257578 systemd[1621]: Reached target timers.target - Timers. Jul 2 07:03:13.257591 systemd[1621]: Reached target basic.target - Basic System. Jul 2 07:03:13.257718 systemd[1]: Started user@500.service - User Manager for UID 500. Jul 2 07:03:13.259387 systemd[1]: Started session-1.scope - Session 1 of User core. Jul 2 07:03:13.260299 systemd[1]: Started session-2.scope - Session 2 of User core. Jul 2 07:03:13.263161 systemd[1621]: Reached target default.target - Main User Target. Jul 2 07:03:13.263238 systemd[1621]: Startup finished in 175ms. Jul 2 07:03:13.275777 kubelet[1613]: E0702 07:03:13.275706 1613 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:03:13.278962 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:03:13.279137 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
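The kubelet failure above (missing /var/lib/kubelet/config.yaml) is the first of many identical ones in this log: the unit exits, systemd schedules a restart on its timer, and the same error recurs roughly every ten seconds. This is the normal picture for a node that has not yet been bootstrapped into a cluster, since that step (for example via kubeadm or the provisioning tooling) is what generates the config file. A small sketch of how the restart cadence could be pulled out of a saved copy of this journal is shown below; the input file name is a hypothetical placeholder.

```python
import re
from datetime import datetime

# Assumed input: this journal saved as plain text (e.g. `journalctl -o short-precise > boot.log`).
LOG_PATH = "boot.log"

# Matches entries such as:
#   Jul 2 07:03:23.530256 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1.
PATTERN = re.compile(
    r"(?P<month>[A-Z][a-z]{2})\s+(?P<day>\d+)\s+(?P<time>\d{2}:\d{2}:\d{2}\.\d+).*"
    r"kubelet\.service: Scheduled restart job, restart counter is at (?P<count>\d+)\."
)

def parse(line):
    match = PATTERN.search(line)
    if not match:
        return None
    # The short journal format omits the year; pick one so the deltas can be computed.
    ts = datetime.strptime(
        f"2024 {match['month']} {match['day']} {match['time']}", "%Y %b %d %H:%M:%S.%f"
    )
    return int(match["count"]), ts

with open(LOG_PATH) as fh:
    restarts = [r for r in (parse(line) for line in fh) if r]

for (prev_count, prev_ts), (count, ts) in zip(restarts, restarts[1:]):
    print(f"restart {count} came {(ts - prev_ts).total_seconds():.1f}s after restart {prev_count}")
```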
Jul 2 07:03:13.629831 waagent[1608]: 2024-07-02T07:03:13.629650Z INFO Daemon Daemon Azure Linux Agent Version: 2.9.1.1 Jul 2 07:03:13.665452 waagent[1608]: 2024-07-02T07:03:13.630101Z INFO Daemon Daemon OS: flatcar 3815.2.5 Jul 2 07:03:13.665452 waagent[1608]: 2024-07-02T07:03:13.631722Z INFO Daemon Daemon Python: 3.11.6 Jul 2 07:03:13.665452 waagent[1608]: 2024-07-02T07:03:13.632369Z INFO Daemon Daemon Run daemon Jul 2 07:03:13.665452 waagent[1608]: 2024-07-02T07:03:13.632698Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='3815.2.5' Jul 2 07:03:13.665452 waagent[1608]: 2024-07-02T07:03:13.633487Z INFO Daemon Daemon Using waagent for provisioning Jul 2 07:03:13.665452 waagent[1608]: 2024-07-02T07:03:13.634584Z INFO Daemon Daemon Activate resource disk Jul 2 07:03:13.665452 waagent[1608]: 2024-07-02T07:03:13.635511Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb Jul 2 07:03:13.665452 waagent[1608]: 2024-07-02T07:03:13.640179Z INFO Daemon Daemon Found device: None Jul 2 07:03:13.665452 waagent[1608]: 2024-07-02T07:03:13.640993Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology Jul 2 07:03:13.665452 waagent[1608]: 2024-07-02T07:03:13.641422Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 Jul 2 07:03:13.665452 waagent[1608]: 2024-07-02T07:03:13.642605Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 07:03:13.665452 waagent[1608]: 2024-07-02T07:03:13.643741Z INFO Daemon Daemon Running default provisioning handler Jul 2 07:03:13.668391 waagent[1608]: 2024-07-02T07:03:13.668297Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 1. Jul 2 07:03:13.675307 waagent[1608]: 2024-07-02T07:03:13.675248Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' Jul 2 07:03:13.684126 waagent[1608]: 2024-07-02T07:03:13.675461Z INFO Daemon Daemon cloud-init is enabled: False Jul 2 07:03:13.684126 waagent[1608]: 2024-07-02T07:03:13.676325Z INFO Daemon Daemon Copying ovf-env.xml Jul 2 07:03:13.779191 waagent[1608]: 2024-07-02T07:03:13.775545Z INFO Daemon Daemon Successfully mounted dvd Jul 2 07:03:13.892863 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. Jul 2 07:03:13.905718 waagent[1608]: 2024-07-02T07:03:13.905633Z INFO Daemon Daemon Detect protocol endpoint Jul 2 07:03:13.929126 waagent[1608]: 2024-07-02T07:03:13.906023Z INFO Daemon Daemon Clean protocol and wireserver endpoint Jul 2 07:03:13.929126 waagent[1608]: 2024-07-02T07:03:13.907310Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler Jul 2 07:03:13.929126 waagent[1608]: 2024-07-02T07:03:13.908403Z INFO Daemon Daemon Test for route to 168.63.129.16 Jul 2 07:03:13.929126 waagent[1608]: 2024-07-02T07:03:13.909568Z INFO Daemon Daemon Route to 168.63.129.16 exists Jul 2 07:03:13.929126 waagent[1608]: 2024-07-02T07:03:13.910463Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 Jul 2 07:03:13.929126 waagent[1608]: 2024-07-02T07:03:13.920993Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 Jul 2 07:03:13.929126 waagent[1608]: 2024-07-02T07:03:13.921981Z INFO Daemon Daemon Wire protocol version:2012-11-30 Jul 2 07:03:13.929126 waagent[1608]: 2024-07-02T07:03:13.922287Z INFO Daemon Daemon Server preferred version:2015-04-05 Jul 2 07:03:14.344167 waagent[1608]: 2024-07-02T07:03:14.344015Z INFO Daemon Daemon Initializing goal state during protocol detection Jul 2 07:03:14.348189 waagent[1608]: 2024-07-02T07:03:14.348120Z INFO Daemon Daemon Forcing an update of the goal state. Jul 2 07:03:14.355349 waagent[1608]: 2024-07-02T07:03:14.355293Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 07:03:14.371981 waagent[1608]: 2024-07-02T07:03:14.371926Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.151 Jul 2 07:03:14.390635 waagent[1608]: 2024-07-02T07:03:14.372587Z INFO Daemon Jul 2 07:03:14.390635 waagent[1608]: 2024-07-02T07:03:14.373715Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 6b83e3d3-094b-438d-9678-0320f92312ac eTag: 8606026594211586687 source: Fabric] Jul 2 07:03:14.390635 waagent[1608]: 2024-07-02T07:03:14.374445Z INFO Daemon The vmSettings originated via Fabric; will ignore them. Jul 2 07:03:14.390635 waagent[1608]: 2024-07-02T07:03:14.375141Z INFO Daemon Jul 2 07:03:14.390635 waagent[1608]: 2024-07-02T07:03:14.375596Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] Jul 2 07:03:14.390635 waagent[1608]: 2024-07-02T07:03:14.380005Z INFO Daemon Daemon Downloading artifacts profile blob Jul 2 07:03:14.461370 waagent[1608]: 2024-07-02T07:03:14.461285Z INFO Daemon Downloaded certificate {'thumbprint': 'EA8732612F37C88925BE96585852235F35AB169C', 'hasPrivateKey': False} Jul 2 07:03:14.465730 waagent[1608]: 2024-07-02T07:03:14.465669Z INFO Daemon Downloaded certificate {'thumbprint': 'A7A6032F94308DF9026CE8499000E6E16ACB1AD0', 'hasPrivateKey': True} Jul 2 07:03:14.471310 waagent[1608]: 2024-07-02T07:03:14.466270Z INFO Daemon Fetch goal state completed Jul 2 07:03:14.475264 waagent[1608]: 2024-07-02T07:03:14.475212Z INFO Daemon Daemon Starting provisioning Jul 2 07:03:14.482228 waagent[1608]: 2024-07-02T07:03:14.475466Z INFO Daemon Daemon Handle ovf-env.xml. Jul 2 07:03:14.482228 waagent[1608]: 2024-07-02T07:03:14.476495Z INFO Daemon Daemon Set hostname [ci-3815.2.5-a-54ab6c74aa] Jul 2 07:03:14.534555 waagent[1608]: 2024-07-02T07:03:14.534462Z INFO Daemon Daemon Publish hostname [ci-3815.2.5-a-54ab6c74aa] Jul 2 07:03:14.543376 waagent[1608]: 2024-07-02T07:03:14.535312Z INFO Daemon Daemon Examine /proc/net/route for primary interface Jul 2 07:03:14.543376 waagent[1608]: 2024-07-02T07:03:14.536196Z INFO Daemon Daemon Primary interface is [eth0] Jul 2 07:03:14.559980 systemd-networkd[1273]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jul 2 07:03:14.559990 systemd-networkd[1273]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jul 2 07:03:14.560041 systemd-networkd[1273]: eth0: DHCP lease lost Jul 2 07:03:14.561230 waagent[1608]: 2024-07-02T07:03:14.561149Z INFO Daemon Daemon Create user account if not exists Jul 2 07:03:14.577843 waagent[1608]: 2024-07-02T07:03:14.561546Z INFO Daemon Daemon User core already exists, skip useradd Jul 2 07:03:14.577843 waagent[1608]: 2024-07-02T07:03:14.562481Z INFO Daemon Daemon Configure sudoer Jul 2 07:03:14.577843 waagent[1608]: 2024-07-02T07:03:14.563623Z INFO Daemon Daemon Configure sshd Jul 2 07:03:14.577843 waagent[1608]: 2024-07-02T07:03:14.564452Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. Jul 2 07:03:14.577843 waagent[1608]: 2024-07-02T07:03:14.565246Z INFO Daemon Daemon Deploy ssh public key. Jul 2 07:03:14.578831 systemd-networkd[1273]: eth0: DHCPv6 lease lost Jul 2 07:03:14.625833 systemd-networkd[1273]: eth0: DHCPv4 address 10.200.8.10/24, gateway 10.200.8.1 acquired from 168.63.129.16 Jul 2 07:03:23.530256 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jul 2 07:03:23.530592 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:03:23.539162 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:03:23.630782 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:03:24.156806 kubelet[1671]: E0702 07:03:24.156728 1671 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:03:24.160011 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:03:24.160193 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:03:34.411242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jul 2 07:03:34.411558 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:03:34.421186 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:03:34.557876 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:03:35.057332 kubelet[1681]: E0702 07:03:35.057265 1681 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:03:35.059161 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:03:35.059331 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:03:44.828143 waagent[1608]: 2024-07-02T07:03:44.828059Z INFO Daemon Daemon Provisioning complete Jul 2 07:03:44.842826 waagent[1608]: 2024-07-02T07:03:44.842768Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping Jul 2 07:03:44.850069 waagent[1608]: 2024-07-02T07:03:44.843087Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
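systemd-networkd reports the DHCPv4 lease 10.200.8.10/24 with gateway 10.200.8.1, handed out by 168.63.129.16 (the same wireserver address waagent probed a moment earlier). A tiny standard-library check, shown below, confirms the derived network and broadcast values that reappear later in the agent's interface listing.

```python
import ipaddress

# Lease reported by systemd-networkd above: 10.200.8.10/24 via 10.200.8.1.
iface = ipaddress.ip_interface("10.200.8.10/24")
print(iface.network)                    # 10.200.8.0/24  (the on-link route)
print(iface.network.broadcast_address)  # 10.200.8.255   (the brd value in the later `ip -4 address` dump)
```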
Jul 2 07:03:44.850069 waagent[1608]: 2024-07-02T07:03:44.844037Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.9.1.1 is the most current agent Jul 2 07:03:44.968682 waagent[1687]: 2024-07-02T07:03:44.968584Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.9.1.1) Jul 2 07:03:44.969106 waagent[1687]: 2024-07-02T07:03:44.968770Z INFO ExtHandler ExtHandler OS: flatcar 3815.2.5 Jul 2 07:03:44.969106 waagent[1687]: 2024-07-02T07:03:44.968868Z INFO ExtHandler ExtHandler Python: 3.11.6 Jul 2 07:03:45.007449 waagent[1687]: 2024-07-02T07:03:45.007359Z INFO ExtHandler ExtHandler Distro: flatcar-3815.2.5; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.6; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; Jul 2 07:03:45.007679 waagent[1687]: 2024-07-02T07:03:45.007631Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:03:45.007795 waagent[1687]: 2024-07-02T07:03:45.007735Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:03:45.017500 waagent[1687]: 2024-07-02T07:03:45.017430Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] Jul 2 07:03:45.028988 waagent[1687]: 2024-07-02T07:03:45.028934Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.151 Jul 2 07:03:45.029459 waagent[1687]: 2024-07-02T07:03:45.029406Z INFO ExtHandler Jul 2 07:03:45.029548 waagent[1687]: 2024-07-02T07:03:45.029503Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: 253c531a-9354-44b1-a429-0d412d243af5 eTag: 8606026594211586687 source: Fabric] Jul 2 07:03:45.029887 waagent[1687]: 2024-07-02T07:03:45.029839Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. Jul 2 07:03:45.030464 waagent[1687]: 2024-07-02T07:03:45.030411Z INFO ExtHandler Jul 2 07:03:45.030543 waagent[1687]: 2024-07-02T07:03:45.030500Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] Jul 2 07:03:45.034439 waagent[1687]: 2024-07-02T07:03:45.034398Z INFO ExtHandler ExtHandler Downloading artifacts profile blob Jul 2 07:03:45.123957 waagent[1687]: 2024-07-02T07:03:45.123829Z INFO ExtHandler Downloaded certificate {'thumbprint': 'EA8732612F37C88925BE96585852235F35AB169C', 'hasPrivateKey': False} Jul 2 07:03:45.124509 waagent[1687]: 2024-07-02T07:03:45.124450Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A7A6032F94308DF9026CE8499000E6E16ACB1AD0', 'hasPrivateKey': True} Jul 2 07:03:45.125005 waagent[1687]: 2024-07-02T07:03:45.124954Z INFO ExtHandler Fetch goal state completed Jul 2 07:03:45.142184 waagent[1687]: 2024-07-02T07:03:45.142117Z INFO ExtHandler ExtHandler WALinuxAgent-2.9.1.1 running as process 1687 Jul 2 07:03:45.142342 waagent[1687]: 2024-07-02T07:03:45.142294Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** Jul 2 07:03:45.143930 waagent[1687]: 2024-07-02T07:03:45.143879Z INFO ExtHandler ExtHandler Cgroup monitoring is not supported on ['flatcar', '3815.2.5', '', 'Flatcar Container Linux by Kinvolk'] Jul 2 07:03:45.144321 waagent[1687]: 2024-07-02T07:03:45.144276Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules Jul 2 07:03:45.177438 waagent[1687]: 2024-07-02T07:03:45.177382Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service Jul 2 07:03:45.177949 waagent[1687]: 2024-07-02T07:03:45.177875Z INFO ExtHandler ExtHandler Successfully updated the Binary file 
/var/lib/waagent/waagent-network-setup.py for firewall setup Jul 2 07:03:45.185590 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jul 2 07:03:45.185920 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:03:45.192250 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:03:45.196277 waagent[1687]: 2024-07-02T07:03:45.194155Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now Jul 2 07:03:45.205407 systemd[1]: Reloading. Jul 2 07:03:45.434988 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:03:45.522245 waagent[1687]: 2024-07-02T07:03:45.522151Z INFO ExtHandler ExtHandler Executing systemctl daemon-reload for setting up waagent-network-setup.service Jul 2 07:03:45.533966 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:03:45.534683 systemd[1]: Reloading. Jul 2 07:03:45.724612 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:03:45.810809 waagent[1687]: 2024-07-02T07:03:45.810691Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service Jul 2 07:03:45.810981 waagent[1687]: 2024-07-02T07:03:45.810930Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully Jul 2 07:03:46.791853 kubelet[1785]: E0702 07:03:46.791796 1785 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:03:46.793609 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:03:46.793803 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:03:47.219177 waagent[1687]: 2024-07-02T07:03:47.219021Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. Jul 2 07:03:47.219951 waagent[1687]: 2024-07-02T07:03:47.219881Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: configuration enabled [True], cgroups enabled [False], python supported: [True] Jul 2 07:03:47.220868 waagent[1687]: 2024-07-02T07:03:47.220801Z INFO ExtHandler ExtHandler Starting env monitor service. Jul 2 07:03:47.221417 waagent[1687]: 2024-07-02T07:03:47.221349Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. 
Jul 2 07:03:47.221567 waagent[1687]: 2024-07-02T07:03:47.221511Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:03:47.221704 waagent[1687]: 2024-07-02T07:03:47.221654Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:03:47.221965 waagent[1687]: 2024-07-02T07:03:47.221915Z INFO EnvHandler ExtHandler Configure routes Jul 2 07:03:47.222114 waagent[1687]: 2024-07-02T07:03:47.222064Z INFO EnvHandler ExtHandler Gateway:None Jul 2 07:03:47.222233 waagent[1687]: 2024-07-02T07:03:47.222183Z INFO EnvHandler ExtHandler Routes:None Jul 2 07:03:47.222888 waagent[1687]: 2024-07-02T07:03:47.222824Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread Jul 2 07:03:47.222989 waagent[1687]: 2024-07-02T07:03:47.222914Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file Jul 2 07:03:47.223126 waagent[1687]: 2024-07-02T07:03:47.223074Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 Jul 2 07:03:47.223419 waagent[1687]: 2024-07-02T07:03:47.223362Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. Jul 2 07:03:47.223595 waagent[1687]: 2024-07-02T07:03:47.223534Z INFO ExtHandler ExtHandler Start Extension Telemetry service. Jul 2 07:03:47.224002 waagent[1687]: 2024-07-02T07:03:47.223946Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: Jul 2 07:03:47.224002 waagent[1687]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT Jul 2 07:03:47.224002 waagent[1687]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 Jul 2 07:03:47.224002 waagent[1687]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 Jul 2 07:03:47.224002 waagent[1687]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:03:47.224002 waagent[1687]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:03:47.224002 waagent[1687]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 Jul 2 07:03:47.225194 waagent[1687]: 2024-07-02T07:03:47.225135Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. Jul 2 07:03:47.225194 waagent[1687]: 2024-07-02T07:03:47.225051Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True Jul 2 07:03:47.226514 waagent[1687]: 2024-07-02T07:03:47.226450Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread Jul 2 07:03:47.236929 waagent[1687]: 2024-07-02T07:03:47.236886Z INFO ExtHandler ExtHandler Jul 2 07:03:47.237115 waagent[1687]: 2024-07-02T07:03:47.237079Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 14c49d3f-6e08-4603-b923-3bae15bdce1a correlation c1e1fa5c-54a9-4785-a3b3-87ae7d86c321 created: 2024-07-02T07:02:11.691460Z] Jul 2 07:03:47.237651 waagent[1687]: 2024-07-02T07:03:47.237605Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. 
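The routing table MonitorHandler dumps above comes straight from /proc/net/route, where destinations, gateways, and masks are 32-bit values printed as little-endian hex — which is why the gateway appears as 0108C80A rather than 10.200.8.1. The short decoding sketch below (standard library only) turns those rows back into dotted-quad form: a default route via 10.200.8.1, the on-link 10.200.8.0/24 subnet, and host routes for 168.63.129.16 (the Azure wireserver) and 169.254.169.254 (the instance metadata endpoint).

```python
import socket
import struct

def decode(hexval: str) -> str:
    """Convert a /proc/net/route field (little-endian hex) to dotted-quad form."""
    return socket.inet_ntoa(struct.pack("<L", int(hexval, 16)))

# Rows copied from the MonitorHandler dump above: (destination, gateway, mask).
rows = [
    ("00000000", "0108C80A", "00000000"),  # default route
    ("0008C80A", "00000000", "00FFFFFF"),  # on-link subnet
    ("0108C80A", "00000000", "FFFFFFFF"),  # gateway host route
    ("10813FA8", "0108C80A", "FFFFFFFF"),  # wireserver host route
    ("FEA9FEA9", "0108C80A", "FFFFFFFF"),  # link-local metadata host route
]

for dst, gw, mask in rows:
    print(f"dst={decode(dst):<15} gw={decode(gw):<12} mask={decode(mask)}")
# Prints 0.0.0.0 via 10.200.8.1, 10.200.8.0 / 255.255.255.0 on-link,
# and host routes for 10.200.8.1, 168.63.129.16 and 169.254.169.254.
```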
Jul 2 07:03:47.239375 waagent[1687]: 2024-07-02T07:03:47.239334Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 2 ms] Jul 2 07:03:47.278383 waagent[1687]: 2024-07-02T07:03:47.278327Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.9.1.1 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: 8CCADABB-9AF0-423A-8E89-388F62A8627D;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0] Jul 2 07:03:47.306128 waagent[1687]: 2024-07-02T07:03:47.306060Z INFO EnvHandler ExtHandler Successfully added Azure fabric firewall rules. Current Firewall rules: Jul 2 07:03:47.306128 waagent[1687]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:03:47.306128 waagent[1687]: pkts bytes target prot opt in out source destination Jul 2 07:03:47.306128 waagent[1687]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:03:47.306128 waagent[1687]: pkts bytes target prot opt in out source destination Jul 2 07:03:47.306128 waagent[1687]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:03:47.306128 waagent[1687]: pkts bytes target prot opt in out source destination Jul 2 07:03:47.306128 waagent[1687]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 07:03:47.306128 waagent[1687]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 07:03:47.306128 waagent[1687]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 07:03:47.309362 waagent[1687]: 2024-07-02T07:03:47.309306Z INFO EnvHandler ExtHandler Current Firewall rules: Jul 2 07:03:47.309362 waagent[1687]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:03:47.309362 waagent[1687]: pkts bytes target prot opt in out source destination Jul 2 07:03:47.309362 waagent[1687]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:03:47.309362 waagent[1687]: pkts bytes target prot opt in out source destination Jul 2 07:03:47.309362 waagent[1687]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) Jul 2 07:03:47.309362 waagent[1687]: pkts bytes target prot opt in out source destination Jul 2 07:03:47.309362 waagent[1687]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 Jul 2 07:03:47.309362 waagent[1687]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 Jul 2 07:03:47.309362 waagent[1687]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW Jul 2 07:03:47.309788 waagent[1687]: 2024-07-02T07:03:47.309607Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 Jul 2 07:03:47.314727 waagent[1687]: 2024-07-02T07:03:47.314672Z INFO MonitorHandler ExtHandler Network interfaces: Jul 2 07:03:47.314727 waagent[1687]: Executing ['ip', '-a', '-o', 'link']: Jul 2 07:03:47.314727 waagent[1687]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 Jul 2 07:03:47.314727 waagent[1687]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:69:55:c9 brd ff:ff:ff:ff:ff:ff Jul 2 07:03:47.314727 waagent[1687]: 3: enP21841s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 00:0d:3a:69:55:c9 brd ff:ff:ff:ff:ff:ff\ altname enP21841p0s2 Jul 2 07:03:47.314727 waagent[1687]: Executing ['ip', '-4', '-a', '-o', 'address']: Jul 2 07:03:47.314727 waagent[1687]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever Jul 2 07:03:47.314727 waagent[1687]: 2: eth0 inet 10.200.8.10/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft 
forever preferred_lft forever Jul 2 07:03:47.314727 waagent[1687]: Executing ['ip', '-6', '-a', '-o', 'address']: Jul 2 07:03:47.314727 waagent[1687]: 1: lo inet6 ::1/128 scope host \ valid_lft forever preferred_lft forever Jul 2 07:03:47.314727 waagent[1687]: 2: eth0 inet6 fe80::20d:3aff:fe69:55c9/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever Jul 2 07:03:51.316161 kernel: hv_balloon: Max. dynamic memory size: 8192 MB Jul 2 07:03:56.586954 update_engine[1501]: I0702 07:03:56.586848 1501 update_attempter.cc:509] Updating boot flags... Jul 2 07:03:56.652842 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1912) Jul 2 07:03:56.743769 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 35 scanned by (udev-worker) (1911) Jul 2 07:03:56.798933 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jul 2 07:03:56.799242 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:03:56.808172 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:03:56.896899 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:03:57.392565 kubelet[1970]: E0702 07:03:57.392501 1970 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:03:57.394309 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:03:57.394479 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:04:07.521512 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jul 2 07:04:07.521880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:07.529200 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:04:07.643968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:07.687317 kubelet[1983]: E0702 07:04:07.687271 1983 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:04:07.689195 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:04:07.689365 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:04:17.771601 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. Jul 2 07:04:17.771965 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:17.782311 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:04:17.875968 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
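The "Azure fabric firewall rules" EnvHandler printed a little earlier amount to three OUTPUT-chain entries for 168.63.129.16: accept TCP to port 53, accept any TCP owned by UID 0, and drop new or invalid TCP connections from everything else. As an illustration only, the sketch below reconstructs them as plain iptables invocations — the chain and match options are inferred from the counters listing, not taken from the agent's own code — and prints them rather than applying them.

```python
WIRESERVER = "168.63.129.16"

# Reconstruction of the three rules shown in the EnvHandler dump above. The match criteria
# mirror the listing (tcp dpt:53, owner UID 0, ctstate INVALID,NEW); whether the agent uses
# -A or -I, or a non-default table, is not visible in this log.
rules = [
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
    ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
]

for rule in rules:
    print("iptables", " ".join(rule))  # dry run: print the commands instead of invoking iptables
```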
Jul 2 07:04:17.919046 kubelet[1993]: E0702 07:04:17.918990 1993 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:04:17.920793 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:04:17.920947 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:04:18.694873 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jul 2 07:04:18.702313 systemd[1]: Started sshd@0-10.200.8.10:22-10.200.16.10:43626.service - OpenSSH per-connection server daemon (10.200.16.10:43626). Jul 2 07:04:20.131375 sshd[2000]: Accepted publickey for core from 10.200.16.10 port 43626 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:04:20.133140 sshd[2000]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:04:20.137645 systemd-logind[1500]: New session 3 of user core. Jul 2 07:04:20.141946 systemd[1]: Started session-3.scope - Session 3 of User core. Jul 2 07:04:20.698563 systemd[1]: Started sshd@1-10.200.8.10:22-10.200.16.10:43628.service - OpenSSH per-connection server daemon (10.200.16.10:43628). Jul 2 07:04:21.347177 sshd[2005]: Accepted publickey for core from 10.200.16.10 port 43628 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:04:21.348887 sshd[2005]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:04:21.353642 systemd-logind[1500]: New session 4 of user core. Jul 2 07:04:21.361931 systemd[1]: Started session-4.scope - Session 4 of User core. Jul 2 07:04:21.806932 sshd[2005]: pam_unix(sshd:session): session closed for user core Jul 2 07:04:21.809968 systemd[1]: sshd@1-10.200.8.10:22-10.200.16.10:43628.service: Deactivated successfully. Jul 2 07:04:21.810816 systemd[1]: session-4.scope: Deactivated successfully. Jul 2 07:04:21.811478 systemd-logind[1500]: Session 4 logged out. Waiting for processes to exit. Jul 2 07:04:21.812280 systemd-logind[1500]: Removed session 4. Jul 2 07:04:21.925333 systemd[1]: Started sshd@2-10.200.8.10:22-10.200.16.10:43632.service - OpenSSH per-connection server daemon (10.200.16.10:43632). Jul 2 07:04:22.561634 sshd[2011]: Accepted publickey for core from 10.200.16.10 port 43632 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:04:22.563508 sshd[2011]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:04:22.568326 systemd-logind[1500]: New session 5 of user core. Jul 2 07:04:22.575939 systemd[1]: Started session-5.scope - Session 5 of User core. Jul 2 07:04:23.012840 sshd[2011]: pam_unix(sshd:session): session closed for user core Jul 2 07:04:23.015908 systemd[1]: sshd@2-10.200.8.10:22-10.200.16.10:43632.service: Deactivated successfully. Jul 2 07:04:23.016720 systemd[1]: session-5.scope: Deactivated successfully. Jul 2 07:04:23.017352 systemd-logind[1500]: Session 5 logged out. Waiting for processes to exit. Jul 2 07:04:23.018170 systemd-logind[1500]: Removed session 5. Jul 2 07:04:23.128284 systemd[1]: Started sshd@3-10.200.8.10:22-10.200.16.10:43642.service - OpenSSH per-connection server daemon (10.200.16.10:43642). 
Jul 2 07:04:23.769358 sshd[2017]: Accepted publickey for core from 10.200.16.10 port 43642 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:04:23.771075 sshd[2017]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:04:23.776521 systemd-logind[1500]: New session 6 of user core. Jul 2 07:04:23.786923 systemd[1]: Started session-6.scope - Session 6 of User core. Jul 2 07:04:24.229739 sshd[2017]: pam_unix(sshd:session): session closed for user core Jul 2 07:04:24.233172 systemd[1]: sshd@3-10.200.8.10:22-10.200.16.10:43642.service: Deactivated successfully. Jul 2 07:04:24.234147 systemd[1]: session-6.scope: Deactivated successfully. Jul 2 07:04:24.234986 systemd-logind[1500]: Session 6 logged out. Waiting for processes to exit. Jul 2 07:04:24.235939 systemd-logind[1500]: Removed session 6. Jul 2 07:04:24.353416 systemd[1]: Started sshd@4-10.200.8.10:22-10.200.16.10:43652.service - OpenSSH per-connection server daemon (10.200.16.10:43652). Jul 2 07:04:24.995741 sshd[2023]: Accepted publickey for core from 10.200.16.10 port 43652 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:04:24.997476 sshd[2023]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:04:25.002725 systemd-logind[1500]: New session 7 of user core. Jul 2 07:04:25.008932 systemd[1]: Started session-7.scope - Session 7 of User core. Jul 2 07:04:25.459957 sudo[2026]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jul 2 07:04:25.460389 sudo[2026]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=500) Jul 2 07:04:25.924344 systemd[1]: Starting docker.service - Docker Application Container Engine... Jul 2 07:04:26.998773 dockerd[2035]: time="2024-07-02T07:04:26.998692510Z" level=info msg="Starting up" Jul 2 07:04:27.055847 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1025876877-merged.mount: Deactivated successfully. Jul 2 07:04:27.152620 dockerd[2035]: time="2024-07-02T07:04:27.152572129Z" level=info msg="Loading containers: start." Jul 2 07:04:27.343770 kernel: Initializing XFRM netlink socket Jul 2 07:04:27.461853 systemd-networkd[1273]: docker0: Link UP Jul 2 07:04:27.480465 dockerd[2035]: time="2024-07-02T07:04:27.480419948Z" level=info msg="Loading containers: done." Jul 2 07:04:27.932356 dockerd[2035]: time="2024-07-02T07:04:27.932299614Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jul 2 07:04:27.932582 dockerd[2035]: time="2024-07-02T07:04:27.932557329Z" level=info msg="Docker daemon" commit=fca702de7f71362c8d103073c7e4a1d0a467fadd graphdriver=overlay2 version=24.0.9 Jul 2 07:04:27.932711 dockerd[2035]: time="2024-07-02T07:04:27.932689137Z" level=info msg="Daemon has completed initialization" Jul 2 07:04:27.981809 dockerd[2035]: time="2024-07-02T07:04:27.981721141Z" level=info msg="API listen on /run/docker.sock" Jul 2 07:04:27.982523 systemd[1]: Started docker.service - Docker Application Container Engine. Jul 2 07:04:27.983640 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 7. Jul 2 07:04:27.983880 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:27.988217 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:04:28.159583 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jul 2 07:04:28.204922 kubelet[2158]: E0702 07:04:28.204804 2158 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:04:28.206977 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:04:28.207165 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:04:30.439269 containerd[1511]: time="2024-07-02T07:04:30.439215654Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\"" Jul 2 07:04:31.133486 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2158830664.mount: Deactivated successfully. Jul 2 07:04:33.707627 containerd[1511]: time="2024-07-02T07:04:33.707565669Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:33.709512 containerd[1511]: time="2024-07-02T07:04:33.709455565Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.29.6: active requests=0, bytes read=35235845" Jul 2 07:04:33.714350 containerd[1511]: time="2024-07-02T07:04:33.714311311Z" level=info msg="ImageCreate event name:\"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:33.718731 containerd[1511]: time="2024-07-02T07:04:33.718696733Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-apiserver:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:33.723165 containerd[1511]: time="2024-07-02T07:04:33.723125157Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:33.724236 containerd[1511]: time="2024-07-02T07:04:33.724196511Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.29.6\" with image id \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\", repo tag \"registry.k8s.io/kube-apiserver:v1.29.6\", repo digest \"registry.k8s.io/kube-apiserver@sha256:f4d993b3d73cc0d59558be584b5b40785b4a96874bc76873b69d1dd818485e70\", size \"35232637\" in 3.284934756s" Jul 2 07:04:33.724388 containerd[1511]: time="2024-07-02T07:04:33.724359520Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.29.6\" returns image reference \"sha256:3af2ab51e136465590d968a2052e02e180fc7967a03724b269c1337e8f09d36f\"" Jul 2 07:04:33.746347 containerd[1511]: time="2024-07-02T07:04:33.746305231Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\"" Jul 2 07:04:36.215599 containerd[1511]: time="2024-07-02T07:04:36.215539679Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:36.217830 containerd[1511]: time="2024-07-02T07:04:36.217743883Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.29.6: active requests=0, bytes read=32069755" Jul 2 07:04:36.221054 containerd[1511]: time="2024-07-02T07:04:36.221018636Z" level=info msg="ImageCreate event name:\"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" 
Jul 2 07:04:36.225123 containerd[1511]: time="2024-07-02T07:04:36.225090327Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-controller-manager:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:36.233145 containerd[1511]: time="2024-07-02T07:04:36.233114404Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:36.234232 containerd[1511]: time="2024-07-02T07:04:36.234188854Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.29.6\" with image id \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\", repo tag \"registry.k8s.io/kube-controller-manager:v1.29.6\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:692fc3f88a60b3afc76492ad347306d34042000f56f230959e9367fd59c48b1e\", size \"33590639\" in 2.487840222s" Jul 2 07:04:36.234319 containerd[1511]: time="2024-07-02T07:04:36.234237356Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.29.6\" returns image reference \"sha256:083b81fc09e858d3e0d4b42f567a9d44a2232b60bac396a94cbdd7ce1098235e\"" Jul 2 07:04:36.256128 containerd[1511]: time="2024-07-02T07:04:36.256096382Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\"" Jul 2 07:04:37.915430 containerd[1511]: time="2024-07-02T07:04:37.915311881Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:37.917584 containerd[1511]: time="2024-07-02T07:04:37.917530282Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.29.6: active requests=0, bytes read=17153811" Jul 2 07:04:37.922857 containerd[1511]: time="2024-07-02T07:04:37.922823724Z" level=info msg="ImageCreate event name:\"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:37.927620 containerd[1511]: time="2024-07-02T07:04:37.927589242Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-scheduler:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:37.935325 containerd[1511]: time="2024-07-02T07:04:37.935294295Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:37.936326 containerd[1511]: time="2024-07-02T07:04:37.936285240Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.29.6\" with image id \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\", repo tag \"registry.k8s.io/kube-scheduler:v1.29.6\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b91a4e45debd0d5336d9f533aefdf47d4b39b24071feb459e521709b9e4ec24f\", size \"18674713\" in 1.680151557s" Jul 2 07:04:37.936413 containerd[1511]: time="2024-07-02T07:04:37.936331242Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.29.6\" returns image reference \"sha256:49d9b8328a8fda6ebca6b3226c6d722d92ec7adffff18668511a88058444cf15\"" Jul 2 07:04:37.956936 containerd[1511]: time="2024-07-02T07:04:37.956899883Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\"" Jul 2 07:04:38.272006 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 8. 
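containerd's "Pulled image" messages carry both the image size and the pull duration, so the effective transfer rate can be read off directly; for the two pulls above it works out to roughly 10 and 13 MiB/s. A small arithmetic sketch using the figures from the log:

```python
# Figures taken from the containerd "Pulled image ... size ... in ..." entries above.
pulls = {
    "registry.k8s.io/kube-apiserver:v1.29.6": (35_232_637, 3.284934756),
    "registry.k8s.io/kube-controller-manager:v1.29.6": (33_590_639, 2.487840222),
}

for image, (size_bytes, seconds) in pulls.items():
    rate_mib_s = size_bytes / seconds / (1024 * 1024)
    print(f"{image}: {rate_mib_s:.1f} MiB/s")
# kube-apiserver comes out at ~10.2 MiB/s and kube-controller-manager at ~12.9 MiB/s.
```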
Jul 2 07:04:38.272315 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:38.279240 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:04:38.371474 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:38.924847 kubelet[2255]: E0702 07:04:38.924789 2255 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:04:38.926522 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:04:38.926693 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:04:42.193419 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1498042696.mount: Deactivated successfully. Jul 2 07:04:42.656778 containerd[1511]: time="2024-07-02T07:04:42.656711129Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:42.658614 containerd[1511]: time="2024-07-02T07:04:42.658550103Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.29.6: active requests=0, bytes read=28409342" Jul 2 07:04:42.662329 containerd[1511]: time="2024-07-02T07:04:42.662290455Z" level=info msg="ImageCreate event name:\"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:42.666111 containerd[1511]: time="2024-07-02T07:04:42.666077608Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/kube-proxy:v1.29.6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:42.669046 containerd[1511]: time="2024-07-02T07:04:42.669015027Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:42.669694 containerd[1511]: time="2024-07-02T07:04:42.669644652Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.29.6\" with image id \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\", repo tag \"registry.k8s.io/kube-proxy:v1.29.6\", repo digest \"registry.k8s.io/kube-proxy@sha256:88bacb3e1d6c0c37c6da95c6d6b8e30531d0b4d0ab540cc290b0af51fbfebd90\", size \"28408353\" in 4.712704668s" Jul 2 07:04:42.669819 containerd[1511]: time="2024-07-02T07:04:42.669699754Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.29.6\" returns image reference \"sha256:9c49592198fa15b509fe4ee4a538067866776e325d6dd33c77ad6647e1d3aac9\"" Jul 2 07:04:42.691149 containerd[1511]: time="2024-07-02T07:04:42.691114521Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" Jul 2 07:04:43.250326 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount490463879.mount: Deactivated successfully. 
Jul 2 07:04:44.780973 containerd[1511]: time="2024-07-02T07:04:44.780862518Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:44.785165 containerd[1511]: time="2024-07-02T07:04:44.785110581Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" Jul 2 07:04:44.789859 containerd[1511]: time="2024-07-02T07:04:44.789827663Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:44.794927 containerd[1511]: time="2024-07-02T07:04:44.794893459Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:44.800014 containerd[1511]: time="2024-07-02T07:04:44.799981055Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:44.801068 containerd[1511]: time="2024-07-02T07:04:44.801028895Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 2.109839172s" Jul 2 07:04:44.801151 containerd[1511]: time="2024-07-02T07:04:44.801075397Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" Jul 2 07:04:44.822217 containerd[1511]: time="2024-07-02T07:04:44.822178111Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" Jul 2 07:04:45.494669 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2757783666.mount: Deactivated successfully. 
Jul 2 07:04:45.515789 containerd[1511]: time="2024-07-02T07:04:45.515725004Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:45.517828 containerd[1511]: time="2024-07-02T07:04:45.517769881Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" Jul 2 07:04:45.523180 containerd[1511]: time="2024-07-02T07:04:45.523142983Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:45.526511 containerd[1511]: time="2024-07-02T07:04:45.526478509Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:45.531110 containerd[1511]: time="2024-07-02T07:04:45.531076982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:45.531858 containerd[1511]: time="2024-07-02T07:04:45.531814510Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 709.597197ms" Jul 2 07:04:45.531962 containerd[1511]: time="2024-07-02T07:04:45.531863911Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" Jul 2 07:04:45.553362 containerd[1511]: time="2024-07-02T07:04:45.553323419Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\"" Jul 2 07:04:49.021490 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 9. Jul 2 07:04:49.021836 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:49.029329 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:04:50.661536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:51.186293 kubelet[2335]: E0702 07:04:51.186224 2335 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jul 2 07:04:51.188061 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jul 2 07:04:51.188231 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jul 2 07:04:52.629642 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1441333558.mount: Deactivated successfully. 
Jul 2 07:04:55.726282 containerd[1511]: time="2024-07-02T07:04:55.726221677Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:55.728379 containerd[1511]: time="2024-07-02T07:04:55.728320940Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.10-0: active requests=0, bytes read=56651633" Jul 2 07:04:55.731067 containerd[1511]: time="2024-07-02T07:04:55.731004321Z" level=info msg="ImageCreate event name:\"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:55.736658 containerd[1511]: time="2024-07-02T07:04:55.736611889Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/etcd:3.5.10-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:55.740774 containerd[1511]: time="2024-07-02T07:04:55.740723312Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:04:55.741886 containerd[1511]: time="2024-07-02T07:04:55.741847446Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.10-0\" with image id \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\", repo tag \"registry.k8s.io/etcd:3.5.10-0\", repo digest \"registry.k8s.io/etcd@sha256:22f892d7672adc0b9c86df67792afdb8b2dc08880f49f669eaaa59c47d7908c2\", size \"56649232\" in 10.188436724s" Jul 2 07:04:55.742041 containerd[1511]: time="2024-07-02T07:04:55.742016051Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.10-0\" returns image reference \"sha256:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7\"" Jul 2 07:04:58.791651 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:58.800278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:04:58.830057 systemd[1]: Reloading. Jul 2 07:04:59.050092 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:04:59.150658 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:59.156557 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:04:59.157653 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:04:59.157929 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:04:59.159914 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:05:00.299489 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:05:00.351610 kubelet[2542]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:05:00.352094 kubelet[2542]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:05:00.352162 kubelet[2542]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:05:00.352390 kubelet[2542]: I0702 07:05:00.352330 2542 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:05:01.976338 kubelet[2542]: I0702 07:05:01.976295 2542 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 07:05:01.976338 kubelet[2542]: I0702 07:05:01.976329 2542 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:05:01.976937 kubelet[2542]: I0702 07:05:01.976688 2542 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 07:05:01.995957 kubelet[2542]: E0702 07:05:01.995925 2542 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:01.997170 kubelet[2542]: I0702 07:05:01.997121 2542 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:05:02.005819 kubelet[2542]: I0702 07:05:02.005794 2542 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:05:02.007077 kubelet[2542]: I0702 07:05:02.007049 2542 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:05:02.007560 kubelet[2542]: I0702 07:05:02.007530 2542 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:05:02.011184 kubelet[2542]: I0702 07:05:02.011155 2542 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:05:02.011264 kubelet[2542]: I0702 07:05:02.011191 2542 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:05:02.011334 kubelet[2542]: I0702 07:05:02.011319 2542 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:05:02.011457 kubelet[2542]: I0702 07:05:02.011444 2542 
kubelet.go:396] "Attempting to sync node with API server" Jul 2 07:05:02.011507 kubelet[2542]: I0702 07:05:02.011469 2542 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:05:02.011507 kubelet[2542]: I0702 07:05:02.011501 2542 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:05:02.011581 kubelet[2542]: I0702 07:05:02.011520 2542 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:05:02.014843 kubelet[2542]: W0702 07:05:02.014798 2542 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-54ab6c74aa&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:02.014974 kubelet[2542]: E0702 07:05:02.014962 2542 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-54ab6c74aa&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:02.015142 kubelet[2542]: I0702 07:05:02.015130 2542 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 07:05:02.018671 kubelet[2542]: I0702 07:05:02.018651 2542 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:05:02.019845 kubelet[2542]: W0702 07:05:02.019825 2542 probe.go:268] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jul 2 07:05:02.021680 kubelet[2542]: W0702 07:05:02.021634 2542 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:02.021782 kubelet[2542]: E0702 07:05:02.021689 2542 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:02.021869 kubelet[2542]: I0702 07:05:02.021854 2542 server.go:1256] "Started kubelet" Jul 2 07:05:02.023393 kubelet[2542]: I0702 07:05:02.023367 2542 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:05:02.030348 kubelet[2542]: I0702 07:05:02.029639 2542 server.go:162] "Starting to listen" address="0.0.0.0" port=10250 Jul 2 07:05:02.030566 kubelet[2542]: E0702 07:05:02.030547 2542 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3815.2.5-a-54ab6c74aa.17de537c87cf16d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3815.2.5-a-54ab6c74aa,UID:ci-3815.2.5-a-54ab6c74aa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3815.2.5-a-54ab6c74aa,},FirstTimestamp:2024-07-02 07:05:02.021654226 +0000 UTC m=+1.713422986,LastTimestamp:2024-07-02 07:05:02.021654226 +0000 UTC m=+1.713422986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.5-a-54ab6c74aa,}" Jul 2 07:05:02.030717 kubelet[2542]: I0702 07:05:02.030635 2542 server.go:461] "Adding debug handlers to kubelet server" Jul 2 07:05:02.032098 kubelet[2542]: I0702 07:05:02.030684 2542 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:05:02.032410 kubelet[2542]: I0702 07:05:02.032396 2542 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:05:02.032896 kubelet[2542]: I0702 07:05:02.032882 2542 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:05:02.033128 kubelet[2542]: I0702 07:05:02.033114 2542 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:05:02.033275 kubelet[2542]: I0702 07:05:02.033264 2542 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:05:02.033799 kubelet[2542]: W0702 07:05:02.033758 2542 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:02.033922 kubelet[2542]: E0702 07:05:02.033910 2542 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:02.034446 kubelet[2542]: E0702 07:05:02.034430 2542 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-a-54ab6c74aa?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="200ms" Jul 2 07:05:02.034668 kubelet[2542]: E0702 07:05:02.034652 2542 kubelet.go:1462] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jul 2 07:05:02.035122 kubelet[2542]: I0702 07:05:02.035105 2542 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:05:02.035310 kubelet[2542]: I0702 07:05:02.035290 2542 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:05:02.036869 kubelet[2542]: I0702 07:05:02.036851 2542 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:05:02.140035 kubelet[2542]: I0702 07:05:02.140004 2542 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:05:02.141974 kubelet[2542]: I0702 07:05:02.141949 2542 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jul 2 07:05:02.142152 kubelet[2542]: I0702 07:05:02.142138 2542 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:05:02.142255 kubelet[2542]: I0702 07:05:02.142243 2542 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 07:05:02.142403 kubelet[2542]: E0702 07:05:02.142389 2542 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:05:02.145283 kubelet[2542]: I0702 07:05:02.145255 2542 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.145623 kubelet[2542]: W0702 07:05:02.145573 2542 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:02.145705 kubelet[2542]: E0702 07:05:02.145635 2542 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:02.146921 kubelet[2542]: E0702 07:05:02.146903 2542 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.148042 kubelet[2542]: I0702 07:05:02.148023 2542 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:05:02.148156 kubelet[2542]: I0702 07:05:02.148145 2542 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:05:02.148245 kubelet[2542]: I0702 07:05:02.148236 2542 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:05:02.154137 kubelet[2542]: I0702 07:05:02.154113 2542 policy_none.go:49] "None policy: Start" Jul 2 07:05:02.154714 kubelet[2542]: I0702 07:05:02.154700 2542 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:05:02.154826 kubelet[2542]: I0702 07:05:02.154776 2542 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:05:02.162510 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jul 2 07:05:02.171655 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jul 2 07:05:02.174784 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
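Each reflector record above that ends in "connect: connection refused" is an ordinary client-go LIST against 10.200.8.10:6443, doomed until the kube-apiserver static pod comes up. A hedged Go equivalent is sketched below; the kubeconfig path /etc/kubernetes/kubelet.conf is an assumption, while the node name and field selector are the ones from the log.

    package main

    import (
        "context"
        "fmt"
        "log"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Kubeconfig path is an assumption; adjust for another host.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        // Same call shape the kubelet's reflector issues; it keeps failing with
        // "connect: connection refused" until kube-apiserver answers on 10.200.8.10:6443.
        nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
            FieldSelector: "metadata.name=ci-3815.2.5-a-54ab6c74aa",
        })
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("nodes matched:", len(nodes.Items))
    }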
Jul 2 07:05:02.185500 kubelet[2542]: I0702 07:05:02.185473 2542 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:05:02.186925 kubelet[2542]: I0702 07:05:02.185782 2542 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:05:02.188708 kubelet[2542]: E0702 07:05:02.188691 2542 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:02.235308 kubelet[2542]: E0702 07:05:02.235178 2542 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-a-54ab6c74aa?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="400ms" Jul 2 07:05:02.243421 kubelet[2542]: I0702 07:05:02.243380 2542 topology_manager.go:215] "Topology Admit Handler" podUID="9cf91b813f0f759081652318b3434bdf" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.245195 kubelet[2542]: I0702 07:05:02.245167 2542 topology_manager.go:215] "Topology Admit Handler" podUID="ac5e7b9b02f93de61850ff1926b6e375" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.246555 kubelet[2542]: I0702 07:05:02.246535 2542 topology_manager.go:215] "Topology Admit Handler" podUID="c06f2aa139a9463f99b8ffef2d9d63a2" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.252970 systemd[1]: Created slice kubepods-burstable-pod9cf91b813f0f759081652318b3434bdf.slice - libcontainer container kubepods-burstable-pod9cf91b813f0f759081652318b3434bdf.slice. Jul 2 07:05:02.264291 systemd[1]: Created slice kubepods-burstable-podac5e7b9b02f93de61850ff1926b6e375.slice - libcontainer container kubepods-burstable-podac5e7b9b02f93de61850ff1926b6e375.slice. Jul 2 07:05:02.268954 systemd[1]: Created slice kubepods-burstable-podc06f2aa139a9463f99b8ffef2d9d63a2.slice - libcontainer container kubepods-burstable-podc06f2aa139a9463f99b8ffef2d9d63a2.slice. 
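The three "Topology Admit Handler" records correspond to the static control-plane pods the kubelet reads from the static pod path it announced earlier, /etc/kubernetes/manifests. Below is a small Go sketch that decodes whatever manifests sit in that directory using the upstream Pod type; nothing in it depends on the exact file names.

    package main

    import (
        "fmt"
        "log"
        "os"
        "path/filepath"

        corev1 "k8s.io/api/core/v1"
        "sigs.k8s.io/yaml"
    )

    func main() {
        // Static pod path the kubelet announced earlier in this log.
        dir := "/etc/kubernetes/manifests"
        entries, err := os.ReadDir(dir)
        if err != nil {
            log.Fatal(err)
        }
        for _, e := range entries {
            data, err := os.ReadFile(filepath.Join(dir, e.Name()))
            if err != nil {
                log.Fatal(err)
            }
            var pod corev1.Pod
            if err := yaml.Unmarshal(data, &pod); err != nil {
                log.Printf("skipping %s: %v", e.Name(), err)
                continue
            }
            // On this node: the kube-apiserver, kube-controller-manager and kube-scheduler pods.
            fmt.Printf("%s/%s\n", pod.Namespace, pod.Name)
        }
    }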
Jul 2 07:05:02.334987 kubelet[2542]: I0702 07:05:02.334935 2542 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac5e7b9b02f93de61850ff1926b6e375-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.5-a-54ab6c74aa\" (UID: \"ac5e7b9b02f93de61850ff1926b6e375\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.334987 kubelet[2542]: I0702 07:05:02.334996 2542 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac5e7b9b02f93de61850ff1926b6e375-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.5-a-54ab6c74aa\" (UID: \"ac5e7b9b02f93de61850ff1926b6e375\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.335243 kubelet[2542]: I0702 07:05:02.335033 2542 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c06f2aa139a9463f99b8ffef2d9d63a2-kubeconfig\") pod \"kube-scheduler-ci-3815.2.5-a-54ab6c74aa\" (UID: \"c06f2aa139a9463f99b8ffef2d9d63a2\") " pod="kube-system/kube-scheduler-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.335243 kubelet[2542]: I0702 07:05:02.335063 2542 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9cf91b813f0f759081652318b3434bdf-ca-certs\") pod \"kube-apiserver-ci-3815.2.5-a-54ab6c74aa\" (UID: \"9cf91b813f0f759081652318b3434bdf\") " pod="kube-system/kube-apiserver-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.335243 kubelet[2542]: I0702 07:05:02.335093 2542 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9cf91b813f0f759081652318b3434bdf-k8s-certs\") pod \"kube-apiserver-ci-3815.2.5-a-54ab6c74aa\" (UID: \"9cf91b813f0f759081652318b3434bdf\") " pod="kube-system/kube-apiserver-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.335243 kubelet[2542]: I0702 07:05:02.335129 2542 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9cf91b813f0f759081652318b3434bdf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.5-a-54ab6c74aa\" (UID: \"9cf91b813f0f759081652318b3434bdf\") " pod="kube-system/kube-apiserver-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.335243 kubelet[2542]: I0702 07:05:02.335183 2542 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac5e7b9b02f93de61850ff1926b6e375-ca-certs\") pod \"kube-controller-manager-ci-3815.2.5-a-54ab6c74aa\" (UID: \"ac5e7b9b02f93de61850ff1926b6e375\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.335519 kubelet[2542]: I0702 07:05:02.335225 2542 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac5e7b9b02f93de61850ff1926b6e375-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.5-a-54ab6c74aa\" (UID: \"ac5e7b9b02f93de61850ff1926b6e375\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.335519 kubelet[2542]: I0702 07:05:02.335261 2542 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac5e7b9b02f93de61850ff1926b6e375-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.5-a-54ab6c74aa\" (UID: \"ac5e7b9b02f93de61850ff1926b6e375\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.349908 kubelet[2542]: I0702 07:05:02.349877 2542 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.350320 kubelet[2542]: E0702 07:05:02.350287 2542 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.564695 containerd[1511]: time="2024-07-02T07:05:02.564637133Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.5-a-54ab6c74aa,Uid:9cf91b813f0f759081652318b3434bdf,Namespace:kube-system,Attempt:0,}" Jul 2 07:05:02.568492 containerd[1511]: time="2024-07-02T07:05:02.568311629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.5-a-54ab6c74aa,Uid:ac5e7b9b02f93de61850ff1926b6e375,Namespace:kube-system,Attempt:0,}" Jul 2 07:05:02.572404 containerd[1511]: time="2024-07-02T07:05:02.572121028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.5-a-54ab6c74aa,Uid:c06f2aa139a9463f99b8ffef2d9d63a2,Namespace:kube-system,Attempt:0,}" Jul 2 07:05:02.636445 kubelet[2542]: E0702 07:05:02.636405 2542 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-a-54ab6c74aa?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="800ms" Jul 2 07:05:02.753184 kubelet[2542]: I0702 07:05:02.753119 2542 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.753556 kubelet[2542]: E0702 07:05:02.753529 2542 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:02.871690 kubelet[2542]: W0702 07:05:02.871580 2542 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:02.871690 kubelet[2542]: E0702 07:05:02.871623 2542 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.10:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:03.219680 kubelet[2542]: W0702 07:05:03.219563 2542 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-54ab6c74aa&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:03.219680 kubelet[2542]: E0702 07:05:03.219612 2542 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-54ab6c74aa&limit=500&resourceVersion=0": dial 
tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:03.249011 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount410566836.mount: Deactivated successfully. Jul 2 07:05:03.279276 containerd[1511]: time="2024-07-02T07:05:03.279223756Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:03.282220 containerd[1511]: time="2024-07-02T07:05:03.282166431Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=312064" Jul 2 07:05:03.285072 containerd[1511]: time="2024-07-02T07:05:03.285035004Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:03.288014 containerd[1511]: time="2024-07-02T07:05:03.287968579Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 07:05:03.292281 containerd[1511]: time="2024-07-02T07:05:03.292248888Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:03.295170 containerd[1511]: time="2024-07-02T07:05:03.295137462Z" level=info msg="ImageCreate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:03.300257 containerd[1511]: time="2024-07-02T07:05:03.300220391Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:03.301973 containerd[1511]: time="2024-07-02T07:05:03.301930135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jul 2 07:05:03.305518 containerd[1511]: time="2024-07-02T07:05:03.305481625Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:03.308807 containerd[1511]: time="2024-07-02T07:05:03.308776309Z" level=info msg="ImageUpdate event name:\"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:03.313284 containerd[1511]: time="2024-07-02T07:05:03.313248623Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:03.314109 containerd[1511]: time="2024-07-02T07:05:03.314076344Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 741.852614ms" Jul 2 07:05:03.317357 kubelet[2542]: W0702 07:05:03.317326 2542 reflector.go:539] 
vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:03.317454 kubelet[2542]: E0702 07:05:03.317367 2542 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.10:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:03.320274 containerd[1511]: time="2024-07-02T07:05:03.320238001Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:03.321150 containerd[1511]: time="2024-07-02T07:05:03.321118624Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:03.326601 containerd[1511]: time="2024-07-02T07:05:03.326556862Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:03.327351 containerd[1511]: time="2024-07-02T07:05:03.327314381Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 762.517044ms" Jul 2 07:05:03.332015 containerd[1511]: time="2024-07-02T07:05:03.331980800Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jul 2 07:05:03.333478 containerd[1511]: time="2024-07-02T07:05:03.333430537Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4873874c08efc72e9729683a83ffbb7502ee729e9a5ac097723806ea7fa13517\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"311286\" in 764.874002ms" Jul 2 07:05:03.437477 kubelet[2542]: E0702 07:05:03.437435 2542 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-a-54ab6c74aa?timeout=10s\": dial tcp 10.200.8.10:6443: connect: connection refused" interval="1.6s" Jul 2 07:05:03.555966 kubelet[2542]: I0702 07:05:03.555933 2542 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:03.556321 kubelet[2542]: E0702 07:05:03.556298 2542 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.10:6443/api/v1/nodes\": dial tcp 10.200.8.10:6443: connect: connection refused" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:03.575907 kubelet[2542]: W0702 07:05:03.575876 2542 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get 
"https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:03.576666 kubelet[2542]: E0702 07:05:03.576157 2542 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.10:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:03.764440 kubelet[2542]: E0702 07:05:03.764399 2542 event.go:355] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.10:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.10:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-3815.2.5-a-54ab6c74aa.17de537c87cf16d2 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-3815.2.5-a-54ab6c74aa,UID:ci-3815.2.5-a-54ab6c74aa,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-3815.2.5-a-54ab6c74aa,},FirstTimestamp:2024-07-02 07:05:02.021654226 +0000 UTC m=+1.713422986,LastTimestamp:2024-07-02 07:05:02.021654226 +0000 UTC m=+1.713422986,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.5-a-54ab6c74aa,}" Jul 2 07:05:04.048081 kubelet[2542]: E0702 07:05:04.048046 2542 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.10:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:04.521906 containerd[1511]: time="2024-07-02T07:05:04.521807451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:04.522359 containerd[1511]: time="2024-07-02T07:05:04.521885053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:04.522359 containerd[1511]: time="2024-07-02T07:05:04.521908554Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:04.522359 containerd[1511]: time="2024-07-02T07:05:04.521925854Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:04.531105 containerd[1511]: time="2024-07-02T07:05:04.531024381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:04.531410 containerd[1511]: time="2024-07-02T07:05:04.531350589Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:04.531944 containerd[1511]: time="2024-07-02T07:05:04.531882203Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:04.532143 containerd[1511]: time="2024-07-02T07:05:04.532103208Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:04.532311 containerd[1511]: time="2024-07-02T07:05:04.532270112Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:04.532450 containerd[1511]: time="2024-07-02T07:05:04.532424216Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:04.532545 containerd[1511]: time="2024-07-02T07:05:04.531669397Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:04.532545 containerd[1511]: time="2024-07-02T07:05:04.532517719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:04.583968 systemd[1]: Started cri-containerd-0c83c6a6b8569eb2c422e6ecada53f4f1f2472b659ad6afe626303be43e8e58b.scope - libcontainer container 0c83c6a6b8569eb2c422e6ecada53f4f1f2472b659ad6afe626303be43e8e58b. Jul 2 07:05:04.597935 systemd[1]: Started cri-containerd-40578adcd64fedfad6081dbc51e3ac8fb06c8f1dce6ac9c68e0062e40fb0cc3d.scope - libcontainer container 40578adcd64fedfad6081dbc51e3ac8fb06c8f1dce6ac9c68e0062e40fb0cc3d. Jul 2 07:05:04.601064 systemd[1]: Started cri-containerd-ad52777672f7b5f94ebf580fd597409f265ef8d11941c26e193ef3531f1c605d.scope - libcontainer container ad52777672f7b5f94ebf580fd597409f265ef8d11941c26e193ef3531f1c605d. Jul 2 07:05:04.665382 containerd[1511]: time="2024-07-02T07:05:04.665328436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-3815.2.5-a-54ab6c74aa,Uid:9cf91b813f0f759081652318b3434bdf,Namespace:kube-system,Attempt:0,} returns sandbox id \"40578adcd64fedfad6081dbc51e3ac8fb06c8f1dce6ac9c68e0062e40fb0cc3d\"" Jul 2 07:05:04.674390 containerd[1511]: time="2024-07-02T07:05:04.674349061Z" level=info msg="CreateContainer within sandbox \"40578adcd64fedfad6081dbc51e3ac8fb06c8f1dce6ac9c68e0062e40fb0cc3d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jul 2 07:05:04.691816 containerd[1511]: time="2024-07-02T07:05:04.691707995Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-3815.2.5-a-54ab6c74aa,Uid:ac5e7b9b02f93de61850ff1926b6e375,Namespace:kube-system,Attempt:0,} returns sandbox id \"0c83c6a6b8569eb2c422e6ecada53f4f1f2472b659ad6afe626303be43e8e58b\"" Jul 2 07:05:04.693285 containerd[1511]: time="2024-07-02T07:05:04.692848523Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-3815.2.5-a-54ab6c74aa,Uid:c06f2aa139a9463f99b8ffef2d9d63a2,Namespace:kube-system,Attempt:0,} returns sandbox id \"ad52777672f7b5f94ebf580fd597409f265ef8d11941c26e193ef3531f1c605d\"" Jul 2 07:05:04.696043 containerd[1511]: time="2024-07-02T07:05:04.696005902Z" level=info msg="CreateContainer within sandbox \"0c83c6a6b8569eb2c422e6ecada53f4f1f2472b659ad6afe626303be43e8e58b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jul 2 07:05:04.696227 containerd[1511]: time="2024-07-02T07:05:04.696048303Z" level=info msg="CreateContainer within sandbox \"ad52777672f7b5f94ebf580fd597409f265ef8d11941c26e193ef3531f1c605d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jul 2 07:05:04.733713 containerd[1511]: time="2024-07-02T07:05:04.733660843Z" level=info msg="CreateContainer within sandbox \"40578adcd64fedfad6081dbc51e3ac8fb06c8f1dce6ac9c68e0062e40fb0cc3d\" for 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"508111ddb7ec96b19911cdd51a0e2cea4257b9ab98ce5a1a289940dbbc6be9b5\"" Jul 2 07:05:04.734418 containerd[1511]: time="2024-07-02T07:05:04.734375860Z" level=info msg="StartContainer for \"508111ddb7ec96b19911cdd51a0e2cea4257b9ab98ce5a1a289940dbbc6be9b5\"" Jul 2 07:05:04.762224 containerd[1511]: time="2024-07-02T07:05:04.761732244Z" level=info msg="CreateContainer within sandbox \"ad52777672f7b5f94ebf580fd597409f265ef8d11941c26e193ef3531f1c605d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4071bbfc1478a365c63102aadcb7ada58244af738d66f0c04fefdef61f042225\"" Jul 2 07:05:04.762224 containerd[1511]: time="2024-07-02T07:05:04.762210756Z" level=info msg="StartContainer for \"4071bbfc1478a365c63102aadcb7ada58244af738d66f0c04fefdef61f042225\"" Jul 2 07:05:04.763952 systemd[1]: Started cri-containerd-508111ddb7ec96b19911cdd51a0e2cea4257b9ab98ce5a1a289940dbbc6be9b5.scope - libcontainer container 508111ddb7ec96b19911cdd51a0e2cea4257b9ab98ce5a1a289940dbbc6be9b5. Jul 2 07:05:04.782320 containerd[1511]: time="2024-07-02T07:05:04.781025726Z" level=info msg="CreateContainer within sandbox \"0c83c6a6b8569eb2c422e6ecada53f4f1f2472b659ad6afe626303be43e8e58b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"41e0a16ee955e4513d28d5b64dc31aff44dfcfd4e000bcc6b330aab1c391ac8e\"" Jul 2 07:05:04.786418 containerd[1511]: time="2024-07-02T07:05:04.786379559Z" level=info msg="StartContainer for \"41e0a16ee955e4513d28d5b64dc31aff44dfcfd4e000bcc6b330aab1c391ac8e\"" Jul 2 07:05:04.800975 systemd[1]: Started cri-containerd-4071bbfc1478a365c63102aadcb7ada58244af738d66f0c04fefdef61f042225.scope - libcontainer container 4071bbfc1478a365c63102aadcb7ada58244af738d66f0c04fefdef61f042225. Jul 2 07:05:04.843987 systemd[1]: Started cri-containerd-41e0a16ee955e4513d28d5b64dc31aff44dfcfd4e000bcc6b330aab1c391ac8e.scope - libcontainer container 41e0a16ee955e4513d28d5b64dc31aff44dfcfd4e000bcc6b330aab1c391ac8e. 
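The RunPodSandbox, CreateContainer and StartContainer records travel over containerd's CRI endpoint. The sketch below shows one way to inspect the resulting sandboxes over that same gRPC API, roughly what crictl pods does; the socket path is containerd's default and an assumption here.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        "google.golang.org/grpc"
        "google.golang.org/grpc/credentials/insecure"
        runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
    )

    func main() {
        // Containerd's default CRI endpoint; assumed, but it matches the runtime in this log.
        conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
            grpc.WithTransportCredentials(insecure.NewCredentials()))
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        rt := runtimeapi.NewRuntimeServiceClient(conn)
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Lists the sandboxes created by the RunPodSandbox calls recorded above.
        resp, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
        if err != nil {
            log.Fatal(err)
        }
        for _, sb := range resp.Items {
            fmt.Println(sb.GetId(), sb.GetMetadata().GetName(), sb.GetState())
        }
    }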
Jul 2 07:05:04.845863 containerd[1511]: time="2024-07-02T07:05:04.845812344Z" level=info msg="StartContainer for \"508111ddb7ec96b19911cdd51a0e2cea4257b9ab98ce5a1a289940dbbc6be9b5\" returns successfully" Jul 2 07:05:04.852480 kubelet[2542]: W0702 07:05:04.852362 2542 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-54ab6c74aa&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:04.852480 kubelet[2542]: E0702 07:05:04.852439 2542 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.10:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-3815.2.5-a-54ab6c74aa&limit=500&resourceVersion=0": dial tcp 10.200.8.10:6443: connect: connection refused Jul 2 07:05:04.895168 containerd[1511]: time="2024-07-02T07:05:04.895113975Z" level=info msg="StartContainer for \"4071bbfc1478a365c63102aadcb7ada58244af738d66f0c04fefdef61f042225\" returns successfully" Jul 2 07:05:04.958394 containerd[1511]: time="2024-07-02T07:05:04.958346054Z" level=info msg="StartContainer for \"41e0a16ee955e4513d28d5b64dc31aff44dfcfd4e000bcc6b330aab1c391ac8e\" returns successfully" Jul 2 07:05:05.158426 kubelet[2542]: I0702 07:05:05.158311 2542 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:05.529384 systemd[1]: run-containerd-runc-k8s.io-0c83c6a6b8569eb2c422e6ecada53f4f1f2472b659ad6afe626303be43e8e58b-runc.iyvYqi.mount: Deactivated successfully. Jul 2 07:05:05.529714 systemd[1]: run-containerd-runc-k8s.io-40578adcd64fedfad6081dbc51e3ac8fb06c8f1dce6ac9c68e0062e40fb0cc3d-runc.Zbn8Hg.mount: Deactivated successfully. 
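From this point the log is dominated by "Error getting the current node from lister ... not found": the kubelet registers the node with the now-reachable apiserver, but its informer-backed node lister has not caught up, so lookups keep missing until the cache syncs (and, here, until the kubelet is restarted after the reload). A minimal Go sketch of that lister pattern follows, again assuming a kubeconfig at /etc/kubernetes/kubelet.conf; once WaitForCacheSync returns, the Get that keeps failing in the log starts to succeed.

    package main

    import (
        "fmt"
        "log"

        "k8s.io/client-go/informers"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        // Assumed kubeconfig path; the node name is the one in this log.
        cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        stop := make(chan struct{})
        defer close(stop)

        // An informer-backed lister like the one behind kubelet_node_status.go:462;
        // before the cache syncs, Get() returns "not found", which is what the log shows.
        factory := informers.NewSharedInformerFactory(cs, 0)
        nodeLister := factory.Core().V1().Nodes().Lister()
        factory.Start(stop)
        factory.WaitForCacheSync(stop)

        node, err := nodeLister.Get("ci-3815.2.5-a-54ab6c74aa")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("found node:", node.Name)
    }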
Jul 2 07:05:07.015574 kubelet[2542]: I0702 07:05:07.015538 2542 kubelet_node_status.go:76] "Successfully registered node" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:07.054114 kubelet[2542]: E0702 07:05:07.054052 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:07.154704 kubelet[2542]: E0702 07:05:07.154630 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:07.156498 kubelet[2542]: E0702 07:05:07.156460 2542 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jul 2 07:05:07.255124 kubelet[2542]: E0702 07:05:07.255066 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:07.355828 kubelet[2542]: E0702 07:05:07.355782 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:07.456376 kubelet[2542]: E0702 07:05:07.456325 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:07.557424 kubelet[2542]: E0702 07:05:07.557373 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:07.657959 kubelet[2542]: E0702 07:05:07.657825 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:07.758771 kubelet[2542]: E0702 07:05:07.758717 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:07.859405 kubelet[2542]: E0702 07:05:07.859359 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:07.959968 kubelet[2542]: E0702 07:05:07.959851 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:08.060624 kubelet[2542]: E0702 07:05:08.060567 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:08.161684 kubelet[2542]: E0702 07:05:08.161633 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:08.262374 kubelet[2542]: E0702 07:05:08.262262 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:08.362946 kubelet[2542]: E0702 07:05:08.362896 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:08.463477 kubelet[2542]: E0702 07:05:08.463416 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:08.564229 kubelet[2542]: E0702 07:05:08.564189 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:08.664803 kubelet[2542]: E0702 07:05:08.664761 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:08.765504 
kubelet[2542]: E0702 07:05:08.765456 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:08.866190 kubelet[2542]: E0702 07:05:08.866063 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:08.966557 kubelet[2542]: E0702 07:05:08.966514 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:09.067446 kubelet[2542]: E0702 07:05:09.067397 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:09.168235 kubelet[2542]: E0702 07:05:09.168118 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:09.268941 kubelet[2542]: E0702 07:05:09.268899 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:09.369475 kubelet[2542]: E0702 07:05:09.369430 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:09.470324 kubelet[2542]: E0702 07:05:09.470187 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:09.570934 kubelet[2542]: E0702 07:05:09.570875 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:09.671431 kubelet[2542]: E0702 07:05:09.671381 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:09.771887 kubelet[2542]: E0702 07:05:09.771852 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:09.872012 kubelet[2542]: E0702 07:05:09.871970 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:09.972880 kubelet[2542]: E0702 07:05:09.972831 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:10.073898 kubelet[2542]: E0702 07:05:10.073775 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:10.174706 kubelet[2542]: E0702 07:05:10.174669 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:10.275543 kubelet[2542]: E0702 07:05:10.275498 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:10.376594 kubelet[2542]: E0702 07:05:10.376479 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:10.477025 kubelet[2542]: E0702 07:05:10.476976 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:10.577697 kubelet[2542]: E0702 07:05:10.577645 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:10.678269 
kubelet[2542]: E0702 07:05:10.678028 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:10.778667 kubelet[2542]: E0702 07:05:10.778621 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:10.879227 kubelet[2542]: E0702 07:05:10.879188 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:10.980133 kubelet[2542]: E0702 07:05:10.980012 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:11.080871 kubelet[2542]: E0702 07:05:11.080823 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:11.167881 systemd[1]: Reloading. Jul 2 07:05:11.181298 kubelet[2542]: E0702 07:05:11.181263 2542 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-3815.2.5-a-54ab6c74aa\" not found" Jul 2 07:05:11.374906 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jul 2 07:05:11.497287 kubelet[2542]: I0702 07:05:11.496966 2542 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:05:11.497097 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:05:11.516634 systemd[1]: kubelet.service: Deactivated successfully. Jul 2 07:05:11.516904 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:05:11.521563 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jul 2 07:05:14.879322 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jul 2 07:05:14.927611 kubelet[2899]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:05:14.927611 kubelet[2899]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jul 2 07:05:14.928061 kubelet[2899]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jul 2 07:05:14.928061 kubelet[2899]: I0702 07:05:14.927701 2899 server.go:204] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jul 2 07:05:14.932067 kubelet[2899]: I0702 07:05:14.932041 2899 server.go:487] "Kubelet version" kubeletVersion="v1.29.2" Jul 2 07:05:14.932067 kubelet[2899]: I0702 07:05:14.932068 2899 server.go:489] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jul 2 07:05:14.932328 kubelet[2899]: I0702 07:05:14.932307 2899 server.go:919] "Client rotation is on, will bootstrap in background" Jul 2 07:05:14.933653 kubelet[2899]: I0702 07:05:14.933628 2899 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
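The restarted kubelet (pid 2899) finds a bootstrapped client credential at /var/lib/kubelet/pki/kubelet-client-current.pem, so the certificate-signing-request failures from the earlier run do not recur. Below is a stdlib-only Go sketch for inspecting that combined PEM; passing the same path for certificate and key works because the file carries both blocks.

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "log"
    )

    func main() {
        // The rotated client credential the restarted kubelet loads; one file holds
        // both the certificate and the private key.
        const pem = "/var/lib/kubelet/pki/kubelet-client-current.pem"
        pair, err := tls.LoadX509KeyPair(pem, pem)
        if err != nil {
            log.Fatal(err)
        }
        leaf, err := x509.ParseCertificate(pair.Certificate[0])
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("subject:", leaf.Subject)
        fmt.Println("expires:", leaf.NotAfter)
    }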
Jul 2 07:05:14.935734 kubelet[2899]: I0702 07:05:14.935712 2899 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jul 2 07:05:14.944005 kubelet[2899]: I0702 07:05:14.943988 2899 server.go:745] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jul 2 07:05:14.944325 kubelet[2899]: I0702 07:05:14.944312 2899 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jul 2 07:05:14.944521 kubelet[2899]: I0702 07:05:14.944510 2899 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} Jul 2 07:05:14.944648 kubelet[2899]: I0702 07:05:14.944602 2899 topology_manager.go:138] "Creating topology manager with none policy" Jul 2 07:05:14.944648 kubelet[2899]: I0702 07:05:14.944621 2899 container_manager_linux.go:301] "Creating device plugin manager" Jul 2 07:05:14.944740 kubelet[2899]: I0702 07:05:14.944659 2899 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:05:14.944815 kubelet[2899]: I0702 07:05:14.944788 2899 kubelet.go:396] "Attempting to sync node with API server" Jul 2 07:05:14.944815 kubelet[2899]: I0702 07:05:14.944806 2899 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" Jul 2 07:05:14.944939 kubelet[2899]: I0702 07:05:14.944928 2899 kubelet.go:312] "Adding apiserver pod source" Jul 2 07:05:14.945011 kubelet[2899]: I0702 07:05:14.945002 2899 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jul 2 07:05:14.945655 kubelet[2899]: I0702 07:05:14.945634 2899 kuberuntime_manager.go:258] "Container runtime initialized" containerRuntime="containerd" version="v1.7.13" apiVersion="v1" Jul 2 07:05:14.945865 kubelet[2899]: I0702 07:05:14.945847 2899 kubelet.go:809] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jul 2 07:05:14.946304 kubelet[2899]: I0702 07:05:14.946283 2899 server.go:1256] "Started kubelet" Jul 2 07:05:14.952648 kubelet[2899]: I0702 07:05:14.952633 2899 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jul 2 07:05:14.954710 kubelet[2899]: I0702 07:05:14.954693 2899 server.go:162] "Starting to 
listen" address="0.0.0.0" port=10250 Jul 2 07:05:14.955635 kubelet[2899]: I0702 07:05:14.955617 2899 server.go:461] "Adding debug handlers to kubelet server" Jul 2 07:05:14.957395 kubelet[2899]: I0702 07:05:14.957378 2899 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jul 2 07:05:14.957647 kubelet[2899]: I0702 07:05:14.957636 2899 server.go:233] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jul 2 07:05:14.963897 kubelet[2899]: I0702 07:05:14.963873 2899 volume_manager.go:291] "Starting Kubelet Volume Manager" Jul 2 07:05:14.964269 kubelet[2899]: I0702 07:05:14.964247 2899 desired_state_of_world_populator.go:151] "Desired state populator starts to run" Jul 2 07:05:14.964426 kubelet[2899]: I0702 07:05:14.964400 2899 reconciler_new.go:29] "Reconciler: start to sync state" Jul 2 07:05:14.972090 kubelet[2899]: I0702 07:05:14.972072 2899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jul 2 07:05:14.973513 kubelet[2899]: I0702 07:05:14.973498 2899 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jul 2 07:05:14.973652 kubelet[2899]: I0702 07:05:14.973642 2899 status_manager.go:217] "Starting to sync pod status with apiserver" Jul 2 07:05:14.973726 kubelet[2899]: I0702 07:05:14.973719 2899 kubelet.go:2329] "Starting kubelet main sync loop" Jul 2 07:05:14.973911 kubelet[2899]: E0702 07:05:14.973898 2899 kubelet.go:2353] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:14.981289 2899 factory.go:221] Registration of the containerd container factory successfully Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:14.981303 2899 factory.go:221] Registration of the systemd container factory successfully Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:14.981373 2899 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:15.022945 2899 cpu_manager.go:214] "Starting CPU manager" policy="none" Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:15.022965 2899 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:15.022983 2899 state_mem.go:36] "Initialized new in-memory state store" Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:15.067117 2899 kubelet_node_status.go:73] "Attempting to register node" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:16.732901 kubelet[2899]: E0702 07:05:15.075067 2899 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:15.097090 2899 kubelet_node_status.go:112] "Node was previously registered" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:16.732901 kubelet[2899]: E0702 07:05:15.275377 2899 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:05:16.732901 kubelet[2899]: E0702 07:05:15.676067 2899 kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:15.951346 2899 apiserver.go:52] "Watching apiserver" Jul 2 07:05:16.732901 kubelet[2899]: E0702 07:05:16.476894 2899 
kubelet.go:2353] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:16.731916 2899 kubelet_node_status.go:76] "Successfully registered node" node="ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:16.732428 2899 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:16.732472 2899 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jul 2 07:05:16.732901 kubelet[2899]: I0702 07:05:16.732485 2899 policy_none.go:49] "None policy: Start" Jul 2 07:05:16.735931 kubelet[2899]: I0702 07:05:16.735912 2899 memory_manager.go:170] "Starting memorymanager" policy="None" Jul 2 07:05:16.736098 kubelet[2899]: I0702 07:05:16.736086 2899 state_mem.go:35] "Initializing new in-memory state store" Jul 2 07:05:16.736551 kubelet[2899]: I0702 07:05:16.736528 2899 state_mem.go:75] "Updated machine memory state" Jul 2 07:05:16.741598 kubelet[2899]: I0702 07:05:16.741581 2899 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jul 2 07:05:16.742000 kubelet[2899]: I0702 07:05:16.741985 2899 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jul 2 07:05:17.977549 sudo[2026]: pam_unix(sudo:session): session closed for user root Jul 2 07:05:18.078146 kubelet[2899]: I0702 07:05:18.078068 2899 topology_manager.go:215] "Topology Admit Handler" podUID="ac5e7b9b02f93de61850ff1926b6e375" podNamespace="kube-system" podName="kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:18.079060 kubelet[2899]: I0702 07:05:18.078287 2899 topology_manager.go:215] "Topology Admit Handler" podUID="c06f2aa139a9463f99b8ffef2d9d63a2" podNamespace="kube-system" podName="kube-scheduler-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:18.079060 kubelet[2899]: I0702 07:05:18.078417 2899 topology_manager.go:215] "Topology Admit Handler" podUID="9cf91b813f0f759081652318b3434bdf" podNamespace="kube-system" podName="kube-apiserver-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:18.087691 kubelet[2899]: W0702 07:05:18.087666 2899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:05:18.092350 kubelet[2899]: W0702 07:05:18.092331 2899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:05:18.092624 kubelet[2899]: W0702 07:05:18.092609 2899 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] Jul 2 07:05:18.094607 sshd[2023]: pam_unix(sshd:session): session closed for user core Jul 2 07:05:18.097129 systemd[1]: sshd@4-10.200.8.10:22-10.200.16.10:43652.service: Deactivated successfully. Jul 2 07:05:18.098359 systemd[1]: session-7.scope: Deactivated successfully. Jul 2 07:05:18.098399 systemd-logind[1500]: Session 7 logged out. Waiting for processes to exit. Jul 2 07:05:18.098533 systemd[1]: session-7.scope: Consumed 3.258s CPU time. Jul 2 07:05:18.099637 systemd-logind[1500]: Removed session 7. 
Jul 2 07:05:18.165227 kubelet[2899]: I0702 07:05:18.165189 2899 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world" Jul 2 07:05:18.182266 kubelet[2899]: I0702 07:05:18.182238 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ac5e7b9b02f93de61850ff1926b6e375-ca-certs\") pod \"kube-controller-manager-ci-3815.2.5-a-54ab6c74aa\" (UID: \"ac5e7b9b02f93de61850ff1926b6e375\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:18.182379 kubelet[2899]: I0702 07:05:18.182349 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ac5e7b9b02f93de61850ff1926b6e375-flexvolume-dir\") pod \"kube-controller-manager-ci-3815.2.5-a-54ab6c74aa\" (UID: \"ac5e7b9b02f93de61850ff1926b6e375\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:18.182446 kubelet[2899]: I0702 07:05:18.182385 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ac5e7b9b02f93de61850ff1926b6e375-k8s-certs\") pod \"kube-controller-manager-ci-3815.2.5-a-54ab6c74aa\" (UID: \"ac5e7b9b02f93de61850ff1926b6e375\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:18.182446 kubelet[2899]: I0702 07:05:18.182416 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/c06f2aa139a9463f99b8ffef2d9d63a2-kubeconfig\") pod \"kube-scheduler-ci-3815.2.5-a-54ab6c74aa\" (UID: \"c06f2aa139a9463f99b8ffef2d9d63a2\") " pod="kube-system/kube-scheduler-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:18.182545 kubelet[2899]: I0702 07:05:18.182448 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/9cf91b813f0f759081652318b3434bdf-ca-certs\") pod \"kube-apiserver-ci-3815.2.5-a-54ab6c74aa\" (UID: \"9cf91b813f0f759081652318b3434bdf\") " pod="kube-system/kube-apiserver-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:18.182545 kubelet[2899]: I0702 07:05:18.182480 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ac5e7b9b02f93de61850ff1926b6e375-kubeconfig\") pod \"kube-controller-manager-ci-3815.2.5-a-54ab6c74aa\" (UID: \"ac5e7b9b02f93de61850ff1926b6e375\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:18.182545 kubelet[2899]: I0702 07:05:18.182521 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ac5e7b9b02f93de61850ff1926b6e375-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-3815.2.5-a-54ab6c74aa\" (UID: \"ac5e7b9b02f93de61850ff1926b6e375\") " pod="kube-system/kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:18.182693 kubelet[2899]: I0702 07:05:18.182551 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/9cf91b813f0f759081652318b3434bdf-k8s-certs\") pod \"kube-apiserver-ci-3815.2.5-a-54ab6c74aa\" (UID: \"9cf91b813f0f759081652318b3434bdf\") " 
pod="kube-system/kube-apiserver-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:18.182693 kubelet[2899]: I0702 07:05:18.182585 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/9cf91b813f0f759081652318b3434bdf-usr-share-ca-certificates\") pod \"kube-apiserver-ci-3815.2.5-a-54ab6c74aa\" (UID: \"9cf91b813f0f759081652318b3434bdf\") " pod="kube-system/kube-apiserver-ci-3815.2.5-a-54ab6c74aa" Jul 2 07:05:18.425517 kubelet[2899]: I0702 07:05:18.425366 2899 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" podStartSLOduration=0.425301441 podStartE2EDuration="425.301441ms" podCreationTimestamp="2024-07-02 07:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:05:18.418370806 +0000 UTC m=+3.531187288" watchObservedRunningTime="2024-07-02 07:05:18.425301441 +0000 UTC m=+3.538118023" Jul 2 07:05:18.433813 kubelet[2899]: I0702 07:05:18.433776 2899 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-3815.2.5-a-54ab6c74aa" podStartSLOduration=0.433706705 podStartE2EDuration="433.706705ms" podCreationTimestamp="2024-07-02 07:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:05:18.425652748 +0000 UTC m=+3.538469230" watchObservedRunningTime="2024-07-02 07:05:18.433706705 +0000 UTC m=+3.546523187" Jul 2 07:05:18.442340 kubelet[2899]: I0702 07:05:18.442305 2899 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-3815.2.5-a-54ab6c74aa" podStartSLOduration=0.442263371 podStartE2EDuration="442.263371ms" podCreationTimestamp="2024-07-02 07:05:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:05:18.434016511 +0000 UTC m=+3.546833093" watchObservedRunningTime="2024-07-02 07:05:18.442263371 +0000 UTC m=+3.555079953" Jul 2 07:05:22.922225 kubelet[2899]: I0702 07:05:22.922192 2899 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jul 2 07:05:22.922777 containerd[1511]: time="2024-07-02T07:05:22.922718777Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jul 2 07:05:22.923101 kubelet[2899]: I0702 07:05:22.922996 2899 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jul 2 07:05:23.707352 kubelet[2899]: I0702 07:05:23.707300 2899 topology_manager.go:215] "Topology Admit Handler" podUID="e5e86011-657c-4ac2-b524-1ce114e266fd" podNamespace="kube-flannel" podName="kube-flannel-ds-7djtm" Jul 2 07:05:23.714466 systemd[1]: Created slice kubepods-burstable-pode5e86011_657c_4ac2_b524_1ce114e266fd.slice - libcontainer container kubepods-burstable-pode5e86011_657c_4ac2_b524_1ce114e266fd.slice. 
Jul 2 07:05:23.719509 kubelet[2899]: W0702 07:05:23.719462 2899 reflector.go:539] object-"kube-flannel"/"kube-flannel-cfg": failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-3815.2.5-a-54ab6c74aa" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3815.2.5-a-54ab6c74aa' and this object Jul 2 07:05:23.719658 kubelet[2899]: E0702 07:05:23.719521 2899 reflector.go:147] object-"kube-flannel"/"kube-flannel-cfg": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-flannel-cfg" is forbidden: User "system:node:ci-3815.2.5-a-54ab6c74aa" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3815.2.5-a-54ab6c74aa' and this object Jul 2 07:05:23.719658 kubelet[2899]: W0702 07:05:23.719586 2899 reflector.go:539] object-"kube-flannel"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3815.2.5-a-54ab6c74aa" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3815.2.5-a-54ab6c74aa' and this object Jul 2 07:05:23.719658 kubelet[2899]: E0702 07:05:23.719602 2899 reflector.go:147] object-"kube-flannel"/"kube-root-ca.crt": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:ci-3815.2.5-a-54ab6c74aa" cannot list resource "configmaps" in API group "" in the namespace "kube-flannel": no relationship found between node 'ci-3815.2.5-a-54ab6c74aa' and this object Jul 2 07:05:23.720085 kubelet[2899]: I0702 07:05:23.720060 2899 topology_manager.go:215] "Topology Admit Handler" podUID="d1cb88b8-1845-44ab-bcdc-544928e6342b" podNamespace="kube-system" podName="kube-proxy-p6h7q" Jul 2 07:05:23.728296 systemd[1]: Created slice kubepods-besteffort-podd1cb88b8_1845_44ab_bcdc_544928e6342b.slice - libcontainer container kubepods-besteffort-podd1cb88b8_1845_44ab_bcdc_544928e6342b.slice. 
Jul 2 07:05:23.823583 kubelet[2899]: I0702 07:05:23.823531 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/e5e86011-657c-4ac2-b524-1ce114e266fd-cni-plugin\") pod \"kube-flannel-ds-7djtm\" (UID: \"e5e86011-657c-4ac2-b524-1ce114e266fd\") " pod="kube-flannel/kube-flannel-ds-7djtm" Jul 2 07:05:23.823914 kubelet[2899]: I0702 07:05:23.823602 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d1cb88b8-1845-44ab-bcdc-544928e6342b-kube-proxy\") pod \"kube-proxy-p6h7q\" (UID: \"d1cb88b8-1845-44ab-bcdc-544928e6342b\") " pod="kube-system/kube-proxy-p6h7q" Jul 2 07:05:23.823914 kubelet[2899]: I0702 07:05:23.823630 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d1cb88b8-1845-44ab-bcdc-544928e6342b-xtables-lock\") pod \"kube-proxy-p6h7q\" (UID: \"d1cb88b8-1845-44ab-bcdc-544928e6342b\") " pod="kube-system/kube-proxy-p6h7q" Jul 2 07:05:23.823914 kubelet[2899]: I0702 07:05:23.823664 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1cb88b8-1845-44ab-bcdc-544928e6342b-lib-modules\") pod \"kube-proxy-p6h7q\" (UID: \"d1cb88b8-1845-44ab-bcdc-544928e6342b\") " pod="kube-system/kube-proxy-p6h7q" Jul 2 07:05:23.823914 kubelet[2899]: I0702 07:05:23.823696 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e5e86011-657c-4ac2-b524-1ce114e266fd-run\") pod \"kube-flannel-ds-7djtm\" (UID: \"e5e86011-657c-4ac2-b524-1ce114e266fd\") " pod="kube-flannel/kube-flannel-ds-7djtm" Jul 2 07:05:23.823914 kubelet[2899]: I0702 07:05:23.823721 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/e5e86011-657c-4ac2-b524-1ce114e266fd-flannel-cfg\") pod \"kube-flannel-ds-7djtm\" (UID: \"e5e86011-657c-4ac2-b524-1ce114e266fd\") " pod="kube-flannel/kube-flannel-ds-7djtm" Jul 2 07:05:23.824166 kubelet[2899]: I0702 07:05:23.823780 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6gc9\" (UniqueName: \"kubernetes.io/projected/e5e86011-657c-4ac2-b524-1ce114e266fd-kube-api-access-v6gc9\") pod \"kube-flannel-ds-7djtm\" (UID: \"e5e86011-657c-4ac2-b524-1ce114e266fd\") " pod="kube-flannel/kube-flannel-ds-7djtm" Jul 2 07:05:23.824166 kubelet[2899]: I0702 07:05:23.823822 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb7q2\" (UniqueName: \"kubernetes.io/projected/d1cb88b8-1845-44ab-bcdc-544928e6342b-kube-api-access-tb7q2\") pod \"kube-proxy-p6h7q\" (UID: \"d1cb88b8-1845-44ab-bcdc-544928e6342b\") " pod="kube-system/kube-proxy-p6h7q" Jul 2 07:05:23.824166 kubelet[2899]: I0702 07:05:23.823850 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/e5e86011-657c-4ac2-b524-1ce114e266fd-cni\") pod \"kube-flannel-ds-7djtm\" (UID: \"e5e86011-657c-4ac2-b524-1ce114e266fd\") " pod="kube-flannel/kube-flannel-ds-7djtm" Jul 2 07:05:23.824166 kubelet[2899]: I0702 07:05:23.823874 2899 reconciler_common.go:258] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e5e86011-657c-4ac2-b524-1ce114e266fd-xtables-lock\") pod \"kube-flannel-ds-7djtm\" (UID: \"e5e86011-657c-4ac2-b524-1ce114e266fd\") " pod="kube-flannel/kube-flannel-ds-7djtm" Jul 2 07:05:24.035270 containerd[1511]: time="2024-07-02T07:05:24.035210160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p6h7q,Uid:d1cb88b8-1845-44ab-bcdc-544928e6342b,Namespace:kube-system,Attempt:0,}" Jul 2 07:05:24.084489 containerd[1511]: time="2024-07-02T07:05:24.084396834Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:24.084489 containerd[1511]: time="2024-07-02T07:05:24.084447035Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:24.084765 containerd[1511]: time="2024-07-02T07:05:24.084706040Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:24.084765 containerd[1511]: time="2024-07-02T07:05:24.084726340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:24.105503 systemd[1]: run-containerd-runc-k8s.io-6d8814d50f5305ea7715f8498da42d6d64863c51ea6647c5b91f8afdb301bc35-runc.l7Mr8j.mount: Deactivated successfully. Jul 2 07:05:24.109968 systemd[1]: Started cri-containerd-6d8814d50f5305ea7715f8498da42d6d64863c51ea6647c5b91f8afdb301bc35.scope - libcontainer container 6d8814d50f5305ea7715f8498da42d6d64863c51ea6647c5b91f8afdb301bc35. Jul 2 07:05:24.131296 containerd[1511]: time="2024-07-02T07:05:24.131260268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-p6h7q,Uid:d1cb88b8-1845-44ab-bcdc-544928e6342b,Namespace:kube-system,Attempt:0,} returns sandbox id \"6d8814d50f5305ea7715f8498da42d6d64863c51ea6647c5b91f8afdb301bc35\"" Jul 2 07:05:24.136271 containerd[1511]: time="2024-07-02T07:05:24.136137054Z" level=info msg="CreateContainer within sandbox \"6d8814d50f5305ea7715f8498da42d6d64863c51ea6647c5b91f8afdb301bc35\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jul 2 07:05:24.330015 containerd[1511]: time="2024-07-02T07:05:24.329723996Z" level=info msg="CreateContainer within sandbox \"6d8814d50f5305ea7715f8498da42d6d64863c51ea6647c5b91f8afdb301bc35\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"1857b227c6c92463d13a839d38b3382614faf44faa1a83055acc933e7d98a96c\"" Jul 2 07:05:24.330971 containerd[1511]: time="2024-07-02T07:05:24.330927918Z" level=info msg="StartContainer for \"1857b227c6c92463d13a839d38b3382614faf44faa1a83055acc933e7d98a96c\"" Jul 2 07:05:24.356929 systemd[1]: Started cri-containerd-1857b227c6c92463d13a839d38b3382614faf44faa1a83055acc933e7d98a96c.scope - libcontainer container 1857b227c6c92463d13a839d38b3382614faf44faa1a83055acc933e7d98a96c. 
Jul 2 07:05:24.392513 containerd[1511]: time="2024-07-02T07:05:24.392464612Z" level=info msg="StartContainer for \"1857b227c6c92463d13a839d38b3382614faf44faa1a83055acc933e7d98a96c\" returns successfully" Jul 2 07:05:24.925596 containerd[1511]: time="2024-07-02T07:05:24.925530289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7djtm,Uid:e5e86011-657c-4ac2-b524-1ce114e266fd,Namespace:kube-flannel,Attempt:0,}" Jul 2 07:05:24.993360 containerd[1511]: time="2024-07-02T07:05:24.993280694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:24.993591 containerd[1511]: time="2024-07-02T07:05:24.993560299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:24.993705 containerd[1511]: time="2024-07-02T07:05:24.993647400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:24.993705 containerd[1511]: time="2024-07-02T07:05:24.993665700Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:25.023565 systemd[1]: Started cri-containerd-af49086cd6b5ff9948e13c0bcab6d5877532e177297c08b54b3247e9006db526.scope - libcontainer container af49086cd6b5ff9948e13c0bcab6d5877532e177297c08b54b3247e9006db526. Jul 2 07:05:25.066862 containerd[1511]: time="2024-07-02T07:05:25.066818384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-7djtm,Uid:e5e86011-657c-4ac2-b524-1ce114e266fd,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"af49086cd6b5ff9948e13c0bcab6d5877532e177297c08b54b3247e9006db526\"" Jul 2 07:05:25.069079 containerd[1511]: time="2024-07-02T07:05:25.068616116Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" Jul 2 07:05:25.945972 systemd[1]: run-containerd-runc-k8s.io-af49086cd6b5ff9948e13c0bcab6d5877532e177297c08b54b3247e9006db526-runc.cw8X62.mount: Deactivated successfully. Jul 2 07:05:27.122654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount202312603.mount: Deactivated successfully. 
Jul 2 07:05:27.355221 containerd[1511]: time="2024-07-02T07:05:27.354816566Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:27.357945 containerd[1511]: time="2024-07-02T07:05:27.357893618Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" Jul 2 07:05:27.362771 containerd[1511]: time="2024-07-02T07:05:27.362717901Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:27.366109 containerd[1511]: time="2024-07-02T07:05:27.366065858Z" level=info msg="ImageUpdate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:27.369699 containerd[1511]: time="2024-07-02T07:05:27.369658019Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:27.370561 containerd[1511]: time="2024-07-02T07:05:27.370521934Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.301859317s" Jul 2 07:05:27.370710 containerd[1511]: time="2024-07-02T07:05:27.370684736Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" Jul 2 07:05:27.373540 containerd[1511]: time="2024-07-02T07:05:27.373445983Z" level=info msg="CreateContainer within sandbox \"af49086cd6b5ff9948e13c0bcab6d5877532e177297c08b54b3247e9006db526\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" Jul 2 07:05:27.431143 containerd[1511]: time="2024-07-02T07:05:27.431083866Z" level=info msg="CreateContainer within sandbox \"af49086cd6b5ff9948e13c0bcab6d5877532e177297c08b54b3247e9006db526\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"a5150ae804b0b69af3ae67693429d6a48da10736590230dc8796f56232e0ad45\"" Jul 2 07:05:27.432005 containerd[1511]: time="2024-07-02T07:05:27.431946380Z" level=info msg="StartContainer for \"a5150ae804b0b69af3ae67693429d6a48da10736590230dc8796f56232e0ad45\"" Jul 2 07:05:27.462927 systemd[1]: Started cri-containerd-a5150ae804b0b69af3ae67693429d6a48da10736590230dc8796f56232e0ad45.scope - libcontainer container a5150ae804b0b69af3ae67693429d6a48da10736590230dc8796f56232e0ad45. Jul 2 07:05:27.485300 systemd[1]: cri-containerd-a5150ae804b0b69af3ae67693429d6a48da10736590230dc8796f56232e0ad45.scope: Deactivated successfully. Jul 2 07:05:27.490734 containerd[1511]: time="2024-07-02T07:05:27.490681981Z" level=info msg="StartContainer for \"a5150ae804b0b69af3ae67693429d6a48da10736590230dc8796f56232e0ad45\" returns successfully" Jul 2 07:05:28.040636 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a5150ae804b0b69af3ae67693429d6a48da10736590230dc8796f56232e0ad45-rootfs.mount: Deactivated successfully. 
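The pull above read 3,852,937 bytes for flannel-cni-plugin:v1.1.2 in 2.301859317 s, i.e. roughly 1.67 MB/s from Docker Hub. A quick Go check of that rate using only the figures from the log:

    package main

    import "fmt"

    func main() {
        const (
            bytesRead = 3852937     // "bytes read=3852937" above
            seconds   = 2.301859317 // "in 2.301859317s"
        )
        rate := float64(bytesRead) / seconds
        fmt.Printf("%.0f B/s (~%.2f MiB/s)\n", rate, rate/(1<<20)) // ~1.67e6 B/s, ~1.60 MiB/s
    }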
Jul 2 07:05:28.927599 kubelet[2899]: I0702 07:05:28.041589 2899 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/kube-proxy-p6h7q" podStartSLOduration=5.04153176 podStartE2EDuration="5.04153176s" podCreationTimestamp="2024-07-02 07:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:05:25.034492218 +0000 UTC m=+10.147308700" watchObservedRunningTime="2024-07-02 07:05:28.04153176 +0000 UTC m=+13.154348342" Jul 2 07:05:29.377069 containerd[1511]: time="2024-07-02T07:05:29.376992027Z" level=info msg="shim disconnected" id=a5150ae804b0b69af3ae67693429d6a48da10736590230dc8796f56232e0ad45 namespace=k8s.io Jul 2 07:05:29.377069 containerd[1511]: time="2024-07-02T07:05:29.377063028Z" level=warning msg="cleaning up after shim disconnected" id=a5150ae804b0b69af3ae67693429d6a48da10736590230dc8796f56232e0ad45 namespace=k8s.io Jul 2 07:05:29.377069 containerd[1511]: time="2024-07-02T07:05:29.377076828Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 07:05:29.391938 containerd[1511]: time="2024-07-02T07:05:29.390604852Z" level=warning msg="cleanup warnings time=\"2024-07-02T07:05:29Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jul 2 07:05:30.036345 containerd[1511]: time="2024-07-02T07:05:30.036293256Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" Jul 2 07:05:32.069841 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2019391300.mount: Deactivated successfully. Jul 2 07:05:33.456820 containerd[1511]: time="2024-07-02T07:05:33.456687053Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:33.463366 containerd[1511]: time="2024-07-02T07:05:33.463327758Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" Jul 2 07:05:33.468544 containerd[1511]: time="2024-07-02T07:05:33.468515240Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:33.473765 containerd[1511]: time="2024-07-02T07:05:33.473723722Z" level=info msg="ImageUpdate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:33.480225 containerd[1511]: time="2024-07-02T07:05:33.480146623Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jul 2 07:05:33.481528 containerd[1511]: time="2024-07-02T07:05:33.481485944Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 3.444900184s" Jul 2 07:05:33.481624 containerd[1511]: time="2024-07-02T07:05:33.481532545Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" Jul 2 07:05:33.483938 containerd[1511]: 
time="2024-07-02T07:05:33.483907582Z" level=info msg="CreateContainer within sandbox \"af49086cd6b5ff9948e13c0bcab6d5877532e177297c08b54b3247e9006db526\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jul 2 07:05:33.519664 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount699441774.mount: Deactivated successfully. Jul 2 07:05:33.536538 containerd[1511]: time="2024-07-02T07:05:33.536347309Z" level=info msg="CreateContainer within sandbox \"af49086cd6b5ff9948e13c0bcab6d5877532e177297c08b54b3247e9006db526\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"542429e589dec68472fe8a9639342f0cd32dc5fa1e8760fab8f198f8700d38aa\"" Jul 2 07:05:33.538291 containerd[1511]: time="2024-07-02T07:05:33.537222923Z" level=info msg="StartContainer for \"542429e589dec68472fe8a9639342f0cd32dc5fa1e8760fab8f198f8700d38aa\"" Jul 2 07:05:33.570168 systemd[1]: Started cri-containerd-542429e589dec68472fe8a9639342f0cd32dc5fa1e8760fab8f198f8700d38aa.scope - libcontainer container 542429e589dec68472fe8a9639342f0cd32dc5fa1e8760fab8f198f8700d38aa. Jul 2 07:05:33.592273 systemd[1]: cri-containerd-542429e589dec68472fe8a9639342f0cd32dc5fa1e8760fab8f198f8700d38aa.scope: Deactivated successfully. Jul 2 07:05:33.596577 containerd[1511]: time="2024-07-02T07:05:33.596536358Z" level=info msg="StartContainer for \"542429e589dec68472fe8a9639342f0cd32dc5fa1e8760fab8f198f8700d38aa\" returns successfully" Jul 2 07:05:33.654271 kubelet[2899]: I0702 07:05:33.650013 2899 kubelet_node_status.go:497] "Fast updating node status as it just became ready" Jul 2 07:05:33.735408 kubelet[2899]: I0702 07:05:33.676359 2899 topology_manager.go:215] "Topology Admit Handler" podUID="d2ca2c9e-94ea-47c6-bcd7-582bcc98fec2" podNamespace="kube-system" podName="coredns-76f75df574-6xqd2" Jul 2 07:05:33.735408 kubelet[2899]: I0702 07:05:33.681306 2899 topology_manager.go:215] "Topology Admit Handler" podUID="74c6cca5-719a-4131-a838-d0fc815d6f87" podNamespace="kube-system" podName="coredns-76f75df574-wh9cp" Jul 2 07:05:33.735408 kubelet[2899]: I0702 07:05:33.690498 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v82k9\" (UniqueName: \"kubernetes.io/projected/d2ca2c9e-94ea-47c6-bcd7-582bcc98fec2-kube-api-access-v82k9\") pod \"coredns-76f75df574-6xqd2\" (UID: \"d2ca2c9e-94ea-47c6-bcd7-582bcc98fec2\") " pod="kube-system/coredns-76f75df574-6xqd2" Jul 2 07:05:33.735408 kubelet[2899]: I0702 07:05:33.690544 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d2ca2c9e-94ea-47c6-bcd7-582bcc98fec2-config-volume\") pod \"coredns-76f75df574-6xqd2\" (UID: \"d2ca2c9e-94ea-47c6-bcd7-582bcc98fec2\") " pod="kube-system/coredns-76f75df574-6xqd2" Jul 2 07:05:33.684455 systemd[1]: Created slice kubepods-burstable-podd2ca2c9e_94ea_47c6_bcd7_582bcc98fec2.slice - libcontainer container kubepods-burstable-podd2ca2c9e_94ea_47c6_bcd7_582bcc98fec2.slice. Jul 2 07:05:33.700892 systemd[1]: Created slice kubepods-burstable-pod74c6cca5_719a_4131_a838_d0fc815d6f87.slice - libcontainer container kubepods-burstable-pod74c6cca5_719a_4131_a838_d0fc815d6f87.slice. 
Jul 2 07:05:33.791233 kubelet[2899]: I0702 07:05:33.791200 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/74c6cca5-719a-4131-a838-d0fc815d6f87-config-volume\") pod \"coredns-76f75df574-wh9cp\" (UID: \"74c6cca5-719a-4131-a838-d0fc815d6f87\") " pod="kube-system/coredns-76f75df574-wh9cp" Jul 2 07:05:33.791414 kubelet[2899]: I0702 07:05:33.791260 2899 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfqt5\" (UniqueName: \"kubernetes.io/projected/74c6cca5-719a-4131-a838-d0fc815d6f87-kube-api-access-bfqt5\") pod \"coredns-76f75df574-wh9cp\" (UID: \"74c6cca5-719a-4131-a838-d0fc815d6f87\") " pod="kube-system/coredns-76f75df574-wh9cp" Jul 2 07:05:34.036834 containerd[1511]: time="2024-07-02T07:05:34.036667588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6xqd2,Uid:d2ca2c9e-94ea-47c6-bcd7-582bcc98fec2,Namespace:kube-system,Attempt:0,}" Jul 2 07:05:34.037084 containerd[1511]: time="2024-07-02T07:05:34.036667788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wh9cp,Uid:74c6cca5-719a-4131-a838-d0fc815d6f87,Namespace:kube-system,Attempt:0,}" Jul 2 07:05:34.519045 systemd[1]: run-containerd-runc-k8s.io-542429e589dec68472fe8a9639342f0cd32dc5fa1e8760fab8f198f8700d38aa-runc.nyqhmb.mount: Deactivated successfully. Jul 2 07:05:34.519735 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-542429e589dec68472fe8a9639342f0cd32dc5fa1e8760fab8f198f8700d38aa-rootfs.mount: Deactivated successfully. Jul 2 07:05:35.129663 containerd[1511]: time="2024-07-02T07:05:35.129589283Z" level=info msg="shim disconnected" id=542429e589dec68472fe8a9639342f0cd32dc5fa1e8760fab8f198f8700d38aa namespace=k8s.io Jul 2 07:05:35.129663 containerd[1511]: time="2024-07-02T07:05:35.129654684Z" level=warning msg="cleaning up after shim disconnected" id=542429e589dec68472fe8a9639342f0cd32dc5fa1e8760fab8f198f8700d38aa namespace=k8s.io Jul 2 07:05:35.129663 containerd[1511]: time="2024-07-02T07:05:35.129665785Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 07:05:35.202205 containerd[1511]: time="2024-07-02T07:05:35.202140100Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wh9cp,Uid:74c6cca5-719a-4131-a838-d0fc815d6f87,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"c295895002f7f8dad0c21f345ef2a25ac23d18dfe444f2a3d5c383b1c07f76fe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 2 07:05:35.202480 kubelet[2899]: E0702 07:05:35.202458 2899 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c295895002f7f8dad0c21f345ef2a25ac23d18dfe444f2a3d5c383b1c07f76fe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 2 07:05:35.202857 kubelet[2899]: E0702 07:05:35.202522 2899 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c295895002f7f8dad0c21f345ef2a25ac23d18dfe444f2a3d5c383b1c07f76fe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-wh9cp" Jul 2 07:05:35.202857 
kubelet[2899]: E0702 07:05:35.202551 2899 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c295895002f7f8dad0c21f345ef2a25ac23d18dfe444f2a3d5c383b1c07f76fe\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-wh9cp" Jul 2 07:05:35.203230 kubelet[2899]: E0702 07:05:35.203109 2899 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-wh9cp_kube-system(74c6cca5-719a-4131-a838-d0fc815d6f87)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-wh9cp_kube-system(74c6cca5-719a-4131-a838-d0fc815d6f87)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c295895002f7f8dad0c21f345ef2a25ac23d18dfe444f2a3d5c383b1c07f76fe\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-76f75df574-wh9cp" podUID="74c6cca5-719a-4131-a838-d0fc815d6f87" Jul 2 07:05:35.204273 containerd[1511]: time="2024-07-02T07:05:35.204226132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6xqd2,Uid:d2ca2c9e-94ea-47c6-bcd7-582bcc98fec2,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"7d6b29ba0cb3bf9f2b9d271436bc2288fd7ad372fc23e9acfb70b2c5f78e0ab8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 2 07:05:35.204670 kubelet[2899]: E0702 07:05:35.204614 2899 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6b29ba0cb3bf9f2b9d271436bc2288fd7ad372fc23e9acfb70b2c5f78e0ab8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" Jul 2 07:05:35.204773 kubelet[2899]: E0702 07:05:35.204677 2899 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6b29ba0cb3bf9f2b9d271436bc2288fd7ad372fc23e9acfb70b2c5f78e0ab8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-6xqd2" Jul 2 07:05:35.204773 kubelet[2899]: E0702 07:05:35.204702 2899 kuberuntime_manager.go:1172] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d6b29ba0cb3bf9f2b9d271436bc2288fd7ad372fc23e9acfb70b2c5f78e0ab8\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-76f75df574-6xqd2" Jul 2 07:05:35.204862 kubelet[2899]: E0702 07:05:35.204789 2899 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-76f75df574-6xqd2_kube-system(d2ca2c9e-94ea-47c6-bcd7-582bcc98fec2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-76f75df574-6xqd2_kube-system(d2ca2c9e-94ea-47c6-bcd7-582bcc98fec2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d6b29ba0cb3bf9f2b9d271436bc2288fd7ad372fc23e9acfb70b2c5f78e0ab8\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no 
such file or directory\"" pod="kube-system/coredns-76f75df574-6xqd2" podUID="d2ca2c9e-94ea-47c6-bcd7-582bcc98fec2" Jul 2 07:05:35.516376 systemd[1]: run-netns-cni\x2d60eabb7b\x2d4fd9\x2d6176\x2dc746\x2df82bde0cec15.mount: Deactivated successfully. Jul 2 07:05:35.516487 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c295895002f7f8dad0c21f345ef2a25ac23d18dfe444f2a3d5c383b1c07f76fe-shm.mount: Deactivated successfully. Jul 2 07:05:35.516563 systemd[1]: run-netns-cni\x2d7be387e3\x2d896f\x2d4269\x2d865a\x2d4d0b0ecc72e3.mount: Deactivated successfully. Jul 2 07:05:35.516630 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-7d6b29ba0cb3bf9f2b9d271436bc2288fd7ad372fc23e9acfb70b2c5f78e0ab8-shm.mount: Deactivated successfully. Jul 2 07:05:36.052739 containerd[1511]: time="2024-07-02T07:05:36.052685578Z" level=info msg="CreateContainer within sandbox \"af49086cd6b5ff9948e13c0bcab6d5877532e177297c08b54b3247e9006db526\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" Jul 2 07:05:36.215862 containerd[1511]: time="2024-07-02T07:05:36.215816759Z" level=info msg="CreateContainer within sandbox \"af49086cd6b5ff9948e13c0bcab6d5877532e177297c08b54b3247e9006db526\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"f05808298c4faccc71c11cad45875f868999a941b245172f5b329c0b838583e4\"" Jul 2 07:05:36.216493 containerd[1511]: time="2024-07-02T07:05:36.216461369Z" level=info msg="StartContainer for \"f05808298c4faccc71c11cad45875f868999a941b245172f5b329c0b838583e4\"" Jul 2 07:05:36.246912 systemd[1]: Started cri-containerd-f05808298c4faccc71c11cad45875f868999a941b245172f5b329c0b838583e4.scope - libcontainer container f05808298c4faccc71c11cad45875f868999a941b245172f5b329c0b838583e4. Jul 2 07:05:36.279833 containerd[1511]: time="2024-07-02T07:05:36.279785932Z" level=info msg="StartContainer for \"f05808298c4faccc71c11cad45875f868999a941b245172f5b329c0b838583e4\" returns successfully" Jul 2 07:05:37.069118 kubelet[2899]: I0702 07:05:37.068515 2899 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-7djtm" podStartSLOduration=5.654734671 podStartE2EDuration="14.068452514s" podCreationTimestamp="2024-07-02 07:05:23 +0000 UTC" firstStartedPulling="2024-07-02 07:05:25.068001305 +0000 UTC m=+10.180817787" lastFinishedPulling="2024-07-02 07:05:33.481719148 +0000 UTC m=+18.594535630" observedRunningTime="2024-07-02 07:05:37.068198511 +0000 UTC m=+22.181015093" watchObservedRunningTime="2024-07-02 07:05:37.068452514 +0000 UTC m=+22.181268996" Jul 2 07:05:37.406277 systemd-networkd[1273]: flannel.1: Link UP Jul 2 07:05:37.406287 systemd-networkd[1273]: flannel.1: Gained carrier Jul 2 07:05:39.426911 systemd-networkd[1273]: flannel.1: Gained IPv6LL Jul 2 07:05:48.976552 containerd[1511]: time="2024-07-02T07:05:48.976483890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wh9cp,Uid:74c6cca5-719a-4131-a838-d0fc815d6f87,Namespace:kube-system,Attempt:0,}" Jul 2 07:05:49.022198 systemd-networkd[1273]: cni0: Link UP Jul 2 07:05:49.022206 systemd-networkd[1273]: cni0: Gained carrier Jul 2 07:05:49.025413 systemd-networkd[1273]: cni0: Lost carrier Jul 2 07:05:49.049860 systemd-networkd[1273]: veth04e31919: Link UP Jul 2 07:05:49.055085 kernel: cni0: port 1(veth04e31919) entered blocking state Jul 2 07:05:49.055192 kernel: cni0: port 1(veth04e31919) entered disabled state Jul 2 07:05:49.057220 kernel: device veth04e31919 entered promiscuous mode Jul 2 07:05:49.063850 kernel: cni0: port 
1(veth04e31919) entered blocking state Jul 2 07:05:49.063936 kernel: cni0: port 1(veth04e31919) entered forwarding state Jul 2 07:05:49.063971 kernel: cni0: port 1(veth04e31919) entered disabled state Jul 2 07:05:49.074200 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth04e31919: link becomes ready Jul 2 07:05:49.074282 kernel: cni0: port 1(veth04e31919) entered blocking state Jul 2 07:05:49.074308 kernel: cni0: port 1(veth04e31919) entered forwarding state Jul 2 07:05:49.076223 systemd-networkd[1273]: veth04e31919: Gained carrier Jul 2 07:05:49.077085 systemd-networkd[1273]: cni0: Gained carrier Jul 2 07:05:49.079982 containerd[1511]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00001e928), "name":"cbr0", "type":"bridge"} Jul 2 07:05:49.079982 containerd[1511]: delegateAdd: netconf sent to delegate plugin: Jul 2 07:05:49.100900 containerd[1511]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-07-02T07:05:49.100802250Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:49.101167 containerd[1511]: time="2024-07-02T07:05:49.100879851Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:49.101167 containerd[1511]: time="2024-07-02T07:05:49.100903152Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:49.101167 containerd[1511]: time="2024-07-02T07:05:49.100920852Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:49.122948 systemd[1]: Started cri-containerd-c7b7aed2754d30930cb274715e1bbcec9df9bd12bbeffca3c4721dc81b615914.scope - libcontainer container c7b7aed2754d30930cb274715e1bbcec9df9bd12bbeffca3c4721dc81b615914. 
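The containerd lines above show the flannel CNI plugin assembling its delegate "bridge" configuration: the node's pod subnet 192.168.0.0/24, a route whose mask bytes ff.ff.80.00 correspond to the /17 flannel network ("dst":"192.168.0.0/17"), and an MTU of 1450 for the overlay. The plugin derives these from /run/flannel/subnet.env, the file whose absence caused the earlier loadFlannelSubnetEnv sandbox failures; flanneld writes it once it is up. A hedged sketch of parsing such a file and decoding the logged mask; the file contents below are illustrative, since the log only shows the derived delegate config:

    package main

    import (
        "bufio"
        "fmt"
        "net"
        "strings"
    )

    // Illustrative contents in the usual KEY=VALUE form flannel writes to
    // /run/flannel/subnet.env; the real values on this node are not in the log.
    const subnetEnv = `FLANNEL_NETWORK=192.168.0.0/17
    FLANNEL_SUBNET=192.168.0.1/24
    FLANNEL_MTU=1450
    FLANNEL_IPMASQ=true
    `

    func main() {
        env := map[string]string{}
        sc := bufio.NewScanner(strings.NewReader(subnetEnv))
        for sc.Scan() {
            if k, v, ok := strings.Cut(strings.TrimSpace(sc.Text()), "="); ok {
                env[k] = v
            }
        }
        fmt.Println("pod subnet for this node:", env["FLANNEL_SUBNET"], "mtu:", env["FLANNEL_MTU"])

        // The delegate config in the log carries Mask:net.IPMask{0xff, 0xff, 0x80, 0x0},
        // which is the /17 route ("dst":"192.168.0.0/17") handed to the bridge plugin.
        ones, bits := net.IPMask{0xff, 0xff, 0x80, 0x00}.Size()
        fmt.Printf("mask ff.ff.80.00 = /%d of %d bits\n", ones, bits)
    }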
Jul 2 07:05:49.158408 containerd[1511]: time="2024-07-02T07:05:49.158363418Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-wh9cp,Uid:74c6cca5-719a-4131-a838-d0fc815d6f87,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7b7aed2754d30930cb274715e1bbcec9df9bd12bbeffca3c4721dc81b615914\"" Jul 2 07:05:49.162139 containerd[1511]: time="2024-07-02T07:05:49.162100268Z" level=info msg="CreateContainer within sandbox \"c7b7aed2754d30930cb274715e1bbcec9df9bd12bbeffca3c4721dc81b615914\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:05:49.196446 containerd[1511]: time="2024-07-02T07:05:49.196398125Z" level=info msg="CreateContainer within sandbox \"c7b7aed2754d30930cb274715e1bbcec9df9bd12bbeffca3c4721dc81b615914\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"97a5328e4a99d7b5bea9483a1d52453e6d93c694d5114b469048f11b659abaf4\"" Jul 2 07:05:49.196941 containerd[1511]: time="2024-07-02T07:05:49.196911232Z" level=info msg="StartContainer for \"97a5328e4a99d7b5bea9483a1d52453e6d93c694d5114b469048f11b659abaf4\"" Jul 2 07:05:49.220938 systemd[1]: Started cri-containerd-97a5328e4a99d7b5bea9483a1d52453e6d93c694d5114b469048f11b659abaf4.scope - libcontainer container 97a5328e4a99d7b5bea9483a1d52453e6d93c694d5114b469048f11b659abaf4. Jul 2 07:05:49.250316 containerd[1511]: time="2024-07-02T07:05:49.250210842Z" level=info msg="StartContainer for \"97a5328e4a99d7b5bea9483a1d52453e6d93c694d5114b469048f11b659abaf4\" returns successfully" Jul 2 07:05:50.092838 kubelet[2899]: I0702 07:05:50.092789 2899 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-wh9cp" podStartSLOduration=27.092725965 podStartE2EDuration="27.092725965s" podCreationTimestamp="2024-07-02 07:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:05:50.092161258 +0000 UTC m=+35.204977740" watchObservedRunningTime="2024-07-02 07:05:50.092725965 +0000 UTC m=+35.205542547" Jul 2 07:05:50.818967 systemd-networkd[1273]: cni0: Gained IPv6LL Jul 2 07:05:50.882940 systemd-networkd[1273]: veth04e31919: Gained IPv6LL Jul 2 07:05:50.975867 containerd[1511]: time="2024-07-02T07:05:50.975809039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6xqd2,Uid:d2ca2c9e-94ea-47c6-bcd7-582bcc98fec2,Namespace:kube-system,Attempt:0,}" Jul 2 07:05:51.023723 systemd-networkd[1273]: veth016a3894: Link UP Jul 2 07:05:51.028966 kernel: cni0: port 2(veth016a3894) entered blocking state Jul 2 07:05:51.029061 kernel: cni0: port 2(veth016a3894) entered disabled state Jul 2 07:05:51.031093 kernel: device veth016a3894 entered promiscuous mode Jul 2 07:05:51.041829 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth0: link becomes ready Jul 2 07:05:51.041892 kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth016a3894: link becomes ready Jul 2 07:05:51.041932 kernel: cni0: port 2(veth016a3894) entered blocking state Jul 2 07:05:51.044745 kernel: cni0: port 2(veth016a3894) entered forwarding state Jul 2 07:05:51.044888 systemd-networkd[1273]: veth016a3894: Gained carrier Jul 2 07:05:51.046488 containerd[1511]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 
0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc000022928), "name":"cbr0", "type":"bridge"} Jul 2 07:05:51.046488 containerd[1511]: delegateAdd: netconf sent to delegate plugin: Jul 2 07:05:51.065858 containerd[1511]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2024-07-02T07:05:51.065724720Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jul 2 07:05:51.065858 containerd[1511]: time="2024-07-02T07:05:51.065796921Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:51.066661 containerd[1511]: time="2024-07-02T07:05:51.066085325Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jul 2 07:05:51.066811 containerd[1511]: time="2024-07-02T07:05:51.066642732Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jul 2 07:05:51.098907 systemd[1]: Started cri-containerd-bd0ebde733a11a9df175cd182fd91c0090de0e0d0680defbc3c1726d604ffc33.scope - libcontainer container bd0ebde733a11a9df175cd182fd91c0090de0e0d0680defbc3c1726d604ffc33. Jul 2 07:05:51.137153 containerd[1511]: time="2024-07-02T07:05:51.137112356Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-76f75df574-6xqd2,Uid:d2ca2c9e-94ea-47c6-bcd7-582bcc98fec2,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd0ebde733a11a9df175cd182fd91c0090de0e0d0680defbc3c1726d604ffc33\"" Jul 2 07:05:51.141743 containerd[1511]: time="2024-07-02T07:05:51.141706616Z" level=info msg="CreateContainer within sandbox \"bd0ebde733a11a9df175cd182fd91c0090de0e0d0680defbc3c1726d604ffc33\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jul 2 07:05:51.177646 containerd[1511]: time="2024-07-02T07:05:51.177595386Z" level=info msg="CreateContainer within sandbox \"bd0ebde733a11a9df175cd182fd91c0090de0e0d0680defbc3c1726d604ffc33\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"04496eeaa40023b8a81801e3a1c5534025f85165ddf8d7753dfed03baafd24e6\"" Jul 2 07:05:51.180132 containerd[1511]: time="2024-07-02T07:05:51.178174794Z" level=info msg="StartContainer for \"04496eeaa40023b8a81801e3a1c5534025f85165ddf8d7753dfed03baafd24e6\"" Jul 2 07:05:51.207926 systemd[1]: Started cri-containerd-04496eeaa40023b8a81801e3a1c5534025f85165ddf8d7753dfed03baafd24e6.scope - libcontainer container 04496eeaa40023b8a81801e3a1c5534025f85165ddf8d7753dfed03baafd24e6. Jul 2 07:05:51.236242 containerd[1511]: time="2024-07-02T07:05:51.236036452Z" level=info msg="StartContainer for \"04496eeaa40023b8a81801e3a1c5534025f85165ddf8d7753dfed03baafd24e6\" returns successfully" Jul 2 07:05:52.007266 systemd[1]: run-containerd-runc-k8s.io-bd0ebde733a11a9df175cd182fd91c0090de0e0d0680defbc3c1726d604ffc33-runc.wBe7HT.mount: Deactivated successfully. 
Jul 2 07:05:52.483052 systemd-networkd[1273]: veth016a3894: Gained IPv6LL Jul 2 07:05:54.047400 kubelet[2899]: I0702 07:05:54.047354 2899 pod_startup_latency_tracker.go:102] "Observed pod startup duration" pod="kube-system/coredns-76f75df574-6xqd2" podStartSLOduration=31.047307068 podStartE2EDuration="31.047307068s" podCreationTimestamp="2024-07-02 07:05:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-07-02 07:05:52.104777528 +0000 UTC m=+37.217594010" watchObservedRunningTime="2024-07-02 07:05:54.047307068 +0000 UTC m=+39.160123550" Jul 2 07:07:12.461020 systemd[1]: Started sshd@5-10.200.8.10:22-10.200.16.10:36450.service - OpenSSH per-connection server daemon (10.200.16.10:36450). Jul 2 07:07:13.102332 sshd[4146]: Accepted publickey for core from 10.200.16.10 port 36450 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:07:13.104132 sshd[4146]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:13.109647 systemd-logind[1500]: New session 8 of user core. Jul 2 07:07:13.112016 systemd[1]: Started session-8.scope - Session 8 of User core. Jul 2 07:07:13.622982 sshd[4146]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:13.626458 systemd[1]: sshd@5-10.200.8.10:22-10.200.16.10:36450.service: Deactivated successfully. Jul 2 07:07:13.627576 systemd[1]: session-8.scope: Deactivated successfully. Jul 2 07:07:13.628523 systemd-logind[1500]: Session 8 logged out. Waiting for processes to exit. Jul 2 07:07:13.629688 systemd-logind[1500]: Removed session 8. Jul 2 07:07:18.738219 systemd[1]: Started sshd@6-10.200.8.10:22-10.200.16.10:43332.service - OpenSSH per-connection server daemon (10.200.16.10:43332). Jul 2 07:07:19.387562 sshd[4202]: Accepted publickey for core from 10.200.16.10 port 43332 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:07:19.389288 sshd[4202]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:19.395804 systemd-logind[1500]: New session 9 of user core. Jul 2 07:07:19.397929 systemd[1]: Started session-9.scope - Session 9 of User core. Jul 2 07:07:19.900322 sshd[4202]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:19.904002 systemd[1]: sshd@6-10.200.8.10:22-10.200.16.10:43332.service: Deactivated successfully. Jul 2 07:07:19.905138 systemd[1]: session-9.scope: Deactivated successfully. Jul 2 07:07:19.905919 systemd-logind[1500]: Session 9 logged out. Waiting for processes to exit. Jul 2 07:07:19.906819 systemd-logind[1500]: Removed session 9. Jul 2 07:07:25.013966 systemd[1]: Started sshd@7-10.200.8.10:22-10.200.16.10:43338.service - OpenSSH per-connection server daemon (10.200.16.10:43338). Jul 2 07:07:25.649439 sshd[4238]: Accepted publickey for core from 10.200.16.10 port 43338 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:07:25.651171 sshd[4238]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:25.656605 systemd-logind[1500]: New session 10 of user core. Jul 2 07:07:25.661947 systemd[1]: Started session-10.scope - Session 10 of User core. Jul 2 07:07:26.172482 sshd[4238]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:26.176063 systemd[1]: sshd@7-10.200.8.10:22-10.200.16.10:43338.service: Deactivated successfully. Jul 2 07:07:26.177199 systemd[1]: session-10.scope: Deactivated successfully. Jul 2 07:07:26.178095 systemd-logind[1500]: Session 10 logged out. 
Waiting for processes to exit. Jul 2 07:07:26.179333 systemd-logind[1500]: Removed session 10. Jul 2 07:07:26.289920 systemd[1]: Started sshd@8-10.200.8.10:22-10.200.16.10:43348.service - OpenSSH per-connection server daemon (10.200.16.10:43348). Jul 2 07:07:26.929787 sshd[4250]: Accepted publickey for core from 10.200.16.10 port 43348 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:07:26.931498 sshd[4250]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:26.939187 systemd-logind[1500]: New session 11 of user core. Jul 2 07:07:26.941942 systemd[1]: Started session-11.scope - Session 11 of User core. Jul 2 07:07:27.474715 sshd[4250]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:27.478213 systemd[1]: sshd@8-10.200.8.10:22-10.200.16.10:43348.service: Deactivated successfully. Jul 2 07:07:27.479296 systemd[1]: session-11.scope: Deactivated successfully. Jul 2 07:07:27.479986 systemd-logind[1500]: Session 11 logged out. Waiting for processes to exit. Jul 2 07:07:27.480890 systemd-logind[1500]: Removed session 11. Jul 2 07:07:27.592808 systemd[1]: Started sshd@9-10.200.8.10:22-10.200.16.10:43364.service - OpenSSH per-connection server daemon (10.200.16.10:43364). Jul 2 07:07:28.233490 sshd[4260]: Accepted publickey for core from 10.200.16.10 port 43364 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:07:28.235081 sshd[4260]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:28.239614 systemd-logind[1500]: New session 12 of user core. Jul 2 07:07:28.244962 systemd[1]: Started session-12.scope - Session 12 of User core. Jul 2 07:07:28.762316 sshd[4260]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:28.766951 systemd[1]: sshd@9-10.200.8.10:22-10.200.16.10:43364.service: Deactivated successfully. Jul 2 07:07:28.767997 systemd[1]: session-12.scope: Deactivated successfully. Jul 2 07:07:28.769876 systemd-logind[1500]: Session 12 logged out. Waiting for processes to exit. Jul 2 07:07:28.771370 systemd-logind[1500]: Removed session 12. Jul 2 07:07:30.578331 update_engine[1501]: I0702 07:07:30.578272 1501 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jul 2 07:07:30.578331 update_engine[1501]: I0702 07:07:30.578322 1501 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jul 2 07:07:30.581669 update_engine[1501]: I0702 07:07:30.578574 1501 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jul 2 07:07:30.581969 update_engine[1501]: I0702 07:07:30.581941 1501 omaha_request_params.cc:62] Current group set to stable Jul 2 07:07:30.582261 update_engine[1501]: I0702 07:07:30.582095 1501 update_attempter.cc:499] Already updated boot flags. Skipping. Jul 2 07:07:30.582261 update_engine[1501]: I0702 07:07:30.582109 1501 update_attempter.cc:643] Scheduling an action processor start. 
Jul 2 07:07:30.582261 update_engine[1501]: I0702 07:07:30.582129 1501 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 07:07:30.582261 update_engine[1501]: I0702 07:07:30.582170 1501 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jul 2 07:07:30.582261 update_engine[1501]: I0702 07:07:30.582259 1501 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 07:07:30.582604 update_engine[1501]: I0702 07:07:30.582267 1501 omaha_request_action.cc:272] Request: Jul 2 07:07:30.582604 update_engine[1501]: Jul 2 07:07:30.582604 update_engine[1501]: Jul 2 07:07:30.582604 update_engine[1501]: Jul 2 07:07:30.582604 update_engine[1501]: Jul 2 07:07:30.582604 update_engine[1501]: Jul 2 07:07:30.582604 update_engine[1501]: Jul 2 07:07:30.582604 update_engine[1501]: Jul 2 07:07:30.582604 update_engine[1501]: Jul 2 07:07:30.582604 update_engine[1501]: I0702 07:07:30.582274 1501 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 07:07:30.583661 locksmithd[1540]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jul 2 07:07:30.584439 update_engine[1501]: I0702 07:07:30.584410 1501 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 07:07:30.584723 update_engine[1501]: I0702 07:07:30.584693 1501 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 07:07:30.611479 update_engine[1501]: E0702 07:07:30.611430 1501 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 07:07:30.611654 update_engine[1501]: I0702 07:07:30.611627 1501 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jul 2 07:07:33.880406 systemd[1]: Started sshd@10-10.200.8.10:22-10.200.16.10:59382.service - OpenSSH per-connection server daemon (10.200.16.10:59382). Jul 2 07:07:34.691784 sshd[4314]: Accepted publickey for core from 10.200.16.10 port 59382 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:07:34.693289 sshd[4314]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:34.698128 systemd-logind[1500]: New session 13 of user core. Jul 2 07:07:34.700937 systemd[1]: Started session-13.scope - Session 13 of User core. Jul 2 07:07:35.202555 sshd[4314]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:35.206306 systemd[1]: sshd@10-10.200.8.10:22-10.200.16.10:59382.service: Deactivated successfully. Jul 2 07:07:35.207468 systemd[1]: session-13.scope: Deactivated successfully. Jul 2 07:07:35.208380 systemd-logind[1500]: Session 13 logged out. Waiting for processes to exit. Jul 2 07:07:35.209487 systemd-logind[1500]: Removed session 13. Jul 2 07:07:40.319269 systemd[1]: Started sshd@11-10.200.8.10:22-10.200.16.10:43760.service - OpenSSH per-connection server daemon (10.200.16.10:43760). Jul 2 07:07:40.578796 update_engine[1501]: I0702 07:07:40.578444 1501 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 07:07:40.579261 update_engine[1501]: I0702 07:07:40.578967 1501 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 07:07:40.579261 update_engine[1501]: I0702 07:07:40.579196 1501 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jul 2 07:07:40.605277 update_engine[1501]: E0702 07:07:40.605227 1501 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 07:07:40.605441 update_engine[1501]: I0702 07:07:40.605394 1501 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jul 2 07:07:40.966740 sshd[4347]: Accepted publickey for core from 10.200.16.10 port 43760 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:07:40.966483 sshd[4347]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:40.971301 systemd-logind[1500]: New session 14 of user core. Jul 2 07:07:40.977936 systemd[1]: Started session-14.scope - Session 14 of User core. Jul 2 07:07:41.479516 sshd[4347]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:41.482836 systemd[1]: sshd@11-10.200.8.10:22-10.200.16.10:43760.service: Deactivated successfully. Jul 2 07:07:41.483913 systemd[1]: session-14.scope: Deactivated successfully. Jul 2 07:07:41.484717 systemd-logind[1500]: Session 14 logged out. Waiting for processes to exit. Jul 2 07:07:41.485808 systemd-logind[1500]: Removed session 14. Jul 2 07:07:46.593484 systemd[1]: Started sshd@12-10.200.8.10:22-10.200.16.10:43768.service - OpenSSH per-connection server daemon (10.200.16.10:43768). Jul 2 07:07:47.230828 sshd[4380]: Accepted publickey for core from 10.200.16.10 port 43768 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:07:47.232441 sshd[4380]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:47.237271 systemd-logind[1500]: New session 15 of user core. Jul 2 07:07:47.243957 systemd[1]: Started session-15.scope - Session 15 of User core. Jul 2 07:07:47.740807 sshd[4380]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:47.743627 systemd[1]: sshd@12-10.200.8.10:22-10.200.16.10:43768.service: Deactivated successfully. Jul 2 07:07:47.744892 systemd[1]: session-15.scope: Deactivated successfully. Jul 2 07:07:47.744917 systemd-logind[1500]: Session 15 logged out. Waiting for processes to exit. Jul 2 07:07:47.746041 systemd-logind[1500]: Removed session 15. Jul 2 07:07:47.859287 systemd[1]: Started sshd@13-10.200.8.10:22-10.200.16.10:43780.service - OpenSSH per-connection server daemon (10.200.16.10:43780). Jul 2 07:07:48.495308 sshd[4399]: Accepted publickey for core from 10.200.16.10 port 43780 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:07:48.497096 sshd[4399]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:48.502028 systemd-logind[1500]: New session 16 of user core. Jul 2 07:07:48.505945 systemd[1]: Started session-16.scope - Session 16 of User core. Jul 2 07:07:49.175266 sshd[4399]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:49.178704 systemd[1]: sshd@13-10.200.8.10:22-10.200.16.10:43780.service: Deactivated successfully. Jul 2 07:07:49.179832 systemd[1]: session-16.scope: Deactivated successfully. Jul 2 07:07:49.180652 systemd-logind[1500]: Session 16 logged out. Waiting for processes to exit. Jul 2 07:07:49.181853 systemd-logind[1500]: Removed session 16. Jul 2 07:07:49.288498 systemd[1]: Started sshd@14-10.200.8.10:22-10.200.16.10:38318.service - OpenSSH per-connection server daemon (10.200.16.10:38318). 
Jul 2 07:07:49.929289 sshd[4424]: Accepted publickey for core from 10.200.16.10 port 38318 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:07:49.931024 sshd[4424]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:49.936644 systemd-logind[1500]: New session 17 of user core. Jul 2 07:07:49.940963 systemd[1]: Started session-17.scope - Session 17 of User core. Jul 2 07:07:50.578260 update_engine[1501]: I0702 07:07:50.578205 1501 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 07:07:50.578789 update_engine[1501]: I0702 07:07:50.578548 1501 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 07:07:50.578789 update_engine[1501]: I0702 07:07:50.578778 1501 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 07:07:50.599241 update_engine[1501]: E0702 07:07:50.599207 1501 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 07:07:50.599390 update_engine[1501]: I0702 07:07:50.599349 1501 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jul 2 07:07:51.793689 sshd[4424]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:51.797166 systemd[1]: sshd@14-10.200.8.10:22-10.200.16.10:38318.service: Deactivated successfully. Jul 2 07:07:51.798256 systemd[1]: session-17.scope: Deactivated successfully. Jul 2 07:07:51.799113 systemd-logind[1500]: Session 17 logged out. Waiting for processes to exit. Jul 2 07:07:51.800138 systemd-logind[1500]: Removed session 17. Jul 2 07:07:51.910277 systemd[1]: Started sshd@15-10.200.8.10:22-10.200.16.10:38330.service - OpenSSH per-connection server daemon (10.200.16.10:38330). Jul 2 07:07:52.558591 sshd[4441]: Accepted publickey for core from 10.200.16.10 port 38330 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:07:52.560154 sshd[4441]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:52.564921 systemd-logind[1500]: New session 18 of user core. Jul 2 07:07:52.568940 systemd[1]: Started session-18.scope - Session 18 of User core. Jul 2 07:07:53.188643 sshd[4441]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:53.192266 systemd[1]: sshd@15-10.200.8.10:22-10.200.16.10:38330.service: Deactivated successfully. Jul 2 07:07:53.193871 systemd[1]: session-18.scope: Deactivated successfully. Jul 2 07:07:53.193907 systemd-logind[1500]: Session 18 logged out. Waiting for processes to exit. Jul 2 07:07:53.195154 systemd-logind[1500]: Removed session 18. Jul 2 07:07:53.304986 systemd[1]: Started sshd@16-10.200.8.10:22-10.200.16.10:38344.service - OpenSSH per-connection server daemon (10.200.16.10:38344). Jul 2 07:07:53.948779 sshd[4472]: Accepted publickey for core from 10.200.16.10 port 38344 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:07:53.950428 sshd[4472]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:07:53.955032 systemd-logind[1500]: New session 19 of user core. Jul 2 07:07:53.960947 systemd[1]: Started session-19.scope - Session 19 of User core. Jul 2 07:07:54.462358 sshd[4472]: pam_unix(sshd:session): session closed for user core Jul 2 07:07:54.465219 systemd[1]: sshd@16-10.200.8.10:22-10.200.16.10:38344.service: Deactivated successfully. Jul 2 07:07:54.466170 systemd[1]: session-19.scope: Deactivated successfully. Jul 2 07:07:54.466946 systemd-logind[1500]: Session 19 logged out. Waiting for processes to exit. 
Jul 2 07:07:54.467870 systemd-logind[1500]: Removed session 19. Jul 2 07:07:59.580279 systemd[1]: Started sshd@17-10.200.8.10:22-10.200.16.10:52526.service - OpenSSH per-connection server daemon (10.200.16.10:52526). Jul 2 07:08:00.226740 sshd[4511]: Accepted publickey for core from 10.200.16.10 port 52526 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:08:00.228366 sshd[4511]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:08:00.233239 systemd-logind[1500]: New session 20 of user core. Jul 2 07:08:00.243960 systemd[1]: Started session-20.scope - Session 20 of User core. Jul 2 07:08:00.577659 update_engine[1501]: I0702 07:08:00.577591 1501 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 07:08:00.578210 update_engine[1501]: I0702 07:08:00.577991 1501 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 07:08:00.578287 update_engine[1501]: I0702 07:08:00.578238 1501 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 07:08:00.648212 update_engine[1501]: E0702 07:08:00.648166 1501 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 07:08:00.648400 update_engine[1501]: I0702 07:08:00.648303 1501 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 07:08:00.648400 update_engine[1501]: I0702 07:08:00.648315 1501 omaha_request_action.cc:617] Omaha request response: Jul 2 07:08:00.648499 update_engine[1501]: E0702 07:08:00.648417 1501 omaha_request_action.cc:636] Omaha request network transfer failed. Jul 2 07:08:00.648499 update_engine[1501]: I0702 07:08:00.648442 1501 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jul 2 07:08:00.648499 update_engine[1501]: I0702 07:08:00.648447 1501 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 07:08:00.648499 update_engine[1501]: I0702 07:08:00.648450 1501 update_attempter.cc:306] Processing Done. Jul 2 07:08:00.648499 update_engine[1501]: E0702 07:08:00.648469 1501 update_attempter.cc:619] Update failed. Jul 2 07:08:00.648499 update_engine[1501]: I0702 07:08:00.648475 1501 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jul 2 07:08:00.648499 update_engine[1501]: I0702 07:08:00.648479 1501 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jul 2 07:08:00.648499 update_engine[1501]: I0702 07:08:00.648485 1501 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jul 2 07:08:00.648802 update_engine[1501]: I0702 07:08:00.648571 1501 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jul 2 07:08:00.648802 update_engine[1501]: I0702 07:08:00.648595 1501 omaha_request_action.cc:271] Posting an Omaha request to disabled Jul 2 07:08:00.648802 update_engine[1501]: I0702 07:08:00.648600 1501 omaha_request_action.cc:272] Request: Jul 2 07:08:00.648802 update_engine[1501]: Jul 2 07:08:00.648802 update_engine[1501]: Jul 2 07:08:00.648802 update_engine[1501]: Jul 2 07:08:00.648802 update_engine[1501]: Jul 2 07:08:00.648802 update_engine[1501]: Jul 2 07:08:00.648802 update_engine[1501]: Jul 2 07:08:00.648802 update_engine[1501]: I0702 07:08:00.648605 1501 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jul 2 07:08:00.649162 update_engine[1501]: I0702 07:08:00.648828 1501 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jul 2 07:08:00.649162 update_engine[1501]: I0702 07:08:00.648999 1501 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jul 2 07:08:00.649419 locksmithd[1540]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jul 2 07:08:00.670649 update_engine[1501]: E0702 07:08:00.670603 1501 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Jul 2 07:08:00.670843 update_engine[1501]: I0702 07:08:00.670743 1501 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jul 2 07:08:00.670843 update_engine[1501]: I0702 07:08:00.670775 1501 omaha_request_action.cc:617] Omaha request response: Jul 2 07:08:00.670843 update_engine[1501]: I0702 07:08:00.670782 1501 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 07:08:00.670843 update_engine[1501]: I0702 07:08:00.670786 1501 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jul 2 07:08:00.670843 update_engine[1501]: I0702 07:08:00.670791 1501 update_attempter.cc:306] Processing Done. Jul 2 07:08:00.670843 update_engine[1501]: I0702 07:08:00.670797 1501 update_attempter.cc:310] Error event sent. Jul 2 07:08:00.670843 update_engine[1501]: I0702 07:08:00.670807 1501 update_check_scheduler.cc:74] Next update check in 40m19s Jul 2 07:08:00.671217 locksmithd[1540]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jul 2 07:08:00.749204 sshd[4511]: pam_unix(sshd:session): session closed for user core Jul 2 07:08:00.752836 systemd[1]: sshd@17-10.200.8.10:22-10.200.16.10:52526.service: Deactivated successfully. Jul 2 07:08:00.753982 systemd[1]: session-20.scope: Deactivated successfully. Jul 2 07:08:00.754675 systemd-logind[1500]: Session 20 logged out. Waiting for processes to exit. Jul 2 07:08:00.755592 systemd-logind[1500]: Removed session 20. Jul 2 07:08:05.869175 systemd[1]: Started sshd@18-10.200.8.10:22-10.200.16.10:52542.service - OpenSSH per-connection server daemon (10.200.16.10:52542). Jul 2 07:08:06.513616 sshd[4545]: Accepted publickey for core from 10.200.16.10 port 52542 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:08:06.515394 sshd[4545]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:08:06.522802 systemd-logind[1500]: New session 21 of user core. Jul 2 07:08:06.524927 systemd[1]: Started session-21.scope - Session 21 of User core. 
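Editor's note on the update_engine entries from 07:07:30 through 07:08:00 above: they trace one complete failed update check. The Omaha request is posted to a host literally named "disabled", libcurl cannot resolve that name, the fetcher retries three times roughly ten seconds apart, and the attempter then reports the error event and goes idle until the next check in 40m19s. The sketch below is only a schematic of that observed cycle, not update_engine's actual implementation; the hostname "disabled" is taken from the curl error in the log, while the URL scheme, the helper name, and the retry spacing are placeholders chosen to mirror the timestamps above.

package main

// Schematic of the failed update-check cycle visible in the update_engine
// entries above: a request to a host literally named "disabled" never
// resolves, so each attempt fails with a DNS error, is retried a few times,
// and the check is rescheduled. Illustration only.
import (
	"fmt"
	"net/http"
	"time"
)

func postOmahaRequest(url string, retries int, wait time.Duration) error {
	var err error
	for attempt := 1; attempt <= retries+1; attempt++ {
		_, err = http.Post(url, "text/xml", nil)
		if err == nil {
			return nil
		}
		if attempt <= retries {
			fmt.Printf("No HTTP response, retry %d\n", attempt)
			time.Sleep(wait)
		}
	}
	return err
}

func main() {
	// The log shows three retries spaced ~10s apart before the attempter
	// gives up and schedules the next check.
	if err := postOmahaRequest("http://disabled", 3, 10*time.Second); err != nil {
		fmt.Println("Omaha request network transfer failed:", err)
		fmt.Println("scheduling next update check")
	}
}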
Jul 2 07:08:07.028085 sshd[4545]: pam_unix(sshd:session): session closed for user core Jul 2 07:08:07.030995 systemd[1]: sshd@18-10.200.8.10:22-10.200.16.10:52542.service: Deactivated successfully. Jul 2 07:08:07.032194 systemd-logind[1500]: Session 21 logged out. Waiting for processes to exit. Jul 2 07:08:07.032277 systemd[1]: session-21.scope: Deactivated successfully. Jul 2 07:08:07.033506 systemd-logind[1500]: Removed session 21. Jul 2 07:08:12.142024 systemd[1]: Started sshd@19-10.200.8.10:22-10.200.16.10:57306.service - OpenSSH per-connection server daemon (10.200.16.10:57306). Jul 2 07:08:12.783671 sshd[4581]: Accepted publickey for core from 10.200.16.10 port 57306 ssh2: RSA SHA256:Vdpuwv5gh2GKoqfNfevDgb5gK9TYVezD5lrsriYemy4 Jul 2 07:08:12.784334 sshd[4581]: pam_unix(sshd:session): session opened for user core(uid=500) by (uid=0) Jul 2 07:08:12.789630 systemd-logind[1500]: New session 22 of user core. Jul 2 07:08:12.793957 systemd[1]: Started session-22.scope - Session 22 of User core. Jul 2 07:08:13.294259 sshd[4581]: pam_unix(sshd:session): session closed for user core Jul 2 07:08:13.297743 systemd[1]: sshd@19-10.200.8.10:22-10.200.16.10:57306.service: Deactivated successfully. Jul 2 07:08:13.298940 systemd[1]: session-22.scope: Deactivated successfully. Jul 2 07:08:13.299899 systemd-logind[1500]: Session 22 logged out. Waiting for processes to exit. Jul 2 07:08:13.301037 systemd-logind[1500]: Removed session 22. Jul 2 07:08:28.336195 systemd[1]: cri-containerd-41e0a16ee955e4513d28d5b64dc31aff44dfcfd4e000bcc6b330aab1c391ac8e.scope: Deactivated successfully. Jul 2 07:08:28.336497 systemd[1]: cri-containerd-41e0a16ee955e4513d28d5b64dc31aff44dfcfd4e000bcc6b330aab1c391ac8e.scope: Consumed 2.784s CPU time. Jul 2 07:08:28.358737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-41e0a16ee955e4513d28d5b64dc31aff44dfcfd4e000bcc6b330aab1c391ac8e-rootfs.mount: Deactivated successfully. 
Jul 2 07:08:28.372245 containerd[1511]: time="2024-07-02T07:08:28.372172958Z" level=info msg="shim disconnected" id=41e0a16ee955e4513d28d5b64dc31aff44dfcfd4e000bcc6b330aab1c391ac8e namespace=k8s.io Jul 2 07:08:28.372245 containerd[1511]: time="2024-07-02T07:08:28.372244559Z" level=warning msg="cleaning up after shim disconnected" id=41e0a16ee955e4513d28d5b64dc31aff44dfcfd4e000bcc6b330aab1c391ac8e namespace=k8s.io Jul 2 07:08:28.372728 containerd[1511]: time="2024-07-02T07:08:28.372256559Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 07:08:28.394297 kubelet[2899]: I0702 07:08:28.394163 2899 scope.go:117] "RemoveContainer" containerID="41e0a16ee955e4513d28d5b64dc31aff44dfcfd4e000bcc6b330aab1c391ac8e" Jul 2 07:08:28.397245 containerd[1511]: time="2024-07-02T07:08:28.397200007Z" level=info msg="CreateContainer within sandbox \"0c83c6a6b8569eb2c422e6ecada53f4f1f2472b659ad6afe626303be43e8e58b\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jul 2 07:08:28.448534 containerd[1511]: time="2024-07-02T07:08:28.448471017Z" level=info msg="CreateContainer within sandbox \"0c83c6a6b8569eb2c422e6ecada53f4f1f2472b659ad6afe626303be43e8e58b\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"68fe365d0aaf1a7317bd85e5e1d3c6867a6b264ad43a85a3058606c28f6f2499\"" Jul 2 07:08:28.449136 containerd[1511]: time="2024-07-02T07:08:28.449101423Z" level=info msg="StartContainer for \"68fe365d0aaf1a7317bd85e5e1d3c6867a6b264ad43a85a3058606c28f6f2499\"" Jul 2 07:08:28.481987 systemd[1]: Started cri-containerd-68fe365d0aaf1a7317bd85e5e1d3c6867a6b264ad43a85a3058606c28f6f2499.scope - libcontainer container 68fe365d0aaf1a7317bd85e5e1d3c6867a6b264ad43a85a3058606c28f6f2499. Jul 2 07:08:28.525969 containerd[1511]: time="2024-07-02T07:08:28.525920987Z" level=info msg="StartContainer for \"68fe365d0aaf1a7317bd85e5e1d3c6867a6b264ad43a85a3058606c28f6f2499\" returns successfully" Jul 2 07:08:32.869250 kubelet[2899]: E0702 07:08:32.868574 2899 event.go:346] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.10:53434->10.200.8.21:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-3815.2.5-a-54ab6c74aa.17de53ab2f0c3c2c kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-3815.2.5-a-54ab6c74aa,UID:9cf91b813f0f759081652318b3434bdf,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-3815.2.5-a-54ab6c74aa,},FirstTimestamp:2024-07-02 07:08:22.395952172 +0000 UTC m=+187.508768754,LastTimestamp:2024-07-02 07:08:22.395952172 +0000 UTC m=+187.508768754,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-3815.2.5-a-54ab6c74aa,}" Jul 2 07:08:33.124908 kubelet[2899]: E0702 07:08:33.124450 2899 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.10:53608->10.200.8.21:2379: read: connection timed out" Jul 2 07:08:33.130509 systemd[1]: cri-containerd-4071bbfc1478a365c63102aadcb7ada58244af738d66f0c04fefdef61f042225.scope: Deactivated successfully. 
Jul 2 07:08:33.130820 systemd[1]: cri-containerd-4071bbfc1478a365c63102aadcb7ada58244af738d66f0c04fefdef61f042225.scope: Consumed 1.748s CPU time. Jul 2 07:08:33.152365 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4071bbfc1478a365c63102aadcb7ada58244af738d66f0c04fefdef61f042225-rootfs.mount: Deactivated successfully. Jul 2 07:08:33.171701 containerd[1511]: time="2024-07-02T07:08:33.171633571Z" level=info msg="shim disconnected" id=4071bbfc1478a365c63102aadcb7ada58244af738d66f0c04fefdef61f042225 namespace=k8s.io Jul 2 07:08:33.171701 containerd[1511]: time="2024-07-02T07:08:33.171702172Z" level=warning msg="cleaning up after shim disconnected" id=4071bbfc1478a365c63102aadcb7ada58244af738d66f0c04fefdef61f042225 namespace=k8s.io Jul 2 07:08:33.172200 containerd[1511]: time="2024-07-02T07:08:33.171713672Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jul 2 07:08:33.408587 kubelet[2899]: I0702 07:08:33.407956 2899 scope.go:117] "RemoveContainer" containerID="4071bbfc1478a365c63102aadcb7ada58244af738d66f0c04fefdef61f042225" Jul 2 07:08:33.410489 containerd[1511]: time="2024-07-02T07:08:33.410437244Z" level=info msg="CreateContainer within sandbox \"ad52777672f7b5f94ebf580fd597409f265ef8d11941c26e193ef3531f1c605d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jul 2 07:08:33.453430 containerd[1511]: time="2024-07-02T07:08:33.453317970Z" level=info msg="CreateContainer within sandbox \"ad52777672f7b5f94ebf580fd597409f265ef8d11941c26e193ef3531f1c605d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"c81232e8756c0c554875e55e2ecec024b40d3d22cf0b1561885b52394f1ca172\"" Jul 2 07:08:33.454112 containerd[1511]: time="2024-07-02T07:08:33.454069977Z" level=info msg="StartContainer for \"c81232e8756c0c554875e55e2ecec024b40d3d22cf0b1561885b52394f1ca172\"" Jul 2 07:08:33.489902 systemd[1]: Started cri-containerd-c81232e8756c0c554875e55e2ecec024b40d3d22cf0b1561885b52394f1ca172.scope - libcontainer container c81232e8756c0c554875e55e2ecec024b40d3d22cf0b1561885b52394f1ca172. Jul 2 07:08:33.530845 containerd[1511]: time="2024-07-02T07:08:33.530799740Z" level=info msg="StartContainer for \"c81232e8756c0c554875e55e2ecec024b40d3d22cf0b1561885b52394f1ca172\" returns successfully" Jul 2 07:08:34.155076 systemd[1]: run-containerd-runc-k8s.io-c81232e8756c0c554875e55e2ecec024b40d3d22cf0b1561885b52394f1ca172-runc.1s1HMJ.mount: Deactivated successfully. 
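Editor's note on the two sequences above (07:08:28 for kube-controller-manager, 07:08:33 for kube-scheduler): both follow the same shape. The container's scope is deactivated, containerd reports the shim disconnected, and the kubelet removes the dead container and creates Attempt:1 of the same container inside the existing sandbox. When reading longer journals it can help to pair each removed container id with the replacement id returned by the following CreateContainer call; the sketch below does that using the message formats shown here. It is a log-scraping aid, not a Kubernetes or containerd API, and the two embedded sample lines are abbreviated copies of entries above.

package main

// Pair each container id the kubelet removes with the replacement id returned
// by the following CreateContainer call, using the message formats seen in
// the entries above. The sample journal lines are abbreviated.
import (
	"fmt"
	"regexp"
	"strings"
)

var (
	removedRe = regexp.MustCompile(`"RemoveContainer" containerID="([0-9a-f]{64})"`)
	createdRe = regexp.MustCompile(`returns container id \\"([0-9a-f]{64})\\"`)
)

func main() {
	journal := strings.Join([]string{
		`kubelet[2899]: I0702 07:08:33.407956 2899 scope.go:117] "RemoveContainer" containerID="4071bbfc1478a365c63102aadcb7ada58244af738d66f0c04fefdef61f042225"`,
		`containerd[1511]: ... returns container id \"c81232e8756c0c554875e55e2ecec024b40d3d22cf0b1561885b52394f1ca172\"`,
	}, "\n")

	var lastRemoved string
	for _, line := range strings.Split(journal, "\n") {
		if m := removedRe.FindStringSubmatch(line); m != nil {
			lastRemoved = m[1]
		} else if m := createdRe.FindStringSubmatch(line); m != nil && lastRemoved != "" {
			fmt.Printf("%s... replaced by %s...\n", lastRemoved[:12], m[1][:12])
			lastRemoved = ""
		}
	}
}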
Jul 2 07:08:38.819995 kubelet[2899]: I0702 07:08:38.819951 2899 status_manager.go:853] "Failed to get status for pod" podUID="ac5e7b9b02f93de61850ff1926b6e375" pod="kube-system/kube-controller-manager-ci-3815.2.5-a-54ab6c74aa" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.200.8.10:53548->10.200.8.21:2379: read: connection timed out" Jul 2 07:08:43.125513 kubelet[2899]: E0702 07:08:43.125189 2899 controller.go:195] "Failed to update lease" err="Put \"https://10.200.8.10:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-3815.2.5-a-54ab6c74aa?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jul 2 07:08:49.045217 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.056319 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.067383 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.078382 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.089440 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.100278 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.100595 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.107338 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.107581 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.114433 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.114686 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.121196 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.121492 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.128182 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.128397 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.139279 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.139528 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.142979 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.146716 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.150474 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.157832 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.158077 kernel: hv_storvsc 
f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001 Jul 2 07:08:49.817745 kernel: hv_storvsc f8b3781a-1e82-4818-a1c3-63d806ec15bb: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
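Editor's note for reading the storvsc errors above, which recur essentially unchanged from 07:08:49.045 through 07:08:49.817: cmd 0x2a is the SCSI WRITE(10) opcode and scsi 0x2 is the CHECK CONDITION status returned by the virtual disk, so each entry is a failed write to the Hyper-V-attached disk. srb 0x4 and hv 0xc0000001 read as the Windows SRB_STATUS_ERROR value and an NTSTATUS-style failure code from the host, but those two readings are interpretations rather than something the log itself states. A small lookup table for the fields:

package main

// Decode the fields of the repeated message
//   hv_storvsc ...: tag#52 cmd 0x2a status: scsi 0x2 srb 0x4 hv 0xc0000001
// cmd and scsi are standard SCSI codes; the srb and hv entries carry the
// assumed interpretations noted in the text above.
import "fmt"

func main() {
	fields := []struct {
		name    string
		value   uint32
		meaning string
	}{
		{"cmd", 0x2a, "SCSI WRITE(10) opcode"},
		{"scsi", 0x02, "CHECK CONDITION status from the device"},
		{"srb", 0x04, "SRB_STATUS_ERROR (assumed)"},
		{"hv", 0xc0000001, "host-side failure code, NTSTATUS-style (assumed)"},
	}
	for _, f := range fields {
		fmt.Printf("%-4s 0x%x: %s\n", f.name, f.value, f.meaning)
	}
}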