May 13 23:58:05.058810 kernel: Linux version 6.6.89-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue May 13 22:08:35 -00 2025 May 13 23:58:05.058844 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:58:05.058861 kernel: BIOS-provided physical RAM map: May 13 23:58:05.058871 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009ffff] usable May 13 23:58:05.058881 kernel: BIOS-e820: [mem 0x00000000000c0000-0x00000000000fffff] reserved May 13 23:58:05.058891 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000003ff40fff] usable May 13 23:58:05.058904 kernel: BIOS-e820: [mem 0x000000003ff41000-0x000000003ffc8fff] reserved May 13 23:58:05.058915 kernel: BIOS-e820: [mem 0x000000003ffc9000-0x000000003fffafff] ACPI data May 13 23:58:05.058928 kernel: BIOS-e820: [mem 0x000000003fffb000-0x000000003fffefff] ACPI NVS May 13 23:58:05.058938 kernel: BIOS-e820: [mem 0x000000003ffff000-0x000000003fffffff] usable May 13 23:58:05.058949 kernel: BIOS-e820: [mem 0x0000000100000000-0x00000002bfffffff] usable May 13 23:58:05.058959 kernel: printk: bootconsole [earlyser0] enabled May 13 23:58:05.058969 kernel: NX (Execute Disable) protection: active May 13 23:58:05.058980 kernel: APIC: Static calls initialized May 13 23:58:05.058996 kernel: efi: EFI v2.7 by Microsoft May 13 23:58:05.059009 kernel: efi: ACPI=0x3fffa000 ACPI 2.0=0x3fffa014 SMBIOS=0x3ff85000 SMBIOS 3.0=0x3ff83000 MEMATTR=0x3f5c1a98 RNG=0x3ffd1018 May 13 23:58:05.059021 kernel: random: crng init done May 13 23:58:05.059033 kernel: secureboot: Secure boot disabled May 13 23:58:05.059044 kernel: SMBIOS 3.1.0 present. 
May 13 23:58:05.059056 kernel: DMI: Microsoft Corporation Virtual Machine/Virtual Machine, BIOS Hyper-V UEFI Release v4.1 03/08/2024 May 13 23:58:05.059068 kernel: Hypervisor detected: Microsoft Hyper-V May 13 23:58:05.059079 kernel: Hyper-V: privilege flags low 0x2e7f, high 0x3b8030, hints 0x64e24, misc 0xbed7b2 May 13 23:58:05.059091 kernel: Hyper-V: Host Build 10.0.20348.1827-1-0 May 13 23:58:05.059102 kernel: Hyper-V: Nested features: 0x1e0101 May 13 23:58:05.059114 kernel: Hyper-V: LAPIC Timer Frequency: 0x30d40 May 13 23:58:05.059128 kernel: Hyper-V: Using hypercall for remote TLB flush May 13 23:58:05.059140 kernel: clocksource: hyperv_clocksource_tsc_page: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns May 13 23:58:05.059151 kernel: clocksource: hyperv_clocksource_msr: mask: 0xffffffffffffffff max_cycles: 0x24e6a1710, max_idle_ns: 440795202120 ns May 13 23:58:05.059163 kernel: tsc: Marking TSC unstable due to running on Hyper-V May 13 23:58:05.059175 kernel: tsc: Detected 2593.905 MHz processor May 13 23:58:05.059199 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved May 13 23:58:05.059212 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable May 13 23:58:05.059223 kernel: last_pfn = 0x2c0000 max_arch_pfn = 0x400000000 May 13 23:58:05.059234 kernel: MTRR map: 4 entries (2 fixed + 2 variable; max 18), built from 8 variable MTRRs May 13 23:58:05.059250 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT May 13 23:58:05.059262 kernel: e820: update [mem 0x40000000-0xffffffff] usable ==> reserved May 13 23:58:05.059274 kernel: last_pfn = 0x40000 max_arch_pfn = 0x400000000 May 13 23:58:05.059288 kernel: Using GB pages for direct mapping May 13 23:58:05.059300 kernel: ACPI: Early table checksum verification disabled May 13 23:58:05.059312 kernel: ACPI: RSDP 0x000000003FFFA014 000024 (v02 VRTUAL) May 13 23:58:05.059327 kernel: ACPI: XSDT 0x000000003FFF90E8 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:58:05.059342 kernel: ACPI: FACP 0x000000003FFF8000 000114 (v06 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:58:05.059354 kernel: ACPI: DSDT 0x000000003FFD6000 01E184 (v02 MSFTVM DSDT01 00000001 MSFT 05000000) May 13 23:58:05.059365 kernel: ACPI: FACS 0x000000003FFFE000 000040 May 13 23:58:05.059376 kernel: ACPI: OEM0 0x000000003FFF7000 000064 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:58:05.059388 kernel: ACPI: SPCR 0x000000003FFF6000 000050 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:58:05.059400 kernel: ACPI: WAET 0x000000003FFF5000 000028 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:58:05.059411 kernel: ACPI: APIC 0x000000003FFD5000 000058 (v04 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:58:05.059426 kernel: ACPI: SRAT 0x000000003FFD4000 0002D0 (v02 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:58:05.059437 kernel: ACPI: BGRT 0x000000003FFD3000 000038 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:58:05.059450 kernel: ACPI: FPDT 0x000000003FFD2000 000034 (v01 VRTUAL MICROSFT 00000001 MSFT 00000001) May 13 23:58:05.059462 kernel: ACPI: Reserving FACP table memory at [mem 0x3fff8000-0x3fff8113] May 13 23:58:05.059474 kernel: ACPI: Reserving DSDT table memory at [mem 0x3ffd6000-0x3fff4183] May 13 23:58:05.059486 kernel: ACPI: Reserving FACS table memory at [mem 0x3fffe000-0x3fffe03f] May 13 23:58:05.059498 kernel: ACPI: Reserving OEM0 table memory at [mem 0x3fff7000-0x3fff7063] May 13 23:58:05.059510 kernel: ACPI: Reserving SPCR table memory at [mem 
0x3fff6000-0x3fff604f] May 13 23:58:05.059525 kernel: ACPI: Reserving WAET table memory at [mem 0x3fff5000-0x3fff5027] May 13 23:58:05.059537 kernel: ACPI: Reserving APIC table memory at [mem 0x3ffd5000-0x3ffd5057] May 13 23:58:05.059550 kernel: ACPI: Reserving SRAT table memory at [mem 0x3ffd4000-0x3ffd42cf] May 13 23:58:05.059563 kernel: ACPI: Reserving BGRT table memory at [mem 0x3ffd3000-0x3ffd3037] May 13 23:58:05.059575 kernel: ACPI: Reserving FPDT table memory at [mem 0x3ffd2000-0x3ffd2033] May 13 23:58:05.059588 kernel: SRAT: PXM 0 -> APIC 0x00 -> Node 0 May 13 23:58:05.059600 kernel: SRAT: PXM 0 -> APIC 0x01 -> Node 0 May 13 23:58:05.059613 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x00000000-0x3fffffff] hotplug May 13 23:58:05.059626 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000-0x2bfffffff] hotplug May 13 23:58:05.059641 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2c0000000-0xfdfffffff] hotplug May 13 23:58:05.059654 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000-0xffffffffff] hotplug May 13 23:58:05.059667 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x10000000000-0x1ffffffffff] hotplug May 13 23:58:05.059679 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x20000000000-0x3ffffffffff] hotplug May 13 23:58:05.059692 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000000-0x7ffffffffff] hotplug May 13 23:58:05.059705 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x80000000000-0xfffffffffff] hotplug May 13 23:58:05.059718 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x100000000000-0x1fffffffffff] hotplug May 13 23:58:05.059730 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x200000000000-0x3fffffffffff] hotplug May 13 23:58:05.059743 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x400000000000-0x7fffffffffff] hotplug May 13 23:58:05.059758 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x800000000000-0xffffffffffff] hotplug May 13 23:58:05.059770 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x1000000000000-0x1ffffffffffff] hotplug May 13 23:58:05.059783 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x2000000000000-0x3ffffffffffff] hotplug May 13 23:58:05.059796 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x4000000000000-0x7ffffffffffff] hotplug May 13 23:58:05.059808 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x8000000000000-0xfffffffffffff] hotplug May 13 23:58:05.059821 kernel: NUMA: Node 0 [mem 0x00000000-0x3fffffff] + [mem 0x100000000-0x2bfffffff] -> [mem 0x00000000-0x2bfffffff] May 13 23:58:05.059835 kernel: NODE_DATA(0) allocated [mem 0x2bfffa000-0x2bfffffff] May 13 23:58:05.059848 kernel: Zone ranges: May 13 23:58:05.059861 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] May 13 23:58:05.059877 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] May 13 23:58:05.059890 kernel: Normal [mem 0x0000000100000000-0x00000002bfffffff] May 13 23:58:05.059903 kernel: Movable zone start for each node May 13 23:58:05.059916 kernel: Early memory node ranges May 13 23:58:05.059929 kernel: node 0: [mem 0x0000000000001000-0x000000000009ffff] May 13 23:58:05.059942 kernel: node 0: [mem 0x0000000000100000-0x000000003ff40fff] May 13 23:58:05.059956 kernel: node 0: [mem 0x000000003ffff000-0x000000003fffffff] May 13 23:58:05.059968 kernel: node 0: [mem 0x0000000100000000-0x00000002bfffffff] May 13 23:58:05.059980 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x00000002bfffffff] May 13 23:58:05.059995 kernel: On node 0, zone DMA: 1 pages in unavailable ranges May 13 23:58:05.060007 kernel: On node 0, zone DMA: 96 pages in unavailable ranges May 13 23:58:05.060019 kernel: On node 0, zone DMA32: 190 pages in unavailable ranges May 13 23:58:05.060032 kernel: ACPI: 
PM-Timer IO Port: 0x408 May 13 23:58:05.060044 kernel: ACPI: LAPIC_NMI (acpi_id[0x01] dfl dfl lint[0x1]) May 13 23:58:05.060056 kernel: IOAPIC[0]: apic_id 2, version 17, address 0xfec00000, GSI 0-23 May 13 23:58:05.060069 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) May 13 23:58:05.060081 kernel: ACPI: Using ACPI (MADT) for SMP configuration information May 13 23:58:05.060093 kernel: ACPI: SPCR: console: uart,io,0x3f8,115200 May 13 23:58:05.060108 kernel: smpboot: Allowing 2 CPUs, 0 hotplug CPUs May 13 23:58:05.060120 kernel: [mem 0x40000000-0xffffffff] available for PCI devices May 13 23:58:05.060132 kernel: Booting paravirtualized kernel on Hyper-V May 13 23:58:05.060145 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns May 13 23:58:05.060157 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 May 13 23:58:05.060169 kernel: percpu: Embedded 58 pages/cpu s197032 r8192 d32344 u1048576 May 13 23:58:05.060193 kernel: pcpu-alloc: s197032 r8192 d32344 u1048576 alloc=1*2097152 May 13 23:58:05.060213 kernel: pcpu-alloc: [0] 0 1 May 13 23:58:05.060225 kernel: Hyper-V: PV spinlocks enabled May 13 23:58:05.060241 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) May 13 23:58:05.060255 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:58:05.060268 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. May 13 23:58:05.060280 kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) May 13 23:58:05.060292 kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) May 13 23:58:05.060305 kernel: Fallback order for Node 0: 0 May 13 23:58:05.060317 kernel: Built 1 zonelists, mobility grouping on. Total pages: 2062618 May 13 23:58:05.060329 kernel: Policy zone: Normal May 13 23:58:05.060353 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off May 13 23:58:05.060366 kernel: software IO TLB: area num 2. May 13 23:58:05.060379 kernel: Memory: 8072992K/8387460K available (14336K kernel code, 2296K rwdata, 25068K rodata, 43604K init, 1468K bss, 314212K reserved, 0K cma-reserved) May 13 23:58:05.060395 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 May 13 23:58:05.060408 kernel: ftrace: allocating 37993 entries in 149 pages May 13 23:58:05.060421 kernel: ftrace: allocated 149 pages with 4 groups May 13 23:58:05.060434 kernel: Dynamic Preempt: voluntary May 13 23:58:05.060446 kernel: rcu: Preemptible hierarchical RCU implementation. May 13 23:58:05.060461 kernel: rcu: RCU event tracing is enabled. May 13 23:58:05.060474 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. May 13 23:58:05.060490 kernel: Trampoline variant of Tasks RCU enabled. May 13 23:58:05.060502 kernel: Rude variant of Tasks RCU enabled. May 13 23:58:05.060516 kernel: Tracing variant of Tasks RCU enabled. May 13 23:58:05.060529 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. 
May 13 23:58:05.060542 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 May 13 23:58:05.060554 kernel: Using NULL legacy PIC May 13 23:58:05.060570 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 0 May 13 23:58:05.060583 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. May 13 23:58:05.060596 kernel: Console: colour dummy device 80x25 May 13 23:58:05.060609 kernel: printk: console [tty1] enabled May 13 23:58:05.060621 kernel: printk: console [ttyS0] enabled May 13 23:58:05.060634 kernel: printk: bootconsole [earlyser0] disabled May 13 23:58:05.060647 kernel: ACPI: Core revision 20230628 May 13 23:58:05.060660 kernel: Failed to register legacy timer interrupt May 13 23:58:05.060673 kernel: APIC: Switch to symmetric I/O mode setup May 13 23:58:05.060686 kernel: Hyper-V: enabling crash_kexec_post_notifiers May 13 23:58:05.060701 kernel: Hyper-V: Using IPI hypercalls May 13 23:58:05.060714 kernel: APIC: send_IPI() replaced with hv_send_ipi() May 13 23:58:05.060727 kernel: APIC: send_IPI_mask() replaced with hv_send_ipi_mask() May 13 23:58:05.060740 kernel: APIC: send_IPI_mask_allbutself() replaced with hv_send_ipi_mask_allbutself() May 13 23:58:05.060753 kernel: APIC: send_IPI_allbutself() replaced with hv_send_ipi_allbutself() May 13 23:58:05.060766 kernel: APIC: send_IPI_all() replaced with hv_send_ipi_all() May 13 23:58:05.060779 kernel: APIC: send_IPI_self() replaced with hv_send_ipi_self() May 13 23:58:05.060792 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 5187.81 BogoMIPS (lpj=2593905) May 13 23:58:05.060808 kernel: Last level iTLB entries: 4KB 64, 2MB 8, 4MB 8 May 13 23:58:05.060820 kernel: Last level dTLB entries: 4KB 64, 2MB 32, 4MB 32, 1GB 4 May 13 23:58:05.060833 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization May 13 23:58:05.060846 kernel: Spectre V2 : Mitigation: Retpolines May 13 23:58:05.060859 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT May 13 23:58:05.060872 kernel: RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible! May 13 23:58:05.060885 kernel: RETBleed: Vulnerable May 13 23:58:05.060897 kernel: Speculative Store Bypass: Vulnerable May 13 23:58:05.060910 kernel: TAA: Vulnerable: Clear CPU buffers attempted, no microcode May 13 23:58:05.060923 kernel: MMIO Stale Data: Vulnerable: Clear CPU buffers attempted, no microcode May 13 23:58:05.060936 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' May 13 23:58:05.060951 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' May 13 23:58:05.060964 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' May 13 23:58:05.060976 kernel: x86/fpu: Supporting XSAVE feature 0x020: 'AVX-512 opmask' May 13 23:58:05.060989 kernel: x86/fpu: Supporting XSAVE feature 0x040: 'AVX-512 Hi256' May 13 23:58:05.061001 kernel: x86/fpu: Supporting XSAVE feature 0x080: 'AVX-512 ZMM_Hi256' May 13 23:58:05.061014 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 May 13 23:58:05.061027 kernel: x86/fpu: xstate_offset[5]: 832, xstate_sizes[5]: 64 May 13 23:58:05.061039 kernel: x86/fpu: xstate_offset[6]: 896, xstate_sizes[6]: 512 May 13 23:58:05.061052 kernel: x86/fpu: xstate_offset[7]: 1408, xstate_sizes[7]: 1024 May 13 23:58:05.061065 kernel: x86/fpu: Enabled xstate features 0xe7, context size is 2432 bytes, using 'compacted' format. 
May 13 23:58:05.061078 kernel: Freeing SMP alternatives memory: 32K May 13 23:58:05.061093 kernel: pid_max: default: 32768 minimum: 301 May 13 23:58:05.061106 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity May 13 23:58:05.061118 kernel: landlock: Up and running. May 13 23:58:05.061131 kernel: SELinux: Initializing. May 13 23:58:05.061144 kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 23:58:05.061157 kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear) May 13 23:58:05.061170 kernel: smpboot: CPU0: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz (family: 0x6, model: 0x55, stepping: 0x7) May 13 23:58:05.061198 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:58:05.061212 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:58:05.061225 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. May 13 23:58:05.061241 kernel: Performance Events: unsupported p6 CPU model 85 no PMU driver, software events only. May 13 23:58:05.061255 kernel: signal: max sigframe size: 3632 May 13 23:58:05.061268 kernel: rcu: Hierarchical SRCU implementation. May 13 23:58:05.061282 kernel: rcu: Max phase no-delay instances is 400. May 13 23:58:05.061296 kernel: NMI watchdog: Perf NMI watchdog permanently disabled May 13 23:58:05.061309 kernel: smp: Bringing up secondary CPUs ... May 13 23:58:05.061322 kernel: smpboot: x86: Booting SMP configuration: May 13 23:58:05.061341 kernel: .... node #0, CPUs: #1 May 13 23:58:05.061367 kernel: TAA CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/tsx_async_abort.html for more details. May 13 23:58:05.061398 kernel: MMIO Stale Data CPU bug present and SMT on, data leak possible. See https://www.kernel.org/doc/html/latest/admin-guide/hw-vuln/processor_mmio_stale_data.html for more details. 
May 13 23:58:05.061418 kernel: smp: Brought up 1 node, 2 CPUs May 13 23:58:05.061432 kernel: smpboot: Max logical packages: 1 May 13 23:58:05.061447 kernel: smpboot: Total of 2 processors activated (10375.62 BogoMIPS) May 13 23:58:05.061461 kernel: devtmpfs: initialized May 13 23:58:05.061475 kernel: x86/mm: Memory block size: 128MB May 13 23:58:05.061490 kernel: ACPI: PM: Registering ACPI NVS region [mem 0x3fffb000-0x3fffefff] (16384 bytes) May 13 23:58:05.061504 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns May 13 23:58:05.061518 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) May 13 23:58:05.061536 kernel: pinctrl core: initialized pinctrl subsystem May 13 23:58:05.061550 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family May 13 23:58:05.061564 kernel: audit: initializing netlink subsys (disabled) May 13 23:58:05.061578 kernel: audit: type=2000 audit(1747180684.027:1): state=initialized audit_enabled=0 res=1 May 13 23:58:05.061592 kernel: thermal_sys: Registered thermal governor 'step_wise' May 13 23:58:05.061606 kernel: thermal_sys: Registered thermal governor 'user_space' May 13 23:58:05.061620 kernel: cpuidle: using governor menu May 13 23:58:05.061634 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 May 13 23:58:05.061649 kernel: dca service started, version 1.12.1 May 13 23:58:05.061665 kernel: e820: reserve RAM buffer [mem 0x3ff41000-0x3fffffff] May 13 23:58:05.061680 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. May 13 23:58:05.061694 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages May 13 23:58:05.061708 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page May 13 23:58:05.061722 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages May 13 23:58:05.061736 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page May 13 23:58:05.061750 kernel: ACPI: Added _OSI(Module Device) May 13 23:58:05.061764 kernel: ACPI: Added _OSI(Processor Device) May 13 23:58:05.061781 kernel: ACPI: Added _OSI(3.0 _SCP Extensions) May 13 23:58:05.061795 kernel: ACPI: Added _OSI(Processor Aggregator Device) May 13 23:58:05.061809 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded May 13 23:58:05.061823 kernel: ACPI: _OSC evaluation for CPUs failed, trying _PDC May 13 23:58:05.061837 kernel: ACPI: Interpreter enabled May 13 23:58:05.061851 kernel: ACPI: PM: (supports S0 S5) May 13 23:58:05.061865 kernel: ACPI: Using IOAPIC for interrupt routing May 13 23:58:05.061879 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug May 13 23:58:05.061893 kernel: PCI: Ignoring E820 reservations for host bridge windows May 13 23:58:05.061910 kernel: ACPI: Enabled 1 GPEs in block 00 to 0F May 13 23:58:05.061924 kernel: iommu: Default domain type: Translated May 13 23:58:05.061938 kernel: iommu: DMA domain TLB invalidation policy: lazy mode May 13 23:58:05.061952 kernel: efivars: Registered efivars operations May 13 23:58:05.061966 kernel: PCI: Using ACPI for IRQ routing May 13 23:58:05.061980 kernel: PCI: System does not support PCI May 13 23:58:05.061994 kernel: vgaarb: loaded May 13 23:58:05.062008 kernel: clocksource: Switched to clocksource hyperv_clocksource_tsc_page May 13 23:58:05.062022 kernel: VFS: Disk quotas dquot_6.6.0 May 13 23:58:05.062036 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) May 13 23:58:05.062052 kernel: 
pnp: PnP ACPI init May 13 23:58:05.062066 kernel: pnp: PnP ACPI: found 3 devices May 13 23:58:05.062081 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns May 13 23:58:05.062095 kernel: NET: Registered PF_INET protocol family May 13 23:58:05.062109 kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear) May 13 23:58:05.062123 kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear) May 13 23:58:05.062138 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) May 13 23:58:05.062152 kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear) May 13 23:58:05.062169 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) May 13 23:58:05.062191 kernel: TCP: Hash tables configured (established 65536 bind 65536) May 13 23:58:05.062206 kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear) May 13 23:58:05.062220 kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear) May 13 23:58:05.062235 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family May 13 23:58:05.062249 kernel: NET: Registered PF_XDP protocol family May 13 23:58:05.062263 kernel: PCI: CLS 0 bytes, default 64 May 13 23:58:05.062277 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) May 13 23:58:05.062291 kernel: software IO TLB: mapped [mem 0x000000003b5c1000-0x000000003f5c1000] (64MB) May 13 23:58:05.062309 kernel: RAPL PMU: API unit is 2^-32 Joules, 0 fixed counters, 10737418240 ms ovfl timer May 13 23:58:05.062323 kernel: Initialise system trusted keyrings May 13 23:58:05.062337 kernel: workingset: timestamp_bits=39 max_order=21 bucket_order=0 May 13 23:58:05.062351 kernel: Key type asymmetric registered May 13 23:58:05.062365 kernel: Asymmetric key parser 'x509' registered May 13 23:58:05.062378 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 251) May 13 23:58:05.062392 kernel: io scheduler mq-deadline registered May 13 23:58:05.062407 kernel: io scheduler kyber registered May 13 23:58:05.062421 kernel: io scheduler bfq registered May 13 23:58:05.062435 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 May 13 23:58:05.062452 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled May 13 23:58:05.062466 kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A May 13 23:58:05.062480 kernel: 00:01: ttyS1 at I/O 0x2f8 (irq = 3, base_baud = 115200) is a 16550A May 13 23:58:05.062494 kernel: i8042: PNP: No PS/2 controller found. 
May 13 23:58:05.062673 kernel: rtc_cmos 00:02: registered as rtc0 May 13 23:58:05.062791 kernel: rtc_cmos 00:02: setting system clock to 2025-05-13T23:58:04 UTC (1747180684) May 13 23:58:05.062902 kernel: rtc_cmos 00:02: alarms up to one month, 114 bytes nvram May 13 23:58:05.062923 kernel: intel_pstate: CPU model not supported May 13 23:58:05.062938 kernel: efifb: probing for efifb May 13 23:58:05.062952 kernel: efifb: framebuffer at 0x40000000, using 3072k, total 3072k May 13 23:58:05.062967 kernel: efifb: mode is 1024x768x32, linelength=4096, pages=1 May 13 23:58:05.062981 kernel: efifb: scrolling: redraw May 13 23:58:05.062995 kernel: efifb: Truecolor: size=8:8:8:8, shift=24:16:8:0 May 13 23:58:05.063009 kernel: Console: switching to colour frame buffer device 128x48 May 13 23:58:05.063023 kernel: fb0: EFI VGA frame buffer device May 13 23:58:05.063037 kernel: pstore: Using crash dump compression: deflate May 13 23:58:05.063054 kernel: pstore: Registered efi_pstore as persistent store backend May 13 23:58:05.063068 kernel: NET: Registered PF_INET6 protocol family May 13 23:58:05.063082 kernel: Segment Routing with IPv6 May 13 23:58:05.063096 kernel: In-situ OAM (IOAM) with IPv6 May 13 23:58:05.063109 kernel: NET: Registered PF_PACKET protocol family May 13 23:58:05.063122 kernel: Key type dns_resolver registered May 13 23:58:05.063141 kernel: IPI shorthand broadcast: enabled May 13 23:58:05.063166 kernel: sched_clock: Marking stable (787003200, 42860500)->(1026445300, -196581600) May 13 23:58:05.063200 kernel: registered taskstats version 1 May 13 23:58:05.063219 kernel: Loading compiled-in X.509 certificates May 13 23:58:05.063232 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.89-flatcar: 166efda032ca4d6e9037c569aca9b53585ee6f94' May 13 23:58:05.063246 kernel: Key type .fscrypt registered May 13 23:58:05.063259 kernel: Key type fscrypt-provisioning registered May 13 23:58:05.063273 kernel: ima: No TPM chip found, activating TPM-bypass! May 13 23:58:05.063287 kernel: ima: Allocated hash algorithm: sha1 May 13 23:58:05.063297 kernel: ima: No architecture policies found May 13 23:58:05.063310 kernel: clk: Disabling unused clocks May 13 23:58:05.063324 kernel: Freeing unused kernel image (initmem) memory: 43604K May 13 23:58:05.063341 kernel: Write protecting the kernel read-only data: 40960k May 13 23:58:05.063354 kernel: Freeing unused kernel image (rodata/data gap) memory: 1556K May 13 23:58:05.063365 kernel: Run /init as init process May 13 23:58:05.063378 kernel: with arguments: May 13 23:58:05.063391 kernel: /init May 13 23:58:05.063402 kernel: with environment: May 13 23:58:05.063420 kernel: HOME=/ May 13 23:58:05.063437 kernel: TERM=linux May 13 23:58:05.063449 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a May 13 23:58:05.063467 systemd[1]: Successfully made /usr/ read-only. May 13 23:58:05.063485 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:58:05.063499 systemd[1]: Detected virtualization microsoft. May 13 23:58:05.063512 systemd[1]: Detected architecture x86-64. May 13 23:58:05.063525 systemd[1]: Running in initrd. May 13 23:58:05.063539 systemd[1]: No hostname configured, using default hostname. May 13 23:58:05.063554 systemd[1]: Hostname set to . 
May 13 23:58:05.063571 systemd[1]: Initializing machine ID from random generator. May 13 23:58:05.063586 systemd[1]: Queued start job for default target initrd.target. May 13 23:58:05.063601 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:58:05.063616 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:58:05.063632 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... May 13 23:58:05.063648 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:58:05.063663 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... May 13 23:58:05.063682 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... May 13 23:58:05.063699 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... May 13 23:58:05.063714 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... May 13 23:58:05.063729 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:58:05.063745 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:58:05.063760 systemd[1]: Reached target paths.target - Path Units. May 13 23:58:05.063775 systemd[1]: Reached target slices.target - Slice Units. May 13 23:58:05.063790 systemd[1]: Reached target swap.target - Swaps. May 13 23:58:05.063808 systemd[1]: Reached target timers.target - Timer Units. May 13 23:58:05.063824 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:58:05.063839 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:58:05.063854 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). May 13 23:58:05.063870 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. May 13 23:58:05.063885 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:58:05.063900 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:58:05.063916 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:58:05.063931 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:58:05.063948 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... May 13 23:58:05.063962 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... May 13 23:58:05.063977 systemd[1]: Finished network-cleanup.service - Network Cleanup. May 13 23:58:05.063991 systemd[1]: Starting systemd-fsck-usr.service... May 13 23:58:05.064006 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:58:05.064021 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:58:05.064035 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:58:05.064076 systemd-journald[177]: Collecting audit messages is disabled. May 13 23:58:05.064111 systemd-journald[177]: Journal started May 13 23:58:05.064141 systemd-journald[177]: Runtime Journal (/run/log/journal/0a567ac8893945a1b3126cd0dc4f04a2) is 8M, max 158.7M, 150.7M free. 
May 13 23:58:05.063454 systemd-modules-load[178]: Inserted module 'overlay' May 13 23:58:05.070322 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:58:05.077890 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. May 13 23:58:05.080772 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:58:05.090087 systemd[1]: Finished systemd-fsck-usr.service. May 13 23:58:05.103360 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... May 13 23:58:05.108101 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:58:05.115373 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. May 13 23:58:05.118705 kernel: Bridge firewalling registered May 13 23:58:05.118163 systemd-modules-load[178]: Inserted module 'br_netfilter' May 13 23:58:05.124692 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:58:05.128069 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. May 13 23:58:05.133750 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:58:05.145397 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:58:05.152523 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. May 13 23:58:05.156158 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:58:05.166729 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:58:05.172025 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:58:05.175311 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... May 13 23:58:05.182978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:58:05.192300 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:58:05.213643 dracut-cmdline[209]: dracut-dracut-053 May 13 23:58:05.217773 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:58:05.223471 dracut-cmdline[209]: Using kernel command line parameters: rd.driver.pre=btrfs rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlyprintk=ttyS0,115200 flatcar.first_boot=detected flatcar.oem.id=azure flatcar.autologin verity.usrhash=8b3c5774a4242053287d41edc0d029958b7c22c131f7dd36b16a68182354e130 May 13 23:58:05.253179 systemd-resolved[210]: Positive Trust Anchors: May 13 23:58:05.253282 systemd-resolved[210]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:58:05.253344 systemd-resolved[210]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:58:05.277205 systemd-resolved[210]: Defaulting to hostname 'linux'. May 13 23:58:05.280615 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:58:05.286390 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. May 13 23:58:05.304202 kernel: SCSI subsystem initialized May 13 23:58:05.314199 kernel: Loading iSCSI transport class v2.0-870. May 13 23:58:05.325206 kernel: iscsi: registered transport (tcp) May 13 23:58:05.346275 kernel: iscsi: registered transport (qla4xxx) May 13 23:58:05.346363 kernel: QLogic iSCSI HBA Driver May 13 23:58:05.381176 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. May 13 23:58:05.386658 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... May 13 23:58:05.422589 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. May 13 23:58:05.422685 kernel: device-mapper: uevent: version 1.0.3 May 13 23:58:05.425924 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com May 13 23:58:05.466216 kernel: raid6: avx512x4 gen() 18615 MB/s May 13 23:58:05.485198 kernel: raid6: avx512x2 gen() 18786 MB/s May 13 23:58:05.504198 kernel: raid6: avx512x1 gen() 18852 MB/s May 13 23:58:05.523199 kernel: raid6: avx2x4 gen() 18801 MB/s May 13 23:58:05.542200 kernel: raid6: avx2x2 gen() 18687 MB/s May 13 23:58:05.562153 kernel: raid6: avx2x1 gen() 14119 MB/s May 13 23:58:05.562197 kernel: raid6: using algorithm avx512x1 gen() 18852 MB/s May 13 23:58:05.583519 kernel: raid6: .... xor() 26143 MB/s, rmw enabled May 13 23:58:05.583569 kernel: raid6: using avx512x2 recovery algorithm May 13 23:58:05.606210 kernel: xor: automatically using best checksumming function avx May 13 23:58:05.747210 kernel: Btrfs loaded, zoned=no, fsverity=no May 13 23:58:05.756442 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. May 13 23:58:05.760317 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:58:05.781008 systemd-udevd[396]: Using default interface naming scheme 'v255'. May 13 23:58:05.786147 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:58:05.796490 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... May 13 23:58:05.817944 dracut-pre-trigger[407]: rd.md=0: removing MD RAID activation May 13 23:58:05.843929 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:58:05.847308 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:58:05.895842 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:58:05.902314 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... 
May 13 23:58:05.927006 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. May 13 23:58:05.935203 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:58:05.938590 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:58:05.941531 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:58:05.949326 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... May 13 23:58:05.973478 kernel: cryptd: max_cpu_qlen set to 1000 May 13 23:58:05.991243 kernel: AVX2 version of gcm_enc/dec engaged. May 13 23:58:05.991293 kernel: AES CTR mode by8 optimization enabled May 13 23:58:05.997547 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. May 13 23:58:06.023199 kernel: hv_vmbus: Vmbus version:5.2 May 13 23:58:06.029533 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:58:06.032386 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:58:06.045896 kernel: pps_core: LinuxPPS API ver. 1 registered May 13 23:58:06.045959 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti May 13 23:58:06.046729 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:58:06.057366 kernel: PTP clock support registered May 13 23:58:06.059428 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:58:06.062431 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:58:06.073082 kernel: hv_utils: Registering HyperV Utility Driver May 13 23:58:06.073133 kernel: hv_vmbus: registering driver hv_utils May 13 23:58:06.083288 kernel: hv_vmbus: registering driver hv_storvsc May 13 23:58:06.073818 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:58:06.078515 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:58:06.082443 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:58:06.097290 kernel: hv_utils: Shutdown IC version 3.2 May 13 23:58:06.097334 kernel: hv_utils: Heartbeat IC version 3.0 May 13 23:58:06.452584 kernel: hv_utils: TimeSync IC version 4.0 May 13 23:58:06.452612 kernel: scsi host1: storvsc_host_t May 13 23:58:06.452554 systemd-resolved[210]: Clock change detected. Flushing caches. May 13 23:58:06.458082 kernel: scsi host0: storvsc_host_t May 13 23:58:06.462484 kernel: scsi 0:0:0:0: Direct-Access Msft Virtual Disk 1.0 PQ: 0 ANSI: 5 May 13 23:58:06.463658 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:58:06.463766 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:58:06.474789 kernel: scsi 0:0:0:2: CD-ROM Msft Virtual DVD-ROM 1.0 PQ: 0 ANSI: 0 May 13 23:58:06.480873 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 13 23:58:06.491360 kernel: hv_vmbus: registering driver hyperv_keyboard May 13 23:58:06.491388 kernel: hv_vmbus: registering driver hv_netvsc May 13 23:58:06.498454 kernel: input: AT Translated Set 2 keyboard as /devices/LNXSYSTM:00/LNXSYBUS:00/ACPI0004:00/VMBUS:00/d34b2567-b9b6-42b9-8778-0a4ec0b955bf/serio0/input/input0 May 13 23:58:06.501582 kernel: hid: raw HID events driver (C) Jiri Kosina May 13 23:58:06.509626 kernel: hv_vmbus: registering driver hid_hyperv May 13 23:58:06.509669 kernel: input: Microsoft Vmbus HID-compliant Mouse as /devices/0006:045E:0621.0001/input/input1 May 13 23:58:06.515858 kernel: hid-hyperv 0006:045E:0621.0001: input: VIRTUAL HID v0.01 Mouse [Microsoft Vmbus HID-compliant Mouse] on May 13 23:58:06.532490 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:58:06.543997 kernel: sr 0:0:0:2: [sr0] scsi-1 drive May 13 23:58:06.544329 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 May 13 23:58:06.545668 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... May 13 23:58:06.555456 kernel: sr 0:0:0:2: Attached scsi CD-ROM sr0 May 13 23:58:06.573441 kernel: sd 0:0:0:0: [sda] 63737856 512-byte logical blocks: (32.6 GB/30.4 GiB) May 13 23:58:06.573679 kernel: sd 0:0:0:0: [sda] 4096-byte physical blocks May 13 23:58:06.576441 kernel: sd 0:0:0:0: [sda] Write Protect is off May 13 23:58:06.576642 kernel: sd 0:0:0:0: [sda] Mode Sense: 0f 00 10 00 May 13 23:58:06.580496 kernel: sd 0:0:0:0: [sda] Write cache: disabled, read cache: enabled, supports DPO and FUA May 13 23:58:06.587446 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 23:58:06.587950 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:58:06.597149 kernel: sd 0:0:0:0: [sda] Attached SCSI disk May 13 23:58:06.713590 kernel: hv_netvsc 7ced8d40-4192-7ced-8d40-41927ced8d40 eth0: VF slot 1 added May 13 23:58:06.722649 kernel: hv_vmbus: registering driver hv_pci May 13 23:58:06.722702 kernel: hv_pci 3eb0c04a-eceb-48e3-afc7-ab774a573f09: PCI VMBus probing: Using version 0x10004 May 13 23:58:06.729594 kernel: hv_pci 3eb0c04a-eceb-48e3-afc7-ab774a573f09: PCI host bridge to bus eceb:00 May 13 23:58:06.729863 kernel: pci_bus eceb:00: root bus resource [mem 0xfe0000000-0xfe00fffff window] May 13 23:58:06.730426 kernel: pci_bus eceb:00: No busn resource found for root bus, will use [bus 00-ff] May 13 23:58:06.739434 kernel: pci eceb:00:02.0: [15b3:1016] type 00 class 0x020000 May 13 23:58:06.744447 kernel: pci eceb:00:02.0: reg 0x10: [mem 0xfe0000000-0xfe00fffff 64bit pref] May 13 23:58:06.747751 kernel: pci eceb:00:02.0: enabling Extended Tags May 13 23:58:06.758455 kernel: pci eceb:00:02.0: 0.000 Gb/s available PCIe bandwidth, limited by Unknown x0 link at eceb:00:02.0 (capable of 63.008 Gb/s with 8.0 GT/s PCIe x8 link) May 13 23:58:06.765177 kernel: pci_bus eceb:00: busn_res: [bus 00-ff] end is updated to 00 May 13 23:58:06.765495 kernel: pci eceb:00:02.0: BAR 0: assigned [mem 0xfe0000000-0xfe00fffff 64bit pref] May 13 23:58:06.927107 kernel: mlx5_core eceb:00:02.0: enabling device (0000 -> 0002) May 13 23:58:06.931433 kernel: mlx5_core eceb:00:02.0: firmware version: 14.30.5000 May 13 23:58:07.150244 kernel: hv_netvsc 7ced8d40-4192-7ced-8d40-41927ced8d40 eth0: VF registering: eth1 May 13 23:58:07.150603 kernel: mlx5_core eceb:00:02.0 eth1: joined to eth0 May 13 23:58:07.155484 kernel: mlx5_core eceb:00:02.0: MLX5E: StrdRq(0) RqSz(1024) StrdSz(256) RxCqeCmprss(0 basic) May 13 
23:58:07.163426 kernel: mlx5_core eceb:00:02.0 enP60651s1: renamed from eth1 May 13 23:58:11.561442 kernel: BTRFS: device label OEM devid 1 transid 15 /dev/sda6 scanned by (udev-worker) (462) May 13 23:58:11.581039 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 13 23:58:11.614958 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Virtual_Disk ROOT. May 13 23:58:11.631253 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Virtual_Disk EFI-SYSTEM. May 13 23:58:12.002435 kernel: BTRFS: device fsid d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 devid 1 transid 39 /dev/sda3 scanned by (udev-worker) (447) May 13 23:58:12.019767 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Virtual_Disk USR-A. May 13 23:58:12.023203 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Virtual_Disk USR-A. May 13 23:58:12.033547 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... May 13 23:58:13.064479 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 May 13 23:58:13.065178 disk-uuid[599]: The operation has completed successfully. May 13 23:58:13.147502 systemd[1]: disk-uuid.service: Deactivated successfully. May 13 23:58:13.147613 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. May 13 23:58:13.192748 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... May 13 23:58:13.210103 sh[688]: Success May 13 23:58:13.245870 kernel: device-mapper: verity: sha256 using implementation "sha256-avx2" May 13 23:58:13.610092 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. May 13 23:58:13.619504 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... May 13 23:58:13.629189 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. May 13 23:58:13.642427 kernel: BTRFS info (device dm-0): first mount of filesystem d2fbd39e-42cb-4ccb-87ec-99f56cfe77f8 May 13 23:58:13.642475 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm May 13 23:58:13.647935 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead May 13 23:58:13.650714 kernel: BTRFS info (device dm-0): disabling log replay at mount time May 13 23:58:13.653237 kernel: BTRFS info (device dm-0): using free space tree May 13 23:58:14.232232 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. May 13 23:58:14.237661 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. May 13 23:58:14.243186 systemd[1]: Starting ignition-setup.service - Ignition (setup)... May 13 23:58:14.255673 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... May 13 23:58:14.290731 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:58:14.290802 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:58:14.290824 kernel: BTRFS info (device sda6): using free space tree May 13 23:58:14.323617 kernel: BTRFS info (device sda6): auto enabling async discard May 13 23:58:14.330478 kernel: BTRFS info (device sda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:58:14.333792 systemd[1]: Finished ignition-setup.service - Ignition (setup). 
May 13 23:58:14.342765 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... May 13 23:58:14.348999 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:58:14.357099 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:58:14.393066 systemd-networkd[869]: lo: Link UP May 13 23:58:14.393076 systemd-networkd[869]: lo: Gained carrier May 13 23:58:14.395343 systemd-networkd[869]: Enumeration completed May 13 23:58:14.395578 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:58:14.397801 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:58:14.397806 systemd-networkd[869]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:58:14.399652 systemd[1]: Reached target network.target - Network. May 13 23:58:14.454430 kernel: mlx5_core eceb:00:02.0 enP60651s1: Link up May 13 23:58:14.484443 kernel: hv_netvsc 7ced8d40-4192-7ced-8d40-41927ced8d40 eth0: Data path switched to VF: enP60651s1 May 13 23:58:14.485455 systemd-networkd[869]: enP60651s1: Link UP May 13 23:58:14.485619 systemd-networkd[869]: eth0: Link UP May 13 23:58:14.485818 systemd-networkd[869]: eth0: Gained carrier May 13 23:58:14.485832 systemd-networkd[869]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:58:14.492614 systemd-networkd[869]: enP60651s1: Gained carrier May 13 23:58:14.520462 systemd-networkd[869]: eth0: DHCPv4 address 10.200.8.49/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 13 23:58:15.782569 ignition[864]: Ignition 2.20.0 May 13 23:58:15.782582 ignition[864]: Stage: fetch-offline May 13 23:58:15.784128 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:58:15.782621 ignition[864]: no configs at "/usr/lib/ignition/base.d" May 13 23:58:15.782631 ignition[864]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:58:15.782756 ignition[864]: parsed url from cmdline: "" May 13 23:58:15.782761 ignition[864]: no config URL provided May 13 23:58:15.782769 ignition[864]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:58:15.782780 ignition[864]: no config at "/usr/lib/ignition/user.ign" May 13 23:58:15.782787 ignition[864]: failed to fetch config: resource requires networking May 13 23:58:15.783017 ignition[864]: Ignition finished successfully May 13 23:58:15.809106 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
May 13 23:58:15.836220 ignition[878]: Ignition 2.20.0 May 13 23:58:15.836233 ignition[878]: Stage: fetch May 13 23:58:15.836455 ignition[878]: no configs at "/usr/lib/ignition/base.d" May 13 23:58:15.836467 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:58:15.836580 ignition[878]: parsed url from cmdline: "" May 13 23:58:15.836583 ignition[878]: no config URL provided May 13 23:58:15.836588 ignition[878]: reading system config file "/usr/lib/ignition/user.ign" May 13 23:58:15.836595 ignition[878]: no config at "/usr/lib/ignition/user.ign" May 13 23:58:15.836618 ignition[878]: GET http://169.254.169.254/metadata/instance/compute/userData?api-version=2021-01-01&format=text: attempt #1 May 13 23:58:15.863512 systemd-networkd[869]: eth0: Gained IPv6LL May 13 23:58:15.923586 ignition[878]: GET result: OK May 13 23:58:15.923706 ignition[878]: config has been read from IMDS userdata May 13 23:58:15.923739 ignition[878]: parsing config with SHA512: 44ecb17497e0b98ed70002efabd8954a97ceef25bcff4175c323910d6a2accf3f1d2ef2dd79700896a2170a0a45a3322ebe7ebd8f5c2ec2067f664efad0c4633 May 13 23:58:15.929349 unknown[878]: fetched base config from "system" May 13 23:58:15.929361 unknown[878]: fetched base config from "system" May 13 23:58:15.929865 ignition[878]: fetch: fetch complete May 13 23:58:15.929369 unknown[878]: fetched user config from "azure" May 13 23:58:15.929870 ignition[878]: fetch: fetch passed May 13 23:58:15.931462 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). May 13 23:58:15.929921 ignition[878]: Ignition finished successfully May 13 23:58:15.935596 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... May 13 23:58:15.963131 ignition[884]: Ignition 2.20.0 May 13 23:58:15.963143 ignition[884]: Stage: kargs May 13 23:58:15.963345 ignition[884]: no configs at "/usr/lib/ignition/base.d" May 13 23:58:15.963358 ignition[884]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:58:15.964204 ignition[884]: kargs: kargs passed May 13 23:58:15.964248 ignition[884]: Ignition finished successfully May 13 23:58:15.971674 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). May 13 23:58:15.978536 systemd[1]: Starting ignition-disks.service - Ignition (disks)... May 13 23:58:16.005737 ignition[891]: Ignition 2.20.0 May 13 23:58:16.005748 ignition[891]: Stage: disks May 13 23:58:16.007807 systemd[1]: Finished ignition-disks.service - Ignition (disks). May 13 23:58:16.005957 ignition[891]: no configs at "/usr/lib/ignition/base.d" May 13 23:58:16.005972 ignition[891]: no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:58:16.006868 ignition[891]: disks: disks passed May 13 23:58:16.006910 ignition[891]: Ignition finished successfully May 13 23:58:16.021346 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. May 13 23:58:16.024076 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. May 13 23:58:16.027200 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:58:16.037912 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:58:16.040726 systemd[1]: Reached target basic.target - Basic System. May 13 23:58:16.048885 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... 
May 13 23:58:16.104029 systemd-fsck[899]: ROOT: clean, 14/7326000 files, 477710/7359488 blocks May 13 23:58:16.108899 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. May 13 23:58:16.116298 systemd[1]: Mounting sysroot.mount - /sysroot... May 13 23:58:16.211427 kernel: EXT4-fs (sda9): mounted filesystem c413e98b-da35-46b1-9852-45706e1b1f52 r/w with ordered data mode. Quota mode: none. May 13 23:58:16.211861 systemd[1]: Mounted sysroot.mount - /sysroot. May 13 23:58:16.216435 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. May 13 23:58:16.279021 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:58:16.283458 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... May 13 23:58:16.291675 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... May 13 23:58:16.295685 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). May 13 23:58:16.295719 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:58:16.303540 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. May 13 23:58:16.320668 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... May 13 23:58:16.329455 kernel: BTRFS: device label OEM devid 1 transid 16 /dev/sda6 scanned by mount (910) May 13 23:58:16.337288 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:58:16.337350 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:58:16.339289 kernel: BTRFS info (device sda6): using free space tree May 13 23:58:16.344425 kernel: BTRFS info (device sda6): auto enabling async discard May 13 23:58:16.345802 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:58:16.503584 systemd-networkd[869]: enP60651s1: Gained IPv6LL May 13 23:58:17.422217 initrd-setup-root[936]: cut: /sysroot/etc/passwd: No such file or directory May 13 23:58:17.455125 initrd-setup-root[943]: cut: /sysroot/etc/group: No such file or directory May 13 23:58:17.476677 coreos-metadata[912]: May 13 23:58:17.476 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 13 23:58:17.480882 coreos-metadata[912]: May 13 23:58:17.478 INFO Fetch successful May 13 23:58:17.480882 coreos-metadata[912]: May 13 23:58:17.478 INFO Fetching http://169.254.169.254/metadata/instance/compute/name?api-version=2017-08-01&format=text: Attempt #1 May 13 23:58:17.489039 coreos-metadata[912]: May 13 23:58:17.487 INFO Fetch successful May 13 23:58:17.491401 coreos-metadata[912]: May 13 23:58:17.489 INFO wrote hostname ci-4284.0.0-n-b62cb48025 to /sysroot/etc/hostname May 13 23:58:17.490306 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 13 23:58:17.504221 initrd-setup-root[955]: cut: /sysroot/etc/shadow: No such file or directory May 13 23:58:17.588302 initrd-setup-root[962]: cut: /sysroot/etc/gshadow: No such file or directory May 13 23:58:22.176490 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. May 13 23:58:22.183396 systemd[1]: Starting ignition-mount.service - Ignition (mount)... May 13 23:58:22.194497 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... May 13 23:58:22.202755 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
May 13 23:58:22.205791 kernel: BTRFS info (device sda6): last unmount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:58:22.236226 ignition[1029]: INFO : Ignition 2.20.0 May 13 23:58:22.236226 ignition[1029]: INFO : Stage: mount May 13 23:58:22.236226 ignition[1029]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:58:22.236226 ignition[1029]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:58:22.238567 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. May 13 23:58:22.240073 ignition[1029]: INFO : mount: mount passed May 13 23:58:22.240073 ignition[1029]: INFO : Ignition finished successfully May 13 23:58:22.255316 systemd[1]: Finished ignition-mount.service - Ignition (mount). May 13 23:58:22.260402 systemd[1]: Starting ignition-files.service - Ignition (files)... May 13 23:58:22.276619 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... May 13 23:58:22.302472 kernel: BTRFS: device label OEM devid 1 transid 17 /dev/sda6 scanned by mount (1042) May 13 23:58:22.309051 kernel: BTRFS info (device sda6): first mount of filesystem c0e200fb-7321-4d2d-86ff-b28bdae5fafc May 13 23:58:22.309144 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm May 13 23:58:22.311601 kernel: BTRFS info (device sda6): using free space tree May 13 23:58:22.319437 kernel: BTRFS info (device sda6): auto enabling async discard May 13 23:58:22.363725 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. May 13 23:58:22.398789 ignition[1060]: INFO : Ignition 2.20.0 May 13 23:58:22.398789 ignition[1060]: INFO : Stage: files May 13 23:58:22.402901 ignition[1060]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:58:22.402901 ignition[1060]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:58:22.402901 ignition[1060]: DEBUG : files: compiled without relabeling support, skipping May 13 23:58:22.411519 ignition[1060]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" May 13 23:58:22.411519 ignition[1060]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" May 13 23:58:22.482872 ignition[1060]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" May 13 23:58:22.486952 ignition[1060]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" May 13 23:58:22.486952 ignition[1060]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" May 13 23:58:22.483492 unknown[1060]: wrote ssh authorized keys file for user: core May 13 23:58:22.727823 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 23:58:22.732860 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 May 13 23:58:22.781999 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK May 13 23:58:22.922054 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" May 13 23:58:22.927224 ignition[1060]: INFO : files: 
createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 23:58:22.927224 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://github.com/flatcar/sysext-bakery/releases/download/latest/kubernetes-v1.30.1-x86-64.raw: attempt #1 May 13 23:58:23.421546 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK May 13 23:58:23.709220 ignition[1060]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.30.1-x86-64.raw" May 13 23:58:23.709220 ignition[1060]: INFO : files: op(b): [started] processing unit "prepare-helm.service" May 13 23:58:23.721289 ignition[1060]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:58:23.726101 ignition[1060]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" May 13 23:58:23.726101 ignition[1060]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" May 13 23:58:23.726101 ignition[1060]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" May 13 23:58:23.726101 ignition[1060]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" May 13 23:58:23.726101 ignition[1060]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" May 13 23:58:23.726101 ignition[1060]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" May 13 23:58:23.726101 ignition[1060]: INFO : files: files passed May 13 23:58:23.726101 ignition[1060]: INFO : Ignition 
finished successfully May 13 23:58:23.723093 systemd[1]: Finished ignition-files.service - Ignition (files). May 13 23:58:23.731555 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... May 13 23:58:23.761141 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... May 13 23:58:23.769685 systemd[1]: ignition-quench.service: Deactivated successfully. May 13 23:58:23.769789 systemd[1]: Finished ignition-quench.service - Ignition (record completion). May 13 23:58:23.790132 initrd-setup-root-after-ignition[1089]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:58:23.790132 initrd-setup-root-after-ignition[1089]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory May 13 23:58:23.802337 initrd-setup-root-after-ignition[1093]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory May 13 23:58:23.794060 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:58:23.797966 systemd[1]: Reached target ignition-complete.target - Ignition Complete. May 13 23:58:23.807479 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... May 13 23:58:23.850610 systemd[1]: initrd-parse-etc.service: Deactivated successfully. May 13 23:58:23.850727 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. May 13 23:58:23.859275 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. May 13 23:58:23.862047 systemd[1]: Reached target initrd.target - Initrd Default Target. May 13 23:58:23.866933 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. May 13 23:58:23.869560 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... May 13 23:58:23.890655 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:58:23.898221 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... May 13 23:58:23.918141 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. May 13 23:58:23.924221 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:58:23.924432 systemd[1]: Stopped target timers.target - Timer Units. May 13 23:58:23.924819 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. May 13 23:58:23.924930 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. May 13 23:58:23.925616 systemd[1]: Stopped target initrd.target - Initrd Default Target. May 13 23:58:23.926456 systemd[1]: Stopped target basic.target - Basic System. May 13 23:58:23.926898 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. May 13 23:58:23.927323 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. May 13 23:58:23.927749 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. May 13 23:58:23.928170 systemd[1]: Stopped target remote-fs.target - Remote File Systems. May 13 23:58:23.928588 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. May 13 23:58:23.929093 systemd[1]: Stopped target sysinit.target - System Initialization. May 13 23:58:23.929502 systemd[1]: Stopped target local-fs.target - Local File Systems. May 13 23:58:23.929895 systemd[1]: Stopped target swap.target - Swaps. 
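The files stage that completes above was driven by an Ignition v3 config delivered via the IMDS userData fetched earlier; the config itself is not reproduced in the journal. A trimmed, hypothetical config showing the same kinds of operations (write a file, enable prepare-helm.service) could be assembled like this; the version string, file contents, and unit body here are placeholders, not the node's real config:

    # Hypothetical Ignition v3 config mirroring the operations logged above.
    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {
                    "path": "/home/core/install.sh",
                    "mode": 0o755,  # serialized as decimal 493 in the JSON
                    "contents": {"source": "data:,echo%20hello%0A"},
                }
            ]
        },
        "systemd": {
            "units": [
                {
                    "name": "prepare-helm.service",
                    "enabled": True,
                    "contents": (
                        "[Unit]\nDescription=Unpack helm to /opt/bin\n"
                        "[Service]\nType=oneshot\n"
                        "ExecStart=/usr/bin/tar --strip-components=1 -C /opt/bin "
                        "-xf /opt/helm-v3.13.2-linux-amd64.tar.gz linux-amd64/helm\n"
                        "[Install]\nWantedBy=multi-user.target\n"
                    ),
                }
            ]
        },
    }

    print(json.dumps(config, indent=2))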
May 13 23:58:23.930291 systemd[1]: dracut-pre-mount.service: Deactivated successfully. May 13 23:58:23.930399 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. May 13 23:58:23.931144 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. May 13 23:58:23.931610 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:58:23.931971 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. May 13 23:58:23.970016 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:58:23.977748 systemd[1]: dracut-initqueue.service: Deactivated successfully. May 13 23:58:23.977893 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. May 13 23:58:23.986160 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. May 13 23:58:23.986327 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. May 13 23:58:24.034923 systemd[1]: ignition-files.service: Deactivated successfully. May 13 23:58:24.035166 systemd[1]: Stopped ignition-files.service - Ignition (files). May 13 23:58:24.039835 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. May 13 23:58:24.039992 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. May 13 23:58:24.054510 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... May 13 23:58:24.059257 systemd[1]: kmod-static-nodes.service: Deactivated successfully. May 13 23:58:24.060196 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:58:24.075602 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... May 13 23:58:24.078219 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. May 13 23:58:24.078521 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:58:24.082299 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. May 13 23:58:24.082532 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. May 13 23:58:24.110114 ignition[1113]: INFO : Ignition 2.20.0 May 13 23:58:24.110114 ignition[1113]: INFO : Stage: umount May 13 23:58:24.110114 ignition[1113]: INFO : no configs at "/usr/lib/ignition/base.d" May 13 23:58:24.110114 ignition[1113]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/azure" May 13 23:58:24.110114 ignition[1113]: INFO : umount: umount passed May 13 23:58:24.110114 ignition[1113]: INFO : Ignition finished successfully May 13 23:58:24.091033 systemd[1]: initrd-cleanup.service: Deactivated successfully. May 13 23:58:24.091163 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. May 13 23:58:24.111773 systemd[1]: ignition-mount.service: Deactivated successfully. May 13 23:58:24.111886 systemd[1]: Stopped ignition-mount.service - Ignition (mount). May 13 23:58:24.117450 systemd[1]: sysroot-boot.mount: Deactivated successfully. May 13 23:58:24.117847 systemd[1]: ignition-disks.service: Deactivated successfully. May 13 23:58:24.117890 systemd[1]: Stopped ignition-disks.service - Ignition (disks). May 13 23:58:24.121430 systemd[1]: ignition-kargs.service: Deactivated successfully. May 13 23:58:24.121484 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). May 13 23:58:24.131842 systemd[1]: ignition-fetch.service: Deactivated successfully. May 13 23:58:24.131911 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
May 13 23:58:24.136452 systemd[1]: Stopped target network.target - Network. May 13 23:58:24.140781 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. May 13 23:58:24.140882 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). May 13 23:58:24.143794 systemd[1]: Stopped target paths.target - Path Units. May 13 23:58:24.143948 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. May 13 23:58:24.151446 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. May 13 23:58:24.154574 systemd[1]: Stopped target slices.target - Slice Units. May 13 23:58:24.156902 systemd[1]: Stopped target sockets.target - Socket Units. May 13 23:58:24.196710 systemd[1]: iscsid.socket: Deactivated successfully. May 13 23:58:24.196777 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. May 13 23:58:24.201524 systemd[1]: iscsiuio.socket: Deactivated successfully. May 13 23:58:24.201578 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. May 13 23:58:24.211785 systemd[1]: ignition-setup.service: Deactivated successfully. May 13 23:58:24.211878 systemd[1]: Stopped ignition-setup.service - Ignition (setup). May 13 23:58:24.218896 systemd[1]: ignition-setup-pre.service: Deactivated successfully. May 13 23:58:24.218967 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. May 13 23:58:24.224608 systemd[1]: Stopping systemd-networkd.service - Network Configuration... May 13 23:58:24.232549 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... May 13 23:58:24.238665 systemd[1]: systemd-networkd.service: Deactivated successfully. May 13 23:58:24.239370 systemd[1]: Stopped systemd-networkd.service - Network Configuration. May 13 23:58:24.247435 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. May 13 23:58:24.250632 systemd[1]: systemd-resolved.service: Deactivated successfully. May 13 23:58:24.250768 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. May 13 23:58:24.259860 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. May 13 23:58:24.260758 systemd[1]: systemd-networkd.socket: Deactivated successfully. May 13 23:58:24.260820 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. May 13 23:58:24.273359 systemd[1]: Stopping network-cleanup.service - Network Cleanup... May 13 23:58:24.275738 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. May 13 23:58:24.275815 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. May 13 23:58:24.281493 systemd[1]: systemd-sysctl.service: Deactivated successfully. May 13 23:58:24.283980 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. May 13 23:58:24.294357 systemd[1]: systemd-modules-load.service: Deactivated successfully. May 13 23:58:24.294479 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. May 13 23:58:24.302229 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. May 13 23:58:24.302298 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:58:24.310955 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:58:24.314765 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. 
May 13 23:58:24.314827 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. May 13 23:58:24.334365 systemd[1]: systemd-udevd.service: Deactivated successfully. May 13 23:58:24.334617 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:58:24.340857 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. May 13 23:58:24.340898 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. May 13 23:58:24.351469 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. May 13 23:58:24.351541 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:58:24.356640 systemd[1]: dracut-pre-udev.service: Deactivated successfully. May 13 23:58:24.356699 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. May 13 23:58:24.366518 systemd[1]: dracut-cmdline.service: Deactivated successfully. May 13 23:58:24.366596 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. May 13 23:58:24.371521 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. May 13 23:58:24.371594 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. May 13 23:58:24.384429 kernel: hv_netvsc 7ced8d40-4192-7ced-8d40-41927ced8d40 eth0: Data path switched from VF: enP60651s1 May 13 23:58:24.386196 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... May 13 23:58:24.389262 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. May 13 23:58:24.389354 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. May 13 23:58:24.395716 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:58:24.395790 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:58:24.410954 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. May 13 23:58:24.411040 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:58:24.411872 systemd[1]: network-cleanup.service: Deactivated successfully. May 13 23:58:24.411967 systemd[1]: Stopped network-cleanup.service - Network Cleanup. May 13 23:58:24.417711 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. May 13 23:58:24.417819 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. May 13 23:58:26.925672 systemd[1]: sysroot-boot.service: Deactivated successfully. May 13 23:58:26.925831 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. May 13 23:58:26.926254 systemd[1]: Reached target initrd-switch-root.target - Switch Root. May 13 23:58:26.926951 systemd[1]: initrd-setup-root.service: Deactivated successfully. May 13 23:58:26.927015 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. May 13 23:58:26.930547 systemd[1]: Starting initrd-switch-root.service - Switch Root... May 13 23:58:27.020418 systemd[1]: Switching root. May 13 23:58:27.076805 systemd-journald[177]: Journal stopped May 13 23:58:35.293854 systemd-journald[177]: Received SIGTERM from PID 1 (systemd). 
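Between "Switching root." above and the next batch of entries, logging is handed over from the initrd journald (PID 177) to the instance in the real root. The timestamp prefixes make the hand-over gap easy to measure; a small sketch (the year is assumed, since the prefix omits it):

    # Parse the "May 13 23:58:27.076805"-style prefixes used in this journal dump and
    # measure the gap across the switch-root; timestamps are copied from the entries above.
    from datetime import datetime

    FMT = "%Y %b %d %H:%M:%S.%f"

    def ts(prefix, year=2025):
        return datetime.strptime(f"{year} {prefix}", FMT)

    journal_stopped = ts("May 13 23:58:27.076805")   # "Journal stopped"
    journal_resumed = ts("May 13 23:58:35.293854")   # first entry stamped after the hand-over
    print((journal_resumed - journal_stopped).total_seconds(), "seconds")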
May 13 23:58:35.293897 kernel: SELinux: policy capability network_peer_controls=1 May 13 23:58:35.293923 kernel: SELinux: policy capability open_perms=1 May 13 23:58:35.293940 kernel: SELinux: policy capability extended_socket_class=1 May 13 23:58:35.293953 kernel: SELinux: policy capability always_check_network=0 May 13 23:58:35.293965 kernel: SELinux: policy capability cgroup_seclabel=1 May 13 23:58:35.293980 kernel: SELinux: policy capability nnp_nosuid_transition=1 May 13 23:58:35.293993 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 May 13 23:58:35.294008 kernel: SELinux: policy capability ioctl_skip_cloexec=0 May 13 23:58:35.294021 kernel: audit: type=1403 audit(1747180710.235:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 May 13 23:58:35.294035 systemd[1]: Successfully loaded SELinux policy in 64.503ms. May 13 23:58:35.294050 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.066ms. May 13 23:58:35.294066 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) May 13 23:58:35.294081 systemd[1]: Detected virtualization microsoft. May 13 23:58:35.294100 systemd[1]: Detected architecture x86-64. May 13 23:58:35.294116 systemd[1]: Detected first boot. May 13 23:58:35.294131 systemd[1]: Hostname set to ci-4284.0.0-n-b62cb48025. May 13 23:58:35.294147 systemd[1]: Initializing machine ID from random generator. May 13 23:58:35.294164 zram_generator::config[1158]: No configuration found. May 13 23:58:35.294183 kernel: Guest personality initialized and is inactive May 13 23:58:35.294199 kernel: VMCI host device registered (name=vmci, major=10, minor=124) May 13 23:58:35.294214 kernel: Initialized host personality May 13 23:58:35.294229 kernel: NET: Registered PF_VSOCK protocol family May 13 23:58:35.294244 systemd[1]: Populated /etc with preset unit settings. May 13 23:58:35.294263 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. May 13 23:58:35.294278 systemd[1]: initrd-switch-root.service: Deactivated successfully. May 13 23:58:35.294295 systemd[1]: Stopped initrd-switch-root.service - Switch Root. May 13 23:58:35.294311 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. May 13 23:58:35.294330 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. May 13 23:58:35.294347 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. May 13 23:58:35.294362 systemd[1]: Created slice system-getty.slice - Slice /system/getty. May 13 23:58:35.294377 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. May 13 23:58:35.294393 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. May 13 23:58:35.294423 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. May 13 23:58:35.294440 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. May 13 23:58:35.294460 systemd[1]: Created slice user.slice - User and Session Slice. May 13 23:58:35.294478 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. May 13 23:58:35.294494 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
May 13 23:58:35.294511 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. May 13 23:58:35.294527 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. May 13 23:58:35.294549 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. May 13 23:58:35.294566 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... May 13 23:58:35.294583 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... May 13 23:58:35.294604 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). May 13 23:58:35.294622 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. May 13 23:58:35.294639 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. May 13 23:58:35.294656 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. May 13 23:58:35.294673 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. May 13 23:58:35.294690 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. May 13 23:58:35.294706 systemd[1]: Reached target remote-fs.target - Remote File Systems. May 13 23:58:35.294727 systemd[1]: Reached target slices.target - Slice Units. May 13 23:58:35.294744 systemd[1]: Reached target swap.target - Swaps. May 13 23:58:35.294760 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. May 13 23:58:35.294778 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. May 13 23:58:35.294796 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. May 13 23:58:35.294814 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. May 13 23:58:35.294834 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. May 13 23:58:35.294852 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. May 13 23:58:35.294869 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. May 13 23:58:35.294886 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... May 13 23:58:35.294905 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... May 13 23:58:35.294922 systemd[1]: Mounting media.mount - External Media Directory... May 13 23:58:35.294940 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:58:35.294961 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... May 13 23:58:35.294979 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... May 13 23:58:35.294997 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... May 13 23:58:35.295016 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). May 13 23:58:35.295034 systemd[1]: Reached target machines.target - Containers. May 13 23:58:35.295053 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... May 13 23:58:35.295072 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:58:35.295090 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... 
May 13 23:58:35.295107 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... May 13 23:58:35.295128 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:58:35.295145 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:58:35.295163 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:58:35.295182 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... May 13 23:58:35.295199 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:58:35.295218 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). May 13 23:58:35.295236 systemd[1]: systemd-fsck-root.service: Deactivated successfully. May 13 23:58:35.295254 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. May 13 23:58:35.295276 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. May 13 23:58:35.295295 systemd[1]: Stopped systemd-fsck-usr.service. May 13 23:58:35.295314 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:58:35.295332 systemd[1]: Starting systemd-journald.service - Journal Service... May 13 23:58:35.295350 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... May 13 23:58:35.295367 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... May 13 23:58:35.295383 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... May 13 23:58:35.295401 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... May 13 23:58:35.295448 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... May 13 23:58:35.295466 systemd[1]: verity-setup.service: Deactivated successfully. May 13 23:58:35.295482 systemd[1]: Stopped verity-setup.service. May 13 23:58:35.295498 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:58:35.295515 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. May 13 23:58:35.295533 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. May 13 23:58:35.295551 systemd[1]: Mounted media.mount - External Media Directory. May 13 23:58:35.295568 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. May 13 23:58:35.295588 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. May 13 23:58:35.295605 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. May 13 23:58:35.295622 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. May 13 23:58:35.295639 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:58:35.295655 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:58:35.295671 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:58:35.295687 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:58:35.295705 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. 
May 13 23:58:35.295721 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. May 13 23:58:35.295742 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. May 13 23:58:35.295793 systemd-journald[1241]: Collecting audit messages is disabled. May 13 23:58:35.296498 systemd[1]: Reached target network-pre.target - Preparation for Network. May 13 23:58:35.296521 systemd-journald[1241]: Journal started May 13 23:58:35.296555 systemd-journald[1241]: Runtime Journal (/run/log/journal/b9b2c68d6acd4c8e828dd7bb164968e4) is 8M, max 158.7M, 150.7M free. May 13 23:58:34.583920 systemd[1]: Queued start job for default target multi-user.target. May 13 23:58:34.596655 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. May 13 23:58:34.597032 systemd[1]: systemd-journald.service: Deactivated successfully. May 13 23:58:35.311834 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). May 13 23:58:35.318436 systemd[1]: Reached target local-fs.target - Local File Systems. May 13 23:58:35.318475 kernel: loop: module loaded May 13 23:58:35.329424 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. May 13 23:58:35.339525 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... May 13 23:58:35.351453 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... May 13 23:58:35.364423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:58:35.374424 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... May 13 23:58:35.386583 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:58:35.398821 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... May 13 23:58:35.407426 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... May 13 23:58:35.418500 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... May 13 23:58:35.424430 systemd[1]: Started systemd-journald.service - Journal Service. May 13 23:58:35.428473 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:58:35.428668 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:58:35.431754 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. May 13 23:58:35.435031 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. May 13 23:58:35.447573 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... May 13 23:58:35.450561 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:58:35.454096 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... May 13 23:58:35.469199 udevadm[1276]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. May 13 23:58:35.478740 systemd[1]: modprobe@configfs.service: Deactivated successfully. May 13 23:58:35.478938 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. 
May 13 23:58:35.483510 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... May 13 23:58:35.497380 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. May 13 23:58:35.524225 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. May 13 23:58:35.537430 kernel: fuse: init (API version 7.39) May 13 23:58:35.538268 systemd[1]: modprobe@fuse.service: Deactivated successfully. May 13 23:58:35.538519 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. May 13 23:58:35.542625 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... May 13 23:58:35.557561 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. May 13 23:58:35.663533 systemd-journald[1241]: Time spent on flushing to /var/log/journal/b9b2c68d6acd4c8e828dd7bb164968e4 is 32.762ms for 967 entries. May 13 23:58:35.663533 systemd-journald[1241]: System Journal (/var/log/journal/b9b2c68d6acd4c8e828dd7bb164968e4) is 8M, max 2.6G, 2.6G free. May 13 23:58:36.130755 systemd-journald[1241]: Received client request to flush runtime journal. May 13 23:58:36.130818 kernel: ACPI: bus type drm_connector registered May 13 23:58:36.130840 kernel: loop0: detected capacity change from 0 to 151640 May 13 23:58:35.707434 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:58:35.707617 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:58:36.075484 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. May 13 23:58:36.084697 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. May 13 23:58:36.095871 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... May 13 23:58:36.101000 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. May 13 23:58:36.133824 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. May 13 23:58:36.137800 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. May 13 23:58:36.144573 systemd[1]: Starting systemd-sysusers.service - Create System Users... May 13 23:58:36.681949 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. May 13 23:58:36.683485 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. May 13 23:58:37.539655 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher May 13 23:58:37.561448 kernel: loop1: detected capacity change from 0 to 210664 May 13 23:58:37.583329 systemd[1]: Finished systemd-sysusers.service - Create System Users. May 13 23:58:37.588687 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... May 13 23:58:37.598440 kernel: loop2: detected capacity change from 0 to 109808 May 13 23:58:37.937234 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. May 13 23:58:37.937811 systemd-tmpfiles[1320]: ACLs are not supported, ignoring. May 13 23:58:37.944505 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. 
May 13 23:58:38.567459 kernel: loop3: detected capacity change from 0 to 28424 May 13 23:58:39.139442 kernel: loop4: detected capacity change from 0 to 151640 May 13 23:58:39.161816 kernel: loop5: detected capacity change from 0 to 210664 May 13 23:58:39.179656 kernel: loop6: detected capacity change from 0 to 109808 May 13 23:58:39.195582 kernel: loop7: detected capacity change from 0 to 28424 May 13 23:58:39.204513 (sd-merge)[1325]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-azure'. May 13 23:58:39.205158 (sd-merge)[1325]: Merged extensions into '/usr'. May 13 23:58:39.213781 systemd[1]: Reload requested from client PID 1258 ('systemd-sysext') (unit systemd-sysext.service)... May 13 23:58:39.213802 systemd[1]: Reloading... May 13 23:58:39.293474 zram_generator::config[1349]: No configuration found. May 13 23:58:39.509353 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:58:39.610375 systemd[1]: Reloading finished in 395 ms. May 13 23:58:39.629025 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. May 13 23:58:39.632807 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. May 13 23:58:39.649549 systemd[1]: Starting ensure-sysext.service... May 13 23:58:39.655558 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... May 13 23:58:39.664571 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... May 13 23:58:39.693594 systemd[1]: Reload requested from client PID 1412 ('systemctl') (unit ensure-sysext.service)... May 13 23:58:39.693613 systemd[1]: Reloading... May 13 23:58:39.696863 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. May 13 23:58:39.697241 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. May 13 23:58:39.702291 systemd-tmpfiles[1413]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. May 13 23:58:39.702725 systemd-tmpfiles[1413]: ACLs are not supported, ignoring. May 13 23:58:39.702812 systemd-tmpfiles[1413]: ACLs are not supported, ignoring. May 13 23:58:39.723603 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:58:39.723620 systemd-tmpfiles[1413]: Skipping /boot May 13 23:58:39.727272 systemd-udevd[1414]: Using default interface naming scheme 'v255'. May 13 23:58:39.748315 systemd-tmpfiles[1413]: Detected autofs mount point /boot during canonicalization of boot. May 13 23:58:39.748503 systemd-tmpfiles[1413]: Skipping /boot May 13 23:58:39.825483 zram_generator::config[1445]: No configuration found. 
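systemd-sysext above merges the containerd-flatcar, docker-flatcar, kubernetes, and oem-azure extension images into /usr and then reloads units. A quick, illustrative way to list which sysext images a host carries (these directories are the standard sysext search locations; the merge itself is performed by systemd-sysext, not by this script):

    # List sysext images in the usual search locations considered by systemd-sysext.
    from pathlib import Path

    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    for d in SEARCH_DIRS:
        p = Path(d)
        if not p.is_dir():
            continue
        for entry in sorted(p.iterdir()):
            # Raw images end in .raw (e.g. the kubernetes.raw symlink written by the
            # files stage above); plain directories are accepted as well.
            print(f"{d}: {entry.name}")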
May 13 23:58:40.103427 kernel: mousedev: PS/2 mouse device common for all mice May 13 23:58:40.113454 kernel: hv_vmbus: registering driver hv_balloon May 13 23:58:40.118515 kernel: hv_balloon: Using Dynamic Memory protocol version 2.0 May 13 23:58:40.118579 kernel: hv_vmbus: registering driver hyperv_fb May 13 23:58:40.123663 kernel: hyperv_fb: Synthvid Version major 3, minor 5 May 13 23:58:40.129490 kernel: hyperv_fb: Screen resolution: 1024x768, Color depth: 32, Frame buffer size: 8388608 May 13 23:58:40.137858 kernel: Console: switching to colour dummy device 80x25 May 13 23:58:40.140427 kernel: Console: switching to colour frame buffer device 128x48 May 13 23:58:40.143228 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:58:40.403427 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1478) May 13 23:58:40.425264 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. May 13 23:58:40.425954 systemd[1]: Reloading finished in 731 ms. May 13 23:58:40.453623 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. May 13 23:58:40.470291 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. May 13 23:58:40.571920 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:58:40.576887 systemd[1]: Starting audit-rules.service - Load Audit Rules... May 13 23:58:40.591049 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... May 13 23:58:40.594706 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. May 13 23:58:40.596380 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... May 13 23:58:40.618324 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... May 13 23:58:40.629724 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... May 13 23:58:40.640582 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... May 13 23:58:40.643555 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. May 13 23:58:40.643802 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). May 13 23:58:40.648511 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... May 13 23:58:40.659872 systemd[1]: Starting systemd-networkd.service - Network Configuration... May 13 23:58:40.666435 kernel: kvm_intel: Using Hyper-V Enlightened VMCS May 13 23:58:40.666509 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... May 13 23:58:40.672915 systemd[1]: Reached target time-set.target - System Time Set. May 13 23:58:40.687829 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... May 13 23:58:40.702872 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
May 13 23:58:40.705918 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). May 13 23:58:40.720234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. May 13 23:58:40.721292 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. May 13 23:58:40.725991 systemd[1]: modprobe@drm.service: Deactivated successfully. May 13 23:58:40.726401 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. May 13 23:58:40.773043 systemd[1]: Finished ensure-sysext.service. May 13 23:58:40.779856 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. May 13 23:58:40.780111 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. May 13 23:58:40.788222 systemd[1]: modprobe@loop.service: Deactivated successfully. May 13 23:58:40.788502 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. May 13 23:58:40.830808 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. May 13 23:58:40.854148 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. May 13 23:58:40.863271 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Virtual_Disk OEM. May 13 23:58:40.874965 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... May 13 23:58:40.879688 augenrules[1642]: No rules May 13 23:58:40.879712 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). May 13 23:58:40.879905 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. May 13 23:58:40.882143 systemd[1]: Starting systemd-userdbd.service - User Database Manager... May 13 23:58:40.890391 systemd[1]: audit-rules.service: Deactivated successfully. May 13 23:58:40.890675 systemd[1]: Finished audit-rules.service - Load Audit Rules. May 13 23:58:40.894105 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. May 13 23:58:40.894348 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:58:40.899863 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. May 13 23:58:40.901895 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. May 13 23:58:40.916565 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... May 13 23:58:40.916705 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). May 13 23:58:40.920881 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. May 13 23:58:40.924683 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... May 13 23:58:40.933757 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. May 13 23:58:40.964718 lvm[1654]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:58:40.973308 systemd[1]: Started systemd-userdbd.service - User Database Manager. May 13 23:58:41.004961 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. 
May 13 23:58:41.014368 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. May 13 23:58:41.021591 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... May 13 23:58:41.027386 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. May 13 23:58:41.048839 lvm[1668]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. May 13 23:58:41.083969 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. May 13 23:58:41.110225 systemd-networkd[1610]: lo: Link UP May 13 23:58:41.110476 systemd-networkd[1610]: lo: Gained carrier May 13 23:58:41.114662 systemd-networkd[1610]: Enumeration completed May 13 23:58:41.115100 systemd-networkd[1610]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:58:41.115106 systemd-networkd[1610]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:58:41.115513 systemd[1]: Started systemd-networkd.service - Network Configuration. May 13 23:58:41.122583 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... May 13 23:58:41.122892 systemd-resolved[1611]: Positive Trust Anchors: May 13 23:58:41.123109 systemd-resolved[1611]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d May 13 23:58:41.123183 systemd-resolved[1611]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test May 13 23:58:41.128313 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... May 13 23:58:41.136157 systemd-resolved[1611]: Using system hostname 'ci-4284.0.0-n-b62cb48025'. May 13 23:58:41.182430 kernel: mlx5_core eceb:00:02.0 enP60651s1: Link up May 13 23:58:41.203621 kernel: hv_netvsc 7ced8d40-4192-7ced-8d40-41927ced8d40 eth0: Data path switched to VF: enP60651s1 May 13 23:58:41.205198 systemd-networkd[1610]: enP60651s1: Link UP May 13 23:58:41.205356 systemd-networkd[1610]: eth0: Link UP May 13 23:58:41.205362 systemd-networkd[1610]: eth0: Gained carrier May 13 23:58:41.205388 systemd-networkd[1610]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:58:41.206285 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. May 13 23:58:41.210020 systemd[1]: Started systemd-resolved.service - Network Name Resolution. May 13 23:58:41.213759 systemd-networkd[1610]: enP60651s1: Gained carrier May 13 23:58:41.214223 systemd[1]: Reached target network.target - Network. May 13 23:58:41.216478 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
May 13 23:58:41.247468 systemd-networkd[1610]: eth0: DHCPv4 address 10.200.8.49/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 13 23:58:42.359586 systemd-networkd[1610]: eth0: Gained IPv6LL May 13 23:58:42.362590 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. May 13 23:58:42.366473 systemd[1]: Reached target network-online.target - Network is Online. May 13 23:58:42.495488 ldconfig[1254]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. May 13 23:58:42.505968 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. May 13 23:58:42.510909 systemd[1]: Starting systemd-update-done.service - Update is Completed... May 13 23:58:42.532104 systemd[1]: Finished systemd-update-done.service - Update is Completed. May 13 23:58:42.535268 systemd[1]: Reached target sysinit.target - System Initialization. May 13 23:58:42.538381 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. May 13 23:58:42.541491 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. May 13 23:58:42.545200 systemd[1]: Started logrotate.timer - Daily rotation of log files. May 13 23:58:42.548202 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. May 13 23:58:42.551323 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. May 13 23:58:42.555523 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). May 13 23:58:42.555569 systemd[1]: Reached target paths.target - Path Units. May 13 23:58:42.559026 systemd[1]: Reached target timers.target - Timer Units. May 13 23:58:42.572188 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. May 13 23:58:42.576550 systemd[1]: Starting docker.socket - Docker Socket for the API... May 13 23:58:42.582318 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). May 13 23:58:42.585869 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). May 13 23:58:42.589155 systemd[1]: Reached target ssh-access.target - SSH Access Available. May 13 23:58:42.599206 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. May 13 23:58:42.602617 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. May 13 23:58:42.606259 systemd[1]: Listening on docker.socket - Docker Socket for the API. May 13 23:58:42.609016 systemd[1]: Reached target sockets.target - Socket Units. May 13 23:58:42.611480 systemd[1]: Reached target basic.target - Basic System. May 13 23:58:42.613949 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. May 13 23:58:42.613988 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. May 13 23:58:42.616502 systemd[1]: Starting chronyd.service - NTP client/server... May 13 23:58:42.621519 systemd[1]: Starting containerd.service - containerd container runtime... May 13 23:58:42.627646 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... May 13 23:58:42.634628 systemd[1]: Starting dbus.service - D-Bus System Message Bus... May 13 23:58:42.638572 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... 
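The DHCPv4 lease reported at the start of this stretch (10.200.8.49/24, gateway 10.200.8.1, served by Azure's 168.63.129.16) can be sanity-checked with the standard library; a small sketch using the values from the log:

    # Sanity-check the DHCPv4 lease reported by systemd-networkd above.
    import ipaddress

    iface = ipaddress.ip_interface("10.200.8.49/24")
    gateway = ipaddress.ip_address("10.200.8.1")

    print(iface.network)                # 10.200.8.0/24
    print(gateway in iface.network)     # True: the gateway is on-link
    print(iface.network.num_addresses)  # 256 addresses in the /24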
May 13 23:58:42.643638 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... May 13 23:58:42.645953 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). May 13 23:58:42.646012 systemd[1]: hv_fcopy_daemon.service - Hyper-V FCOPY daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_fcopy). May 13 23:58:42.652632 systemd[1]: Started hv_kvp_daemon.service - Hyper-V KVP daemon. May 13 23:58:42.655638 systemd[1]: hv_vss_daemon.service - Hyper-V VSS daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/vmbus/hv_vss). May 13 23:58:42.658501 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:42.663630 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... May 13 23:58:42.672115 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... May 13 23:58:42.677585 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... May 13 23:58:42.684175 KVP[1687]: KVP starting; pid is:1687 May 13 23:58:42.686099 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... May 13 23:58:42.694626 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... May 13 23:58:42.701928 jq[1685]: false May 13 23:58:42.705346 KVP[1687]: KVP LIC Version: 3.1 May 13 23:58:42.705462 kernel: hv_utils: KVP IC version 4.0 May 13 23:58:42.708583 systemd[1]: Starting systemd-logind.service - User Login Management... May 13 23:58:42.713267 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). May 13 23:58:42.713913 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. May 13 23:58:42.715805 systemd[1]: Starting update-engine.service - Update Engine... May 13 23:58:42.723477 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... May 13 23:58:42.735978 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. May 13 23:58:42.736708 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. May 13 23:58:42.750199 systemd-networkd[1610]: enP60651s1: Gained IPv6LL May 13 23:58:42.758320 systemd[1]: motdgen.service: Deactivated successfully. May 13 23:58:42.760461 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. May 13 23:58:42.772772 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. May 13 23:58:42.773791 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. 
May 13 23:58:42.799239 (chronyd)[1681]: chronyd.service: Referenced but unset environment variable evaluates to an empty string: OPTIONS May 13 23:58:42.810535 extend-filesystems[1686]: Found loop4 May 13 23:58:42.810535 extend-filesystems[1686]: Found loop5 May 13 23:58:42.810535 extend-filesystems[1686]: Found loop6 May 13 23:58:42.810535 extend-filesystems[1686]: Found loop7 May 13 23:58:42.810535 extend-filesystems[1686]: Found sda May 13 23:58:42.810535 extend-filesystems[1686]: Found sda1 May 13 23:58:42.810535 extend-filesystems[1686]: Found sda2 May 13 23:58:42.810535 extend-filesystems[1686]: Found sda3 May 13 23:58:42.810535 extend-filesystems[1686]: Found usr May 13 23:58:42.810535 extend-filesystems[1686]: Found sda4 May 13 23:58:42.810535 extend-filesystems[1686]: Found sda6 May 13 23:58:42.810535 extend-filesystems[1686]: Found sda7 May 13 23:58:42.810535 extend-filesystems[1686]: Found sda9 May 13 23:58:42.810535 extend-filesystems[1686]: Checking size of /dev/sda9 May 13 23:58:42.885194 extend-filesystems[1686]: Old size kept for /dev/sda9 May 13 23:58:42.885194 extend-filesystems[1686]: Found sr0 May 13 23:58:42.895814 update_engine[1700]: I20250513 23:58:42.842432 1700 main.cc:92] Flatcar Update Engine starting May 13 23:58:42.846980 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. May 13 23:58:42.829732 chronyd[1728]: chronyd version 4.6.1 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +ASYNCDNS +NTS +SECHASH +IPV6 -DEBUG) May 13 23:58:42.905630 jq[1702]: true May 13 23:58:42.858783 systemd[1]: Started chronyd.service - NTP client/server. May 13 23:58:42.852514 chronyd[1728]: Timezone right/UTC failed leap second check, ignoring May 13 23:58:42.879821 (ntainerd)[1723]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR May 13 23:58:42.852702 chronyd[1728]: Loaded seccomp filter (level 2) May 13 23:58:42.901020 systemd[1]: extend-filesystems.service: Deactivated successfully. May 13 23:58:42.901339 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. May 13 23:58:42.916533 jq[1729]: true May 13 23:58:42.937232 systemd-logind[1697]: New seat seat0. May 13 23:58:42.942220 tar[1710]: linux-amd64/helm May 13 23:58:42.943929 systemd-logind[1697]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) May 13 23:58:42.944886 systemd[1]: Started systemd-logind.service - User Login Management. May 13 23:58:42.949218 dbus-daemon[1684]: [system] SELinux support is enabled May 13 23:58:42.949438 systemd[1]: Started dbus.service - D-Bus System Message Bus. May 13 23:58:42.958970 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). May 13 23:58:42.959003 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. May 13 23:58:42.963005 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). May 13 23:58:42.963024 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. May 13 23:58:42.993675 systemd[1]: Started update-engine.service - Update Engine. 
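The extend-filesystems entries above show the service enumerating block devices, checking /dev/sda9 and keeping its old size, meaning the filesystem already fills its partition. A minimal sketch of that comparison, assuming /dev/sda9 is the partition in question and is mounted at / (illustrative only, not the service's actual implementation):

    import os

    PART = "sda9"        # assumption: partition named in the log above
    MOUNTPOINT = "/"     # assumption: where that partition is mounted

    # Partition size: /sys/class/block/<dev>/size is in 512-byte sectors.
    with open(f"/sys/class/block/{PART}/size") as f:
        part_bytes = int(f.read()) * 512

    # Size of the mounted filesystem as reported by the kernel.
    st = os.statvfs(MOUNTPOINT)
    fs_bytes = st.f_frsize * st.f_blocks

    if fs_bytes < part_bytes:
        print(f"filesystem {fs_bytes} B < partition {part_bytes} B: could be grown")
    else:
        print("old size kept: filesystem already fills the partition")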
May 13 23:58:42.997021 update_engine[1700]: I20250513 23:58:42.996827 1700 update_check_scheduler.cc:74] Next update check in 9m33s May 13 23:58:43.006759 systemd[1]: Started locksmithd.service - Cluster reboot manager. May 13 23:58:43.055028 sshd_keygen[1721]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 May 13 23:58:43.088876 bash[1759]: Updated "/home/core/.ssh/authorized_keys" May 13 23:58:43.091058 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. May 13 23:58:43.098275 systemd[1]: sshkeys.service was skipped because no trigger condition checks were met. May 13 23:58:43.110087 coreos-metadata[1683]: May 13 23:58:43.109 INFO Fetching http://168.63.129.16/?comp=versions: Attempt #1 May 13 23:58:43.115329 coreos-metadata[1683]: May 13 23:58:43.115 INFO Fetch successful May 13 23:58:43.115329 coreos-metadata[1683]: May 13 23:58:43.115 INFO Fetching http://168.63.129.16/machine/?comp=goalstate: Attempt #1 May 13 23:58:43.115464 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. May 13 23:58:43.119820 coreos-metadata[1683]: May 13 23:58:43.119 INFO Fetch successful May 13 23:58:43.119820 coreos-metadata[1683]: May 13 23:58:43.119 INFO Fetching http://168.63.129.16/machine/8f56f320-8596-4f42-8a63-7728253b5eae/2c50dc73%2D2aac%2D4c04%2D9af8%2Dd51c8968cb3f.%5Fci%2D4284.0.0%2Dn%2Db62cb48025?comp=config&type=sharedConfig&incarnation=1: Attempt #1 May 13 23:58:43.126574 coreos-metadata[1683]: May 13 23:58:43.126 INFO Fetch successful May 13 23:58:43.126574 coreos-metadata[1683]: May 13 23:58:43.126 INFO Fetching http://169.254.169.254/metadata/instance/compute/vmSize?api-version=2017-08-01&format=text: Attempt #1 May 13 23:58:43.128883 systemd[1]: Starting issuegen.service - Generate /run/issue... May 13 23:58:43.134699 systemd[1]: Starting waagent.service - Microsoft Azure Linux Agent... May 13 23:58:43.146433 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (1457) May 13 23:58:43.152525 coreos-metadata[1683]: May 13 23:58:43.146 INFO Fetch successful May 13 23:58:43.216901 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. May 13 23:58:43.224274 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. May 13 23:58:43.237807 systemd[1]: issuegen.service: Deactivated successfully. May 13 23:58:43.238039 systemd[1]: Finished issuegen.service - Generate /run/issue. May 13 23:58:43.258435 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... May 13 23:58:43.281566 systemd[1]: Started waagent.service - Microsoft Azure Linux Agent. May 13 23:58:43.346065 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. May 13 23:58:43.366070 systemd[1]: Started getty@tty1.service - Getty on tty1. May 13 23:58:43.374704 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. May 13 23:58:43.385018 systemd[1]: Reached target getty.target - Login Prompts. May 13 23:58:43.402176 locksmithd[1751]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" May 13 23:58:43.824686 tar[1710]: linux-amd64/LICENSE May 13 23:58:43.824847 tar[1710]: linux-amd64/README.md May 13 23:58:43.841350 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
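coreos-metadata above pulls the VM size from the Azure Instance Metadata Service at 169.254.169.254; requests to that endpoint must carry the Metadata: true header or they are rejected. A minimal stdlib sketch of the same fetch (the agent performs this internally; the script is only an illustration):

    import urllib.request

    # Same endpoint as in the log above; IMDS requires the Metadata header.
    url = ("http://169.254.169.254/metadata/instance/compute/vmSize"
           "?api-version=2017-08-01&format=text")
    req = urllib.request.Request(url, headers={"Metadata": "true"})
    with urllib.request.urlopen(req, timeout=5) as resp:
        print(resp.read().decode())   # plain-text VM size; actual value depends on the VM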
May 13 23:58:44.129082 containerd[1723]: time="2025-05-13T23:58:44Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 May 13 23:58:44.130179 containerd[1723]: time="2025-05-13T23:58:44.129772700Z" level=info msg="starting containerd" revision=88aa2f531d6c2922003cc7929e51daf1c14caa0a version=v2.0.1 May 13 23:58:44.139348 containerd[1723]: time="2025-05-13T23:58:44.139306500Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7µs" May 13 23:58:44.139348 containerd[1723]: time="2025-05-13T23:58:44.139336300Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 May 13 23:58:44.139499 containerd[1723]: time="2025-05-13T23:58:44.139359500Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 May 13 23:58:44.139568 containerd[1723]: time="2025-05-13T23:58:44.139536300Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 May 13 23:58:44.139608 containerd[1723]: time="2025-05-13T23:58:44.139564300Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 May 13 23:58:44.139608 containerd[1723]: time="2025-05-13T23:58:44.139599700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:58:44.139702 containerd[1723]: time="2025-05-13T23:58:44.139673500Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 May 13 23:58:44.139702 containerd[1723]: time="2025-05-13T23:58:44.139695200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:58:44.139958 containerd[1723]: time="2025-05-13T23:58:44.139927600Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 May 13 23:58:44.139958 containerd[1723]: time="2025-05-13T23:58:44.139948700Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:58:44.140039 containerd[1723]: time="2025-05-13T23:58:44.139963000Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 May 13 23:58:44.140039 containerd[1723]: time="2025-05-13T23:58:44.139976000Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 May 13 23:58:44.140121 containerd[1723]: time="2025-05-13T23:58:44.140100200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 May 13 23:58:44.140334 containerd[1723]: time="2025-05-13T23:58:44.140302500Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 May 13 23:58:44.140385 containerd[1723]: time="2025-05-13T23:58:44.140355800Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs 
type=io.containerd.snapshotter.v1 May 13 23:58:44.140385 containerd[1723]: time="2025-05-13T23:58:44.140373000Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 May 13 23:58:44.140519 containerd[1723]: time="2025-05-13T23:58:44.140428300Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 May 13 23:58:44.140749 containerd[1723]: time="2025-05-13T23:58:44.140715100Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 May 13 23:58:44.140817 containerd[1723]: time="2025-05-13T23:58:44.140800600Z" level=info msg="metadata content store policy set" policy=shared May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172069500Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172151700Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172173200Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172191500Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172209100Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172224600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172261500Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172281300Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172298900Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172315900Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172330700Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172348100Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172522600Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 May 13 23:58:44.172803 containerd[1723]: time="2025-05-13T23:58:44.172552800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 May 13 23:58:44.173346 containerd[1723]: time="2025-05-13T23:58:44.172575400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 May 13 23:58:44.173346 containerd[1723]: time="2025-05-13T23:58:44.172599700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff 
type=io.containerd.grpc.v1 May 13 23:58:44.173346 containerd[1723]: time="2025-05-13T23:58:44.172617000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 May 13 23:58:44.173346 containerd[1723]: time="2025-05-13T23:58:44.172632400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 May 13 23:58:44.173346 containerd[1723]: time="2025-05-13T23:58:44.172648500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 May 13 23:58:44.173346 containerd[1723]: time="2025-05-13T23:58:44.172663400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 May 13 23:58:44.173346 containerd[1723]: time="2025-05-13T23:58:44.172680500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 May 13 23:58:44.173346 containerd[1723]: time="2025-05-13T23:58:44.172696800Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 May 13 23:58:44.173346 containerd[1723]: time="2025-05-13T23:58:44.172711800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 May 13 23:58:44.173346 containerd[1723]: time="2025-05-13T23:58:44.172841100Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" May 13 23:58:44.173346 containerd[1723]: time="2025-05-13T23:58:44.172862200Z" level=info msg="Start snapshots syncer" May 13 23:58:44.173346 containerd[1723]: time="2025-05-13T23:58:44.172889400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 May 13 23:58:44.173746 containerd[1723]: time="2025-05-13T23:58:44.173225700Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" May 13 23:58:44.173746 containerd[1723]: time="2025-05-13T23:58:44.173294900Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173376000Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173520900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173552800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173569500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173587700Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173608500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173624200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173639800Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173672100Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 May 13 23:58:44.173924 containerd[1723]: 
time="2025-05-13T23:58:44.173698000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173714200Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173753500Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173776800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 May 13 23:58:44.173924 containerd[1723]: time="2025-05-13T23:58:44.173788800Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:58:44.174353 containerd[1723]: time="2025-05-13T23:58:44.173802500Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 May 13 23:58:44.174353 containerd[1723]: time="2025-05-13T23:58:44.173816100Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 May 13 23:58:44.174353 containerd[1723]: time="2025-05-13T23:58:44.173830700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 May 13 23:58:44.174353 containerd[1723]: time="2025-05-13T23:58:44.173845200Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 May 13 23:58:44.174353 containerd[1723]: time="2025-05-13T23:58:44.173867000Z" level=info msg="runtime interface created" May 13 23:58:44.174353 containerd[1723]: time="2025-05-13T23:58:44.173874400Z" level=info msg="created NRI interface" May 13 23:58:44.174353 containerd[1723]: time="2025-05-13T23:58:44.173893600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 May 13 23:58:44.174353 containerd[1723]: time="2025-05-13T23:58:44.173912500Z" level=info msg="Connect containerd service" May 13 23:58:44.174353 containerd[1723]: time="2025-05-13T23:58:44.173950100Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" May 13 23:58:44.175653 containerd[1723]: time="2025-05-13T23:58:44.174985000Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" May 13 23:58:44.420879 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 13 23:58:44.435927 (kubelet)[1859]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:58:45.033600 kubelet[1859]: E0513 23:58:45.033519 1859 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:58:45.036356 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:58:45.036573 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:58:45.036987 systemd[1]: kubelet.service: Consumed 942ms CPU time, 245.2M memory peak. May 13 23:58:45.572248 containerd[1723]: time="2025-05-13T23:58:45.572052800Z" level=info msg="Start subscribing containerd event" May 13 23:58:45.572248 containerd[1723]: time="2025-05-13T23:58:45.572111400Z" level=info msg="Start recovering state" May 13 23:58:45.572716 containerd[1723]: time="2025-05-13T23:58:45.572269800Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc May 13 23:58:45.572716 containerd[1723]: time="2025-05-13T23:58:45.572334200Z" level=info msg="Start event monitor" May 13 23:58:45.572716 containerd[1723]: time="2025-05-13T23:58:45.572379100Z" level=info msg="Start cni network conf syncer for default" May 13 23:58:45.572716 containerd[1723]: time="2025-05-13T23:58:45.572417100Z" level=info msg="Start streaming server" May 13 23:58:45.572716 containerd[1723]: time="2025-05-13T23:58:45.572445200Z" level=info msg="Registered namespace \"k8s.io\" with NRI" May 13 23:58:45.572716 containerd[1723]: time="2025-05-13T23:58:45.572455300Z" level=info msg="runtime interface starting up..." May 13 23:58:45.572716 containerd[1723]: time="2025-05-13T23:58:45.572465000Z" level=info msg="starting plugins..." May 13 23:58:45.572716 containerd[1723]: time="2025-05-13T23:58:45.572488300Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" May 13 23:58:45.572983 containerd[1723]: time="2025-05-13T23:58:45.572759900Z" level=info msg=serving... address=/run/containerd/containerd.sock May 13 23:58:45.572983 containerd[1723]: time="2025-05-13T23:58:45.572841100Z" level=info msg="containerd successfully booted in 1.445073s" May 13 23:58:45.573076 systemd[1]: Started containerd.service - containerd container runtime. May 13 23:58:45.576181 systemd[1]: Reached target multi-user.target - Multi-User System. May 13 23:58:45.579398 systemd[1]: Startup finished in 594ms (firmware) + 56.111s (loader) + 927ms (kernel) + 25.123s (initrd) + 15.406s (userspace) = 1min 38.163s. May 13 23:58:46.009925 login[1842]: pam_lastlog(login:session): file /var/log/lastlog is locked/write, retrying May 13 23:58:46.010874 login[1841]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 23:58:46.030108 systemd[1]: Created slice user-500.slice - User Slice of UID 500. May 13 23:58:46.030158 systemd-logind[1697]: New session 1 of user core. May 13 23:58:46.035685 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... May 13 23:58:46.063583 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. May 13 23:58:46.067518 systemd[1]: Starting user@500.service - User Manager for UID 500... 
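The kubelet exit above is the usual pre-bootstrap failure: /var/lib/kubelet/config.yaml does not exist yet, so the process aborts and systemd keeps restarting the unit (the later "Scheduled restart job" entries repeat the same error). The file is normally written by kubeadm during init or join, or by whatever bootstrap tooling provisions the node. A small probe that reproduces the check the kubelet is failing on:

    import pathlib

    cfg = pathlib.Path("/var/lib/kubelet/config.yaml")
    if cfg.is_file():
        print(f"{cfg} present ({cfg.stat().st_size} bytes); kubelet can load its config")
    else:
        # Matches the "no such file or directory" error in the log; expected
        # until bootstrap tooling writes the file.
        print(f"{cfg} missing; kubelet.service will keep exiting with status 1")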
May 13 23:58:46.084275 (systemd)[1886]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) May 13 23:58:46.087736 systemd-logind[1697]: New session c1 of user core. May 13 23:58:46.147721 waagent[1825]: 2025-05-13T23:58:46.147639Z INFO Daemon Daemon Azure Linux Agent Version: 2.12.0.4 May 13 23:58:46.181434 waagent[1825]: 2025-05-13T23:58:46.148063Z INFO Daemon Daemon OS: flatcar 4284.0.0 May 13 23:58:46.181434 waagent[1825]: 2025-05-13T23:58:46.148902Z INFO Daemon Daemon Python: 3.11.11 May 13 23:58:46.181434 waagent[1825]: 2025-05-13T23:58:46.149910Z INFO Daemon Daemon Run daemon May 13 23:58:46.181434 waagent[1825]: 2025-05-13T23:58:46.150630Z INFO Daemon Daemon No RDMA handler exists for distro='Flatcar Container Linux by Kinvolk' version='4284.0.0' May 13 23:58:46.181434 waagent[1825]: 2025-05-13T23:58:46.151368Z INFO Daemon Daemon Using waagent for provisioning May 13 23:58:46.181434 waagent[1825]: 2025-05-13T23:58:46.151913Z INFO Daemon Daemon Activate resource disk May 13 23:58:46.181434 waagent[1825]: 2025-05-13T23:58:46.152605Z INFO Daemon Daemon Searching gen1 prefix 00000000-0001 or gen2 f8b3781a-1e82-4818-a1c3-63d806ec15bb May 13 23:58:46.181434 waagent[1825]: 2025-05-13T23:58:46.157001Z INFO Daemon Daemon Found device: None May 13 23:58:46.181434 waagent[1825]: 2025-05-13T23:58:46.157911Z ERROR Daemon Daemon Failed to mount resource disk [ResourceDiskError] unable to detect disk topology May 13 23:58:46.181434 waagent[1825]: 2025-05-13T23:58:46.158756Z ERROR Daemon Daemon Event: name=WALinuxAgent, op=ActivateResourceDisk, message=[ResourceDiskError] unable to detect disk topology, duration=0 May 13 23:58:46.181434 waagent[1825]: 2025-05-13T23:58:46.160019Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 13 23:58:46.181434 waagent[1825]: 2025-05-13T23:58:46.160795Z INFO Daemon Daemon Running default provisioning handler May 13 23:58:46.185014 waagent[1825]: 2025-05-13T23:58:46.184937Z INFO Daemon Daemon Unable to get cloud-init enabled status from systemctl: Command '['systemctl', 'is-enabled', 'cloud-init-local.service']' returned non-zero exit status 4. May 13 23:58:46.191174 waagent[1825]: 2025-05-13T23:58:46.191121Z INFO Daemon Daemon Unable to get cloud-init enabled status from service: [Errno 2] No such file or directory: 'service' May 13 23:58:46.199174 waagent[1825]: 2025-05-13T23:58:46.191328Z INFO Daemon Daemon cloud-init is enabled: False May 13 23:58:46.199174 waagent[1825]: 2025-05-13T23:58:46.192141Z INFO Daemon Daemon Copying ovf-env.xml May 13 23:58:46.371205 waagent[1825]: 2025-05-13T23:58:46.368625Z INFO Daemon Daemon Successfully mounted dvd May 13 23:58:46.389381 systemd[1]: mnt-cdrom-secure.mount: Deactivated successfully. May 13 23:58:46.394008 waagent[1825]: 2025-05-13T23:58:46.390890Z INFO Daemon Daemon Detect protocol endpoint May 13 23:58:46.394335 waagent[1825]: 2025-05-13T23:58:46.394292Z INFO Daemon Daemon Clean protocol and wireserver endpoint May 13 23:58:46.398010 waagent[1825]: 2025-05-13T23:58:46.397969Z INFO Daemon Daemon WireServer endpoint is not found. 
Rerun dhcp handler May 13 23:58:46.401806 waagent[1825]: 2025-05-13T23:58:46.401768Z INFO Daemon Daemon Test for route to 168.63.129.16 May 13 23:58:46.404444 waagent[1825]: 2025-05-13T23:58:46.404396Z INFO Daemon Daemon Route to 168.63.129.16 exists May 13 23:58:46.406893 waagent[1825]: 2025-05-13T23:58:46.406857Z INFO Daemon Daemon Wire server endpoint:168.63.129.16 May 13 23:58:46.424728 waagent[1825]: 2025-05-13T23:58:46.424684Z INFO Daemon Daemon Fabric preferred wire protocol version:2015-04-05 May 13 23:58:46.433599 waagent[1825]: 2025-05-13T23:58:46.428153Z INFO Daemon Daemon Wire protocol version:2012-11-30 May 13 23:58:46.433599 waagent[1825]: 2025-05-13T23:58:46.430882Z INFO Daemon Daemon Server preferred version:2015-04-05 May 13 23:58:46.482320 systemd[1886]: Queued start job for default target default.target. May 13 23:58:46.491510 systemd[1886]: Created slice app.slice - User Application Slice. May 13 23:58:46.491543 systemd[1886]: Reached target paths.target - Paths. May 13 23:58:46.491596 systemd[1886]: Reached target timers.target - Timers. May 13 23:58:46.492878 systemd[1886]: Starting dbus.socket - D-Bus User Message Bus Socket... May 13 23:58:46.508957 systemd[1886]: Listening on dbus.socket - D-Bus User Message Bus Socket. May 13 23:58:46.509089 systemd[1886]: Reached target sockets.target - Sockets. May 13 23:58:46.509322 systemd[1886]: Reached target basic.target - Basic System. May 13 23:58:46.509381 systemd[1886]: Reached target default.target - Main User Target. May 13 23:58:46.509436 systemd[1886]: Startup finished in 411ms. May 13 23:58:46.509545 systemd[1]: Started user@500.service - User Manager for UID 500. May 13 23:58:46.518971 systemd[1]: Started session-1.scope - Session 1 of User core. May 13 23:58:46.625139 waagent[1825]: 2025-05-13T23:58:46.625008Z INFO Daemon Daemon Initializing goal state during protocol detection May 13 23:58:46.628341 waagent[1825]: 2025-05-13T23:58:46.628281Z INFO Daemon Daemon Forcing an update of the goal state. May 13 23:58:46.634482 waagent[1825]: 2025-05-13T23:58:46.634431Z INFO Daemon Fetched a new incarnation for the WireServer goal state [incarnation 1] May 13 23:58:46.665068 waagent[1825]: 2025-05-13T23:58:46.665013Z INFO Daemon Daemon HostGAPlugin version: 1.0.8.164 May 13 23:58:46.680005 waagent[1825]: 2025-05-13T23:58:46.665762Z INFO Daemon May 13 23:58:46.680005 waagent[1825]: 2025-05-13T23:58:46.666779Z INFO Daemon Fetched new vmSettings [HostGAPlugin correlation ID: 8b256525-1ad4-4ef0-ae56-b345c4367077 eTag: 2004972469726113866 source: Fabric] May 13 23:58:46.680005 waagent[1825]: 2025-05-13T23:58:46.667863Z INFO Daemon The vmSettings originated via Fabric; will ignore them. 
May 13 23:58:46.680005 waagent[1825]: 2025-05-13T23:58:46.668657Z INFO Daemon May 13 23:58:46.680005 waagent[1825]: 2025-05-13T23:58:46.669296Z INFO Daemon Fetching full goal state from the WireServer [incarnation 1] May 13 23:58:46.682983 waagent[1825]: 2025-05-13T23:58:46.682939Z INFO Daemon Daemon Downloading artifacts profile blob May 13 23:58:46.753034 waagent[1825]: 2025-05-13T23:58:46.752958Z INFO Daemon Downloaded certificate {'thumbprint': 'D2D171463C765F5242AADDF12069BCFD48CCC7EE', 'hasPrivateKey': False} May 13 23:58:46.757867 waagent[1825]: 2025-05-13T23:58:46.757815Z INFO Daemon Downloaded certificate {'thumbprint': 'A37CD1314A4C07D94C62FB3A67A3C34C07AB9C09', 'hasPrivateKey': True} May 13 23:58:46.762621 waagent[1825]: 2025-05-13T23:58:46.762574Z INFO Daemon Fetch goal state completed May 13 23:58:46.771419 waagent[1825]: 2025-05-13T23:58:46.771370Z INFO Daemon Daemon Starting provisioning May 13 23:58:46.778515 waagent[1825]: 2025-05-13T23:58:46.771642Z INFO Daemon Daemon Handle ovf-env.xml. May 13 23:58:46.778515 waagent[1825]: 2025-05-13T23:58:46.772283Z INFO Daemon Daemon Set hostname [ci-4284.0.0-n-b62cb48025] May 13 23:58:46.803471 waagent[1825]: 2025-05-13T23:58:46.803362Z INFO Daemon Daemon Publish hostname [ci-4284.0.0-n-b62cb48025] May 13 23:58:46.810939 waagent[1825]: 2025-05-13T23:58:46.803938Z INFO Daemon Daemon Examine /proc/net/route for primary interface May 13 23:58:46.810939 waagent[1825]: 2025-05-13T23:58:46.804923Z INFO Daemon Daemon Primary interface is [eth0] May 13 23:58:46.814789 systemd-networkd[1610]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. May 13 23:58:46.814798 systemd-networkd[1610]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. May 13 23:58:46.814844 systemd-networkd[1610]: eth0: DHCP lease lost May 13 23:58:46.816050 waagent[1825]: 2025-05-13T23:58:46.815968Z INFO Daemon Daemon Create user account if not exists May 13 23:58:46.831201 waagent[1825]: 2025-05-13T23:58:46.816280Z INFO Daemon Daemon User core already exists, skip useradd May 13 23:58:46.831201 waagent[1825]: 2025-05-13T23:58:46.817207Z INFO Daemon Daemon Configure sudoer May 13 23:58:46.831201 waagent[1825]: 2025-05-13T23:58:46.818262Z INFO Daemon Daemon Configure sshd May 13 23:58:46.831201 waagent[1825]: 2025-05-13T23:58:46.819041Z INFO Daemon Daemon Added a configuration snippet disabling SSH password-based authentication methods. It also configures SSH client probing to keep connections alive. May 13 23:58:46.831201 waagent[1825]: 2025-05-13T23:58:46.820106Z INFO Daemon Daemon Deploy ssh public key. May 13 23:58:46.866471 systemd-networkd[1610]: eth0: DHCPv4 address 10.200.8.49/24, gateway 10.200.8.1 acquired from 168.63.129.16 May 13 23:58:47.012133 login[1842]: pam_unix(login:session): session opened for user core(uid=500) by LOGIN(uid=0) May 13 23:58:47.016551 systemd-logind[1697]: New session 2 of user core. May 13 23:58:47.026573 systemd[1]: Started session-2.scope - Session 2 of User core. May 13 23:58:48.004724 waagent[1825]: 2025-05-13T23:58:48.004669Z INFO Daemon Daemon Provisioning complete May 13 23:58:48.015394 waagent[1825]: 2025-05-13T23:58:48.015340Z INFO Daemon Daemon RDMA capabilities are not enabled, skipping May 13 23:58:48.021897 waagent[1825]: 2025-05-13T23:58:48.015637Z INFO Daemon Daemon End of log to /dev/console. The agent will now check for updates and then will process extensions. 
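The waagent entries above note that a configuration snippet was added to disable SSH password-based authentication and to keep client connections alive, but the snippet itself is not reproduced in the log. As an assumption about what such a drop-in typically contains (standard sshd_config directives; the real file name and values written by waagent may differ):

    import pathlib

    # Hypothetical drop-in; name and contents are illustrative, not taken from the log.
    dropin = pathlib.Path("/etc/ssh/sshd_config.d/40-example-waagent.conf")
    dropin.parent.mkdir(parents=True, exist_ok=True)
    dropin.write_text(
        "PasswordAuthentication no\n"
        "ChallengeResponseAuthentication no\n"
        "ClientAliveInterval 180\n"
    )
    print(f"wrote {dropin}; reload sshd for it to take effect")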
May 13 23:58:48.021897 waagent[1825]: 2025-05-13T23:58:48.016484Z INFO Daemon Daemon Installed Agent WALinuxAgent-2.12.0.4 is the most current agent May 13 23:58:48.144859 waagent[1938]: 2025-05-13T23:58:48.144766Z INFO ExtHandler ExtHandler Azure Linux Agent (Goal State Agent version 2.12.0.4) May 13 23:58:48.145242 waagent[1938]: 2025-05-13T23:58:48.144922Z INFO ExtHandler ExtHandler OS: flatcar 4284.0.0 May 13 23:58:48.145242 waagent[1938]: 2025-05-13T23:58:48.144991Z INFO ExtHandler ExtHandler Python: 3.11.11 May 13 23:58:48.145242 waagent[1938]: 2025-05-13T23:58:48.145064Z INFO ExtHandler ExtHandler CPU Arch: x86_64 May 13 23:58:52.274832 waagent[1938]: 2025-05-13T23:58:52.274743Z INFO ExtHandler ExtHandler Distro: flatcar-4284.0.0; OSUtil: FlatcarUtil; AgentService: waagent; Python: 3.11.11; Arch: x86_64; systemd: True; LISDrivers: Absent; logrotate: logrotate 3.20.1; May 13 23:58:52.275429 waagent[1938]: 2025-05-13T23:58:52.275074Z INFO ExtHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 13 23:58:52.275429 waagent[1938]: 2025-05-13T23:58:52.275200Z INFO ExtHandler ExtHandler Wire server endpoint:168.63.129.16 May 13 23:58:52.282172 waagent[1938]: 2025-05-13T23:58:52.282112Z INFO ExtHandler Fetched a new incarnation for the WireServer goal state [incarnation 1] May 13 23:58:52.290660 waagent[1938]: 2025-05-13T23:58:52.290617Z INFO ExtHandler ExtHandler HostGAPlugin version: 1.0.8.164 May 13 23:58:52.291107 waagent[1938]: 2025-05-13T23:58:52.291058Z INFO ExtHandler May 13 23:58:52.291207 waagent[1938]: 2025-05-13T23:58:52.291143Z INFO ExtHandler Fetched new vmSettings [HostGAPlugin correlation ID: bb0ccab6-a009-4013-be63-f68702cdb567 eTag: 2004972469726113866 source: Fabric] May 13 23:58:52.291504 waagent[1938]: 2025-05-13T23:58:52.291460Z INFO ExtHandler The vmSettings originated via Fabric; will ignore them. 
May 13 23:58:52.292001 waagent[1938]: 2025-05-13T23:58:52.291954Z INFO ExtHandler May 13 23:58:52.292063 waagent[1938]: 2025-05-13T23:58:52.292029Z INFO ExtHandler Fetching full goal state from the WireServer [incarnation 1] May 13 23:58:52.295316 waagent[1938]: 2025-05-13T23:58:52.295279Z INFO ExtHandler ExtHandler Downloading artifacts profile blob May 13 23:58:52.451718 waagent[1938]: 2025-05-13T23:58:52.451643Z INFO ExtHandler Downloaded certificate {'thumbprint': 'D2D171463C765F5242AADDF12069BCFD48CCC7EE', 'hasPrivateKey': False} May 13 23:58:52.452118 waagent[1938]: 2025-05-13T23:58:52.452071Z INFO ExtHandler Downloaded certificate {'thumbprint': 'A37CD1314A4C07D94C62FB3A67A3C34C07AB9C09', 'hasPrivateKey': True} May 13 23:58:52.452551 waagent[1938]: 2025-05-13T23:58:52.452510Z INFO ExtHandler Fetch goal state completed May 13 23:58:52.468066 waagent[1938]: 2025-05-13T23:58:52.468014Z INFO ExtHandler ExtHandler OpenSSL version: OpenSSL 3.3.3 11 Feb 2025 (Library: OpenSSL 3.3.3 11 Feb 2025) May 13 23:58:52.472743 waagent[1938]: 2025-05-13T23:58:52.472694Z INFO ExtHandler ExtHandler WALinuxAgent-2.12.0.4 running as process 1938 May 13 23:58:52.472878 waagent[1938]: 2025-05-13T23:58:52.472843Z INFO ExtHandler ExtHandler ******** AutoUpdate.Enabled is set to False, not processing the operation ******** May 13 23:58:52.473191 waagent[1938]: 2025-05-13T23:58:52.473151Z INFO ExtHandler ExtHandler ******** AutoUpdate.UpdateToLatestVersion is set to False, not processing the operation ******** May 13 23:58:52.474596 waagent[1938]: 2025-05-13T23:58:52.474554Z INFO ExtHandler ExtHandler [CGI] Cgroup monitoring is not supported on ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] May 13 23:58:52.475020 waagent[1938]: 2025-05-13T23:58:52.474979Z INFO ExtHandler ExtHandler [CGI] Agent will reset the quotas in case distro: ['flatcar', '4284.0.0', '', 'Flatcar Container Linux by Kinvolk'] went from supported to unsupported May 13 23:58:52.475173 waagent[1938]: 2025-05-13T23:58:52.475139Z INFO ExtHandler ExtHandler [CGI] Agent cgroups enabled: False May 13 23:58:52.475751 waagent[1938]: 2025-05-13T23:58:52.475712Z INFO ExtHandler ExtHandler Starting setup for Persistent firewall rules May 13 23:58:52.721562 waagent[1938]: 2025-05-13T23:58:52.721451Z INFO ExtHandler ExtHandler Firewalld service not running/unavailable, trying to set up waagent-network-setup.service May 13 23:58:52.721715 waagent[1938]: 2025-05-13T23:58:52.721672Z INFO ExtHandler ExtHandler Successfully updated the Binary file /var/lib/waagent/waagent-network-setup.py for firewall setup May 13 23:58:52.728500 waagent[1938]: 2025-05-13T23:58:52.728236Z INFO ExtHandler ExtHandler Service: waagent-network-setup.service not enabled. Adding it now May 13 23:58:52.735113 systemd[1]: Reload requested from client PID 1955 ('systemctl') (unit waagent.service)... May 13 23:58:52.735131 systemd[1]: Reloading... May 13 23:58:52.829454 zram_generator::config[1990]: No configuration found. May 13 23:58:52.961359 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:58:53.072877 systemd[1]: Reloading finished in 337 ms. 
May 13 23:58:53.092433 waagent[1938]: 2025-05-13T23:58:53.090447Z INFO ExtHandler ExtHandler Successfully added and enabled the waagent-network-setup.service May 13 23:58:53.092433 waagent[1938]: 2025-05-13T23:58:53.090619Z INFO ExtHandler ExtHandler Persistent firewall rules setup successfully May 13 23:58:53.452769 waagent[1938]: 2025-05-13T23:58:53.452641Z INFO ExtHandler ExtHandler DROP rule is not available which implies no firewall rules are set yet. Environment thread will set it up. May 13 23:58:53.453143 waagent[1938]: 2025-05-13T23:58:53.453058Z INFO ExtHandler ExtHandler Checking if log collection is allowed at this time [False]. All three conditions must be met: 1. configuration enabled [True], 2. cgroups v1 enabled [False] OR cgroups v2 is in use and v2 resource limiting configuration enabled [False], 3. python supported: [True] May 13 23:58:53.453841 waagent[1938]: 2025-05-13T23:58:53.453785Z INFO ExtHandler ExtHandler Starting env monitor service. May 13 23:58:53.454236 waagent[1938]: 2025-05-13T23:58:53.454188Z INFO ExtHandler ExtHandler Start SendTelemetryHandler service. May 13 23:58:53.454352 waagent[1938]: 2025-05-13T23:58:53.454312Z INFO EnvHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 13 23:58:53.454469 waagent[1938]: 2025-05-13T23:58:53.454388Z INFO MonitorHandler ExtHandler WireServer endpoint 168.63.129.16 read from file May 13 23:58:53.454597 waagent[1938]: 2025-05-13T23:58:53.454544Z INFO MonitorHandler ExtHandler Wire server endpoint:168.63.129.16 May 13 23:58:53.455257 waagent[1938]: 2025-05-13T23:58:53.455210Z INFO SendTelemetryHandler ExtHandler Successfully started the SendTelemetryHandler thread May 13 23:58:53.455502 waagent[1938]: 2025-05-13T23:58:53.455465Z INFO MonitorHandler ExtHandler Monitor.NetworkConfigurationChanges is disabled. May 13 23:58:53.455806 waagent[1938]: 2025-05-13T23:58:53.455765Z INFO EnvHandler ExtHandler Wire server endpoint:168.63.129.16 May 13 23:58:53.456068 waagent[1938]: 2025-05-13T23:58:53.456028Z INFO ExtHandler ExtHandler Start Extension Telemetry service. May 13 23:58:53.456262 waagent[1938]: 2025-05-13T23:58:53.456221Z INFO MonitorHandler ExtHandler Routing table from /proc/net/route: May 13 23:58:53.456262 waagent[1938]: Iface Destination Gateway Flags RefCnt Use Metric Mask MTU Window IRTT May 13 23:58:53.456262 waagent[1938]: eth0 00000000 0108C80A 0003 0 0 1024 00000000 0 0 0 May 13 23:58:53.456262 waagent[1938]: eth0 0008C80A 00000000 0001 0 0 1024 00FFFFFF 0 0 0 May 13 23:58:53.456262 waagent[1938]: eth0 0108C80A 00000000 0005 0 0 1024 FFFFFFFF 0 0 0 May 13 23:58:53.456262 waagent[1938]: eth0 10813FA8 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 13 23:58:53.456262 waagent[1938]: eth0 FEA9FEA9 0108C80A 0007 0 0 1024 FFFFFFFF 0 0 0 May 13 23:58:53.456578 waagent[1938]: 2025-05-13T23:58:53.456379Z INFO EnvHandler ExtHandler Configure routes May 13 23:58:53.456732 waagent[1938]: 2025-05-13T23:58:53.456689Z INFO TelemetryEventsCollector ExtHandler Extension Telemetry pipeline enabled: True May 13 23:58:53.456877 waagent[1938]: 2025-05-13T23:58:53.456844Z INFO ExtHandler ExtHandler Goal State Period: 6 sec. This indicates how often the agent checks for new goal states and reports status. 
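The MonitorHandler routing table above is a verbatim copy of /proc/net/route, where Destination, Gateway and Mask are little-endian hexadecimal IPv4 values: 0108C80A decodes to the 10.200.8.1 gateway, 10813FA8 to the 168.63.129.16 wire server and FEA9FEA9 to 169.254.169.254. A short decoder for reading such dumps:

    import socket
    import struct

    def hex_to_ip(field: str) -> str:
        """Convert a little-endian hex field from /proc/net/route to a dotted quad."""
        return socket.inet_ntoa(struct.pack("<I", int(field, 16)))

    with open("/proc/net/route") as f:
        next(f)                       # skip the header line
        for line in f:
            cols = line.split()       # Iface Dest Gateway Flags RefCnt Use Metric Mask ...
            iface, dest, gw, mask = cols[0], cols[1], cols[2], cols[7]
            print(f"{iface:6} dst={hex_to_ip(dest):15} gw={hex_to_ip(gw):15} mask={hex_to_ip(mask)}")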
May 13 23:58:53.456979 waagent[1938]: 2025-05-13T23:58:53.456940Z INFO TelemetryEventsCollector ExtHandler Successfully started the TelemetryEventsCollector thread May 13 23:58:53.457201 waagent[1938]: 2025-05-13T23:58:53.457158Z INFO EnvHandler ExtHandler Gateway:None May 13 23:58:53.458237 waagent[1938]: 2025-05-13T23:58:53.458197Z INFO EnvHandler ExtHandler Routes:None May 13 23:58:53.464109 waagent[1938]: 2025-05-13T23:58:53.464064Z INFO ExtHandler ExtHandler May 13 23:58:53.464646 waagent[1938]: 2025-05-13T23:58:53.464606Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState started [incarnation_1 channel: WireServer source: Fabric activity: 211de630-3ab6-4ad3-973a-7389e458a7a8 correlation 7b7e2aa5-2c08-4fbd-8676-7e475cd54200 created: 2025-05-13T23:56:56.231575Z] May 13 23:58:53.466744 waagent[1938]: 2025-05-13T23:58:53.466462Z INFO ExtHandler ExtHandler No extension handlers found, not processing anything. May 13 23:58:53.468437 waagent[1938]: 2025-05-13T23:58:53.467709Z INFO ExtHandler ExtHandler ProcessExtensionsGoalState completed [incarnation_1 3 ms] May 13 23:58:53.483038 waagent[1938]: 2025-05-13T23:58:53.482985Z INFO MonitorHandler ExtHandler Network interfaces: May 13 23:58:53.483038 waagent[1938]: Executing ['ip', '-a', '-o', 'link']: May 13 23:58:53.483038 waagent[1938]: 1: lo: mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000\ link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 May 13 23:58:53.483038 waagent[1938]: 2: eth0: mtu 1500 qdisc mq state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:40:41:92 brd ff:ff:ff:ff:ff:ff May 13 23:58:53.483038 waagent[1938]: 3: enP60651s1: mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000\ link/ether 7c:ed:8d:40:41:92 brd ff:ff:ff:ff:ff:ff\ altname enP60651p0s2 May 13 23:58:53.483038 waagent[1938]: Executing ['ip', '-4', '-a', '-o', 'address']: May 13 23:58:53.483038 waagent[1938]: 1: lo inet 127.0.0.1/8 scope host lo\ valid_lft forever preferred_lft forever May 13 23:58:53.483038 waagent[1938]: 2: eth0 inet 10.200.8.49/24 metric 1024 brd 10.200.8.255 scope global eth0\ valid_lft forever preferred_lft forever May 13 23:58:53.483038 waagent[1938]: Executing ['ip', '-6', '-a', '-o', 'address']: May 13 23:58:53.483038 waagent[1938]: 1: lo inet6 ::1/128 scope host noprefixroute \ valid_lft forever preferred_lft forever May 13 23:58:53.483038 waagent[1938]: 2: eth0 inet6 fe80::7eed:8dff:fe40:4192/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 13 23:58:53.483038 waagent[1938]: 3: enP60651s1 inet6 fe80::7eed:8dff:fe40:4192/64 scope link proto kernel_ll \ valid_lft forever preferred_lft forever May 13 23:58:53.499452 waagent[1938]: 2025-05-13T23:58:53.499386Z INFO ExtHandler ExtHandler [HEARTBEAT] Agent WALinuxAgent-2.12.0.4 is running as the goal state agent [DEBUG HeartbeatCounter: 0;HeartbeatId: DA9189D6-0E51-4E0A-85EF-B8F647658233;DroppedPackets: 0;UpdateGSErrors: 0;AutoUpdate: 0;UpdateMode: SelfUpdate;] May 13 23:58:53.787735 waagent[1938]: 2025-05-13T23:58:53.787668Z INFO EnvHandler ExtHandler Created firewall rules for the Azure Fabric: May 13 23:58:53.787735 waagent[1938]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 13 23:58:53.787735 waagent[1938]: pkts bytes target prot opt in out source destination May 13 23:58:53.787735 waagent[1938]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 13 23:58:53.787735 waagent[1938]: pkts bytes target prot opt in out source destination May 13 23:58:53.787735 waagent[1938]: Chain OUTPUT (policy 
ACCEPT 0 packets, 0 bytes) May 13 23:58:53.787735 waagent[1938]: pkts bytes target prot opt in out source destination May 13 23:58:53.787735 waagent[1938]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 13 23:58:53.787735 waagent[1938]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 13 23:58:53.787735 waagent[1938]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 13 23:58:53.790968 waagent[1938]: 2025-05-13T23:58:53.790919Z INFO EnvHandler ExtHandler Current Firewall rules: May 13 23:58:53.790968 waagent[1938]: Chain INPUT (policy ACCEPT 0 packets, 0 bytes) May 13 23:58:53.790968 waagent[1938]: pkts bytes target prot opt in out source destination May 13 23:58:53.790968 waagent[1938]: Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) May 13 23:58:53.790968 waagent[1938]: pkts bytes target prot opt in out source destination May 13 23:58:53.790968 waagent[1938]: Chain OUTPUT (policy ACCEPT 0 packets, 0 bytes) May 13 23:58:53.790968 waagent[1938]: pkts bytes target prot opt in out source destination May 13 23:58:53.790968 waagent[1938]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 tcp dpt:53 May 13 23:58:53.790968 waagent[1938]: 0 0 ACCEPT tcp -- * * 0.0.0.0/0 168.63.129.16 owner UID match 0 May 13 23:58:53.790968 waagent[1938]: 0 0 DROP tcp -- * * 0.0.0.0/0 168.63.129.16 ctstate INVALID,NEW May 13 23:58:53.791343 waagent[1938]: 2025-05-13T23:58:53.791202Z INFO EnvHandler ExtHandler Set block dev timeout: sda with timeout: 300 May 13 23:58:55.259623 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. May 13 23:58:55.261740 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:58:55.387574 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:58:55.400864 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:58:55.987560 kubelet[2093]: E0513 23:58:55.987502 2093 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:58:55.991517 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:58:55.991711 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:58:55.992115 systemd[1]: kubelet.service: Consumed 155ms CPU time, 96.9M memory peak. May 13 23:59:06.009734 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. May 13 23:59:06.011767 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:06.126038 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
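The EnvHandler firewall listings above show the three OUTPUT rules waagent maintains for the Azure fabric endpoint 168.63.129.16: allow TCP port 53, allow traffic from UID 0 (the agent itself), and drop any other new or invalid connection. Roughly equivalent iptables invocations are shown below purely as a reading aid; waagent installs and repairs these rules itself, the exact table and chain placement can differ, and the DROP rule must stay last:

    import subprocess

    WIRESERVER = "168.63.129.16"

    # Approximate equivalents of the rules listed above; ordering matters.
    rules = [
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp", "--dport", "53", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "owner", "--uid-owner", "0", "-j", "ACCEPT"],
        ["-A", "OUTPUT", "-d", WIRESERVER, "-p", "tcp",
         "-m", "conntrack", "--ctstate", "INVALID,NEW", "-j", "DROP"],
    ]
    for rule in rules:
        subprocess.run(["iptables", "-w", *rule], check=True)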
May 13 23:59:06.135741 (kubelet)[2109]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:59:06.642537 chronyd[1728]: Selected source PHC0 May 13 23:59:06.776864 kubelet[2109]: E0513 23:59:06.776806 2109 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:59:06.779286 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:59:06.779583 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:59:06.779967 systemd[1]: kubelet.service: Consumed 144ms CPU time, 97.7M memory peak. May 13 23:59:13.374195 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. May 13 23:59:13.375601 systemd[1]: Started sshd@0-10.200.8.49:22-10.200.16.10:44652.service - OpenSSH per-connection server daemon (10.200.16.10:44652). May 13 23:59:14.036298 sshd[2118]: Accepted publickey for core from 10.200.16.10 port 44652 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:59:14.037907 sshd-session[2118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:14.042502 systemd-logind[1697]: New session 3 of user core. May 13 23:59:14.049567 systemd[1]: Started session-3.scope - Session 3 of User core. May 13 23:59:14.585181 systemd[1]: Started sshd@1-10.200.8.49:22-10.200.16.10:44662.service - OpenSSH per-connection server daemon (10.200.16.10:44662). May 13 23:59:15.215981 sshd[2123]: Accepted publickey for core from 10.200.16.10 port 44662 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:59:15.217641 sshd-session[2123]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:15.222619 systemd-logind[1697]: New session 4 of user core. May 13 23:59:15.229569 systemd[1]: Started session-4.scope - Session 4 of User core. May 13 23:59:15.659361 sshd[2125]: Connection closed by 10.200.16.10 port 44662 May 13 23:59:15.660276 sshd-session[2123]: pam_unix(sshd:session): session closed for user core May 13 23:59:15.663578 systemd[1]: sshd@1-10.200.8.49:22-10.200.16.10:44662.service: Deactivated successfully. May 13 23:59:15.665638 systemd[1]: session-4.scope: Deactivated successfully. May 13 23:59:15.667105 systemd-logind[1697]: Session 4 logged out. Waiting for processes to exit. May 13 23:59:15.668020 systemd-logind[1697]: Removed session 4. May 13 23:59:15.771050 systemd[1]: Started sshd@2-10.200.8.49:22-10.200.16.10:44678.service - OpenSSH per-connection server daemon (10.200.16.10:44678). May 13 23:59:16.403336 sshd[2131]: Accepted publickey for core from 10.200.16.10 port 44678 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:59:16.405001 sshd-session[2131]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:16.410870 systemd-logind[1697]: New session 5 of user core. May 13 23:59:16.419567 systemd[1]: Started session-5.scope - Session 5 of User core. May 13 23:59:16.842374 sshd[2133]: Connection closed by 10.200.16.10 port 44678 May 13 23:59:16.843271 sshd-session[2131]: pam_unix(sshd:session): session closed for user core May 13 23:59:16.846693 systemd[1]: sshd@2-10.200.8.49:22-10.200.16.10:44678.service: Deactivated successfully. 
May 13 23:59:16.848905 systemd[1]: session-5.scope: Deactivated successfully. May 13 23:59:16.850053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. May 13 23:59:16.851501 systemd-logind[1697]: Session 5 logged out. Waiting for processes to exit. May 13 23:59:16.852903 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:16.854046 systemd-logind[1697]: Removed session 5. May 13 23:59:16.963747 systemd[1]: Started sshd@3-10.200.8.49:22-10.200.16.10:44690.service - OpenSSH per-connection server daemon (10.200.16.10:44690). May 13 23:59:16.972567 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:16.981871 (kubelet)[2147]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:59:17.546741 kubelet[2147]: E0513 23:59:17.546684 2147 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:59:17.549752 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:59:17.549956 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:59:17.550639 systemd[1]: kubelet.service: Consumed 147ms CPU time, 95.4M memory peak. May 13 23:59:17.593976 sshd[2146]: Accepted publickey for core from 10.200.16.10 port 44690 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:59:17.595535 sshd-session[2146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:17.599952 systemd-logind[1697]: New session 6 of user core. May 13 23:59:17.607554 systemd[1]: Started session-6.scope - Session 6 of User core. May 13 23:59:18.036863 sshd[2157]: Connection closed by 10.200.16.10 port 44690 May 13 23:59:18.037695 sshd-session[2146]: pam_unix(sshd:session): session closed for user core May 13 23:59:18.041032 systemd[1]: sshd@3-10.200.8.49:22-10.200.16.10:44690.service: Deactivated successfully. May 13 23:59:18.043431 systemd[1]: session-6.scope: Deactivated successfully. May 13 23:59:18.045229 systemd-logind[1697]: Session 6 logged out. Waiting for processes to exit. May 13 23:59:18.046173 systemd-logind[1697]: Removed session 6. May 13 23:59:18.148778 systemd[1]: Started sshd@4-10.200.8.49:22-10.200.16.10:44702.service - OpenSSH per-connection server daemon (10.200.16.10:44702). May 13 23:59:18.777026 sshd[2163]: Accepted publickey for core from 10.200.16.10 port 44702 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 13 23:59:18.778709 sshd-session[2163]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 13 23:59:18.783325 systemd-logind[1697]: New session 7 of user core. May 13 23:59:18.794556 systemd[1]: Started session-7.scope - Session 7 of User core. May 13 23:59:20.647658 sudo[2166]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh May 13 23:59:20.648003 sudo[2166]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) May 13 23:59:26.683223 systemd[1]: Starting docker.service - Docker Application Container Engine... 
May 13 23:59:26.693873 (dockerd)[2183]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU May 13 23:59:27.759360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. May 13 23:59:27.761339 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:28.127843 update_engine[1700]: I20250513 23:59:28.127662 1700 update_attempter.cc:509] Updating boot flags... May 13 23:59:28.231353 kernel: hv_balloon: Max. dynamic memory size: 8192 MB May 13 23:59:29.068429 dockerd[2183]: time="2025-05-13T23:59:29.068274800Z" level=info msg="Starting up" May 13 23:59:29.073309 dockerd[2183]: time="2025-05-13T23:59:29.072825555Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" May 13 23:59:29.205432 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2219) May 13 23:59:29.367491 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2222) May 13 23:59:29.528465 kernel: BTRFS warning: duplicate device /dev/sda3 devid 1 generation 39 scanned by (udev-worker) (2222) May 13 23:59:29.850504 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:29.854643 (kubelet)[2373]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:59:29.894386 kubelet[2373]: E0513 23:59:29.894333 2373 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:59:29.896763 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:59:29.896953 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:59:29.897366 systemd[1]: kubelet.service: Consumed 154ms CPU time, 95.4M memory peak. May 13 23:59:30.948179 dockerd[2183]: time="2025-05-13T23:59:30.948131744Z" level=info msg="Loading containers: start." May 13 23:59:31.093531 kernel: Initializing XFRM netlink socket May 13 23:59:31.145511 systemd-networkd[1610]: docker0: Link UP May 13 23:59:31.200860 dockerd[2183]: time="2025-05-13T23:59:31.200567958Z" level=info msg="Loading containers: done." May 13 23:59:31.221953 dockerd[2183]: time="2025-05-13T23:59:31.221900872Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 May 13 23:59:31.222140 dockerd[2183]: time="2025-05-13T23:59:31.222004073Z" level=info msg="Docker daemon" commit=c710b88579fcb5e0d53f96dcae976d79323b9166 containerd-snapshotter=false storage-driver=overlay2 version=27.4.1 May 13 23:59:31.222198 dockerd[2183]: time="2025-05-13T23:59:31.222140475Z" level=info msg="Daemon has completed initialization" May 13 23:59:31.272015 dockerd[2183]: time="2025-05-13T23:59:31.271950208Z" level=info msg="API listen on /run/docker.sock" May 13 23:59:31.272441 systemd[1]: Started docker.service - Docker Application Container Engine. 
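dockerd above finishes with "API listen on /run/docker.sock", so the Engine API can be exercised over that Unix socket with nothing but the standard library. A small check that queries the /version endpoint (assumes the default socket path from the log and permission to access it):

    import http.client
    import socket

    class UnixHTTPConnection(http.client.HTTPConnection):
        """HTTPConnection that connects to a Unix socket instead of TCP."""

        def __init__(self, socket_path: str):
            super().__init__("localhost")
            self.socket_path = socket_path

        def connect(self):
            sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            sock.connect(self.socket_path)
            self.sock = sock

    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")             # Engine API version endpoint
    resp = conn.getresponse()
    print(resp.status, resp.read().decode())    # expect 200 and a JSON document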
May 13 23:59:33.353470 containerd[1723]: time="2025-05-13T23:59:33.353429933Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\"" May 13 23:59:34.031814 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount149692746.mount: Deactivated successfully. May 13 23:59:35.689499 containerd[1723]: time="2025-05-13T23:59:35.689438304Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:35.693105 containerd[1723]: time="2025-05-13T23:59:35.692992556Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.30.12: active requests=0, bytes read=32674881" May 13 23:59:35.695780 containerd[1723]: time="2025-05-13T23:59:35.695697996Z" level=info msg="ImageCreate event name:\"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:35.701706 containerd[1723]: time="2025-05-13T23:59:35.701674284Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:35.703299 containerd[1723]: time="2025-05-13T23:59:35.702664598Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.30.12\" with image id \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\", repo tag \"registry.k8s.io/kube-apiserver:v1.30.12\", repo digest \"registry.k8s.io/kube-apiserver@sha256:4878682f7a044274d42399a6316ef452c5411aafd4ad99cc57de7235ca490e4e\", size \"32671673\" in 2.349195064s" May 13 23:59:35.703299 containerd[1723]: time="2025-05-13T23:59:35.702705399Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.30.12\" returns image reference \"sha256:e113c59aa22f0650435e2a3ed64aadb01e87f3d2835aa3825fe078cd39699bfb\"" May 13 23:59:35.719658 containerd[1723]: time="2025-05-13T23:59:35.719624848Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\"" May 13 23:59:37.465187 containerd[1723]: time="2025-05-13T23:59:37.465129630Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:37.470282 containerd[1723]: time="2025-05-13T23:59:37.470204404Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.30.12: active requests=0, bytes read=29617542" May 13 23:59:37.473492 containerd[1723]: time="2025-05-13T23:59:37.473431052Z" level=info msg="ImageCreate event name:\"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:37.483124 containerd[1723]: time="2025-05-13T23:59:37.483077594Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:37.484105 containerd[1723]: time="2025-05-13T23:59:37.483946407Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.30.12\" with image id \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\", repo tag \"registry.k8s.io/kube-controller-manager:v1.30.12\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:3a36711d0409d565b370a18d0c19339e93d4f1b1f2b3fd382eb31c714c463b74\", size \"31105907\" in 1.764273459s" 
May 13 23:59:37.484105 containerd[1723]: time="2025-05-13T23:59:37.483991407Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.30.12\" returns image reference \"sha256:70742b7b7d90a618a1fa06d89248dbe2c291c19d7f75f4ad60a69d0454dbbac8\"" May 13 23:59:37.502399 containerd[1723]: time="2025-05-13T23:59:37.502362178Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\"" May 13 23:59:38.769403 containerd[1723]: time="2025-05-13T23:59:38.769349603Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:38.771362 containerd[1723]: time="2025-05-13T23:59:38.771290947Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.30.12: active requests=0, bytes read=17903690" May 13 23:59:38.775490 containerd[1723]: time="2025-05-13T23:59:38.775433241Z" level=info msg="ImageCreate event name:\"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:38.780190 containerd[1723]: time="2025-05-13T23:59:38.780124647Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:38.781315 containerd[1723]: time="2025-05-13T23:59:38.780990866Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.30.12\" with image id \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\", repo tag \"registry.k8s.io/kube-scheduler:v1.30.12\", repo digest \"registry.k8s.io/kube-scheduler@sha256:521c843d01025be7d4e246ddee8cde74556eb9813c606d6db9f0f03236f6d029\", size \"19392073\" in 1.278588688s" May 13 23:59:38.781315 containerd[1723]: time="2025-05-13T23:59:38.781031267Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.30.12\" returns image reference \"sha256:c0b91cfea9f9a1c09fc5d056f3a015e52604fd0d63671ff5bf31e642402ef05d\"" May 13 23:59:38.798542 containerd[1723]: time="2025-05-13T23:59:38.798499461Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\"" May 13 23:59:40.009583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. May 13 23:59:40.014628 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:40.065903 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount686158096.mount: Deactivated successfully. May 13 23:59:40.145338 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:40.153726 (kubelet)[2659]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS May 13 23:59:40.193745 kubelet[2659]: E0513 23:59:40.193655 2659 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" May 13 23:59:40.196190 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE May 13 23:59:40.196378 systemd[1]: kubelet.service: Failed with result 'exit-code'. May 13 23:59:40.196790 systemd[1]: kubelet.service: Consumed 152ms CPU time, 94M memory peak. 
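The kubelet keeps exiting at startup because /var/lib/kubelet/config.yaml does not exist yet, and systemd reschedules it each time (the restart counter climbs through 3, 4 and 5 in the entries around here). When triaging a journal like this it can help to tally those attempts mechanically. The sketch below is illustrative only: the regular expressions assume exactly the line format shown in this log, the sample lines are abbreviated copies of entries above, and none of it is part of any Flatcar or Kubernetes tooling.

```python
import re

# Assumed journal line formats, taken from the entries above.
RESTART_RE = re.compile(r"kubelet\.service: Scheduled restart job, restart counter is at (\d+)")
FAIL_RE = re.compile(r'run\.go:\d+\] "command failed" err="([^"]+)')  # grabs the error text up to the next quote

def summarize_kubelet_restarts(journal_lines):
    """Collect restart counters and distinct failure reasons from journal text."""
    counters, reasons = [], set()
    for line in journal_lines:
        if (m := RESTART_RE.search(line)):
            counters.append(int(m.group(1)))
        if (m := FAIL_RE.search(line)):
            reasons.add(m.group(1))
    return counters, reasons

if __name__ == "__main__":
    sample = [
        "May 13 23:59:16.850053 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3.",
        'May 13 23:59:17.546741 kubelet[2147]: E0513 23:59:17.546684 2147 run.go:74] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: open /var/lib/kubelet/config.yaml: no such file or directory"',
    ]
    counters, reasons = summarize_kubelet_restarts(sample)
    print("restart counters seen:", counters)  # [3]
    print("failure reasons:", reasons)
```

Run against the full journal, this makes it obvious at a glance that every failure has the same root cause (the missing config file) rather than several distinct ones.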
May 13 23:59:41.120574 containerd[1723]: time="2025-05-13T23:59:41.120515904Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.30.12\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:41.122850 containerd[1723]: time="2025-05-13T23:59:41.122779856Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.30.12: active requests=0, bytes read=29185825" May 13 23:59:41.126546 containerd[1723]: time="2025-05-13T23:59:41.126482039Z" level=info msg="ImageCreate event name:\"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:41.130324 containerd[1723]: time="2025-05-13T23:59:41.130270825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:41.130957 containerd[1723]: time="2025-05-13T23:59:41.130759536Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.30.12\" with image id \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\", repo tag \"registry.k8s.io/kube-proxy:v1.30.12\", repo digest \"registry.k8s.io/kube-proxy@sha256:ea8c7d5392acf6b0c11ebba78301e1a6c2dc6abcd7544102ed578e49d1c82f15\", size \"29184836\" in 2.332212074s" May 13 23:59:41.130957 containerd[1723]: time="2025-05-13T23:59:41.130799037Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.30.12\" returns image reference \"sha256:c9356fea5d151501039907c3ba870272461396117eabc74063632616f4e31b2b\"" May 13 23:59:41.147864 containerd[1723]: time="2025-05-13T23:59:41.147818621Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\"" May 13 23:59:41.790998 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3066735201.mount: Deactivated successfully. 
May 13 23:59:42.881687 containerd[1723]: time="2025-05-13T23:59:42.881637079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:42.885488 containerd[1723]: time="2025-05-13T23:59:42.885422765Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.1: active requests=0, bytes read=18185769" May 13 23:59:42.889792 containerd[1723]: time="2025-05-13T23:59:42.889689661Z" level=info msg="ImageCreate event name:\"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:42.895904 containerd[1723]: time="2025-05-13T23:59:42.895844800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:42.896862 containerd[1723]: time="2025-05-13T23:59:42.896715120Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.1\" with image id \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:1eeb4c7316bacb1d4c8ead65571cd92dd21e27359f0d4917f1a5822a73b75db1\", size \"18182961\" in 1.748856898s" May 13 23:59:42.896862 containerd[1723]: time="2025-05-13T23:59:42.896755721Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.1\" returns image reference \"sha256:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4\"" May 13 23:59:42.913563 containerd[1723]: time="2025-05-13T23:59:42.913520299Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\"" May 13 23:59:43.451553 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1201541237.mount: Deactivated successfully. 
May 13 23:59:43.482327 containerd[1723]: time="2025-05-13T23:59:43.482272145Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:43.486011 containerd[1723]: time="2025-05-13T23:59:43.485932327Z" level=info msg="stop pulling image registry.k8s.io/pause:3.9: active requests=0, bytes read=322298" May 13 23:59:43.492480 containerd[1723]: time="2025-05-13T23:59:43.492425074Z" level=info msg="ImageCreate event name:\"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:43.497371 containerd[1723]: time="2025-05-13T23:59:43.497320585Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:43.498169 containerd[1723]: time="2025-05-13T23:59:43.498010100Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.9\" with image id \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\", repo tag \"registry.k8s.io/pause:3.9\", repo digest \"registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097\", size \"321520\" in 584.4523ms" May 13 23:59:43.498169 containerd[1723]: time="2025-05-13T23:59:43.498047501Z" level=info msg="PullImage \"registry.k8s.io/pause:3.9\" returns image reference \"sha256:e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c\"" May 13 23:59:43.515942 containerd[1723]: time="2025-05-13T23:59:43.515908204Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\"" May 13 23:59:44.508121 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2379381655.mount: Deactivated successfully. May 13 23:59:47.817924 containerd[1723]: time="2025-05-13T23:59:47.817868969Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.12-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:47.820132 containerd[1723]: time="2025-05-13T23:59:47.820067606Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.12-0: active requests=0, bytes read=57238579" May 13 23:59:47.824233 containerd[1723]: time="2025-05-13T23:59:47.824180175Z" level=info msg="ImageCreate event name:\"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:47.829471 containerd[1723]: time="2025-05-13T23:59:47.829414863Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 13 23:59:47.830805 containerd[1723]: time="2025-05-13T23:59:47.830329879Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.12-0\" with image id \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\", repo tag \"registry.k8s.io/etcd:3.5.12-0\", repo digest \"registry.k8s.io/etcd@sha256:44a8e24dcbba3470ee1fee21d5e88d128c936e9b55d4bc51fbef8086f8ed123b\", size \"57236178\" in 4.314383474s" May 13 23:59:47.830805 containerd[1723]: time="2025-05-13T23:59:47.830371579Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.12-0\" returns image reference \"sha256:3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899\"" May 13 23:59:50.259554 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 6. 
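Each containerd "Pulled image" entry above reports the image size in bytes alongside the wall-clock pull time, so effective registry throughput can be read straight off the log. The following is plain arithmetic on figures copied from those entries (no containerd API involved); the pause image is omitted because its sub-second pull is dominated by request latency rather than transfer time.

```python
# Image sizes (bytes) and pull durations (seconds), copied from the containerd entries above.
pulls = {
    "kube-apiserver:v1.30.12":          (32671673, 2.349195064),
    "kube-controller-manager:v1.30.12": (31105907, 1.764273459),
    "kube-scheduler:v1.30.12":          (19392073, 1.278588688),
    "kube-proxy:v1.30.12":              (29184836, 2.332212074),
    "coredns:v1.11.1":                  (18182961, 1.748856898),
    "etcd:3.5.12-0":                    (57236178, 4.314383474),
}

for image, (size_bytes, seconds) in pulls.items():
    mib_per_s = size_bytes / seconds / (1024 * 1024)
    print(f"{image:<36} {mib_per_s:5.1f} MiB/s")  # works out to roughly 10-17 MiB/s per pull
```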
May 13 23:59:50.264635 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:51.671087 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 23:59:51.671224 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 23:59:51.671644 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:51.682380 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 13 23:59:51.698565 systemd[1]: Reload requested from client PID 2872 ('systemctl') (unit session-7.scope)... May 13 23:59:51.698583 systemd[1]: Reloading... May 13 23:59:51.804432 zram_generator::config[2920]: No configuration found. May 13 23:59:51.926477 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 13 23:59:52.044633 systemd[1]: Reloading finished in 345 ms. May 13 23:59:56.976008 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM May 13 23:59:56.976505 systemd[1]: kubelet.service: Failed with result 'signal'. May 13 23:59:56.976919 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 13 23:59:56.980777 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:00.011727 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. May 14 00:00:03.366786 systemd[1]: logrotate.service: Deactivated successfully. May 14 00:00:03.758207 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:03.770749 (kubelet)[2989]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:00:03.808210 kubelet[2989]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:00:03.808210 kubelet[2989]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:00:03.808210 kubelet[2989]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
May 14 00:00:03.808702 kubelet[2989]: I0514 00:00:03.808257 2989 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:00:04.445158 kubelet[2989]: I0514 00:00:04.445115 2989 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:00:04.445158 kubelet[2989]: I0514 00:00:04.445147 2989 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:00:04.445474 kubelet[2989]: I0514 00:00:04.445451 2989 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:00:04.460170 kubelet[2989]: I0514 00:00:04.460136 2989 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:00:04.460561 kubelet[2989]: E0514 00:00:04.460468 2989 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:04.470115 kubelet[2989]: I0514 00:00:04.470086 2989 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" May 14 00:00:04.472137 kubelet[2989]: I0514 00:00:04.472079 2989 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:00:04.472331 kubelet[2989]: I0514 00:00:04.472130 2989 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-b62cb48025","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:00:04.472492 kubelet[2989]: I0514 00:00:04.472345 2989 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:00:04.472492 kubelet[2989]: I0514 00:00:04.472360 2989 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:00:04.472570 kubelet[2989]: I0514 00:00:04.472524 2989 state_mem.go:36] "Initialized new in-memory 
state store" May 14 00:00:04.473377 kubelet[2989]: I0514 00:00:04.473356 2989 kubelet.go:400] "Attempting to sync node with API server" May 14 00:00:04.473470 kubelet[2989]: I0514 00:00:04.473379 2989 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:00:04.473470 kubelet[2989]: I0514 00:00:04.473419 2989 kubelet.go:312] "Adding apiserver pod source" May 14 00:00:04.473470 kubelet[2989]: I0514 00:00:04.473441 2989 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:00:04.477810 kubelet[2989]: W0514 00:00:04.477756 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:04.478057 kubelet[2989]: E0514 00:00:04.477996 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:04.478144 kubelet[2989]: I0514 00:00:04.478093 2989 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 00:00:04.481740 kubelet[2989]: I0514 00:00:04.480521 2989 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:00:04.481740 kubelet[2989]: W0514 00:00:04.480591 2989 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. May 14 00:00:04.481740 kubelet[2989]: I0514 00:00:04.481233 2989 server.go:1264] "Started kubelet" May 14 00:00:04.486822 kubelet[2989]: I0514 00:00:04.486792 2989 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:00:04.492293 kubelet[2989]: W0514 00:00:04.492237 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-b62cb48025&limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:04.492861 kubelet[2989]: E0514 00:00:04.492475 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-b62cb48025&limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:04.492861 kubelet[2989]: E0514 00:00:04.492560 2989 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.200.8.49:6443/api/v1/namespaces/default/events\": dial tcp 10.200.8.49:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4284.0.0-n-b62cb48025.183f3baf497bcc60 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4284.0.0-n-b62cb48025,UID:ci-4284.0.0-n-b62cb48025,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4284.0.0-n-b62cb48025,},FirstTimestamp:2025-05-14 00:00:04.481207392 +0000 UTC m=+0.707181992,LastTimestamp:2025-05-14 00:00:04.481207392 +0000 UTC m=+0.707181992,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4284.0.0-n-b62cb48025,}" May 14 
00:00:04.496746 kubelet[2989]: I0514 00:00:04.496019 2989 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:00:04.496746 kubelet[2989]: I0514 00:00:04.496395 2989 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:00:04.496877 kubelet[2989]: I0514 00:00:04.496783 2989 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:00:04.497292 kubelet[2989]: I0514 00:00:04.497252 2989 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 00:00:04.498131 kubelet[2989]: I0514 00:00:04.498107 2989 server.go:455] "Adding debug handlers to kubelet server" May 14 00:00:04.501088 kubelet[2989]: I0514 00:00:04.501072 2989 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:00:04.501287 kubelet[2989]: I0514 00:00:04.501264 2989 reconciler.go:26] "Reconciler: start to sync state" May 14 00:00:04.502326 kubelet[2989]: E0514 00:00:04.502279 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-b62cb48025?timeout=10s\": dial tcp 10.200.8.49:6443: connect: connection refused" interval="200ms" May 14 00:00:04.502596 kubelet[2989]: I0514 00:00:04.502569 2989 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:00:04.503745 kubelet[2989]: E0514 00:00:04.503714 2989 kubelet.go:1467] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" May 14 00:00:04.504002 kubelet[2989]: I0514 00:00:04.503975 2989 factory.go:221] Registration of the containerd container factory successfully May 14 00:00:04.504002 kubelet[2989]: I0514 00:00:04.503997 2989 factory.go:221] Registration of the systemd container factory successfully May 14 00:00:04.513764 kubelet[2989]: I0514 00:00:04.513726 2989 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:00:04.515246 kubelet[2989]: I0514 00:00:04.515224 2989 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" May 14 00:00:04.515727 kubelet[2989]: I0514 00:00:04.515387 2989 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:00:04.515727 kubelet[2989]: I0514 00:00:04.515436 2989 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:00:04.515727 kubelet[2989]: E0514 00:00:04.515486 2989 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:00:04.521763 kubelet[2989]: W0514 00:00:04.521713 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:04.521852 kubelet[2989]: E0514 00:00:04.521776 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:04.521901 kubelet[2989]: W0514 00:00:04.521865 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:04.521948 kubelet[2989]: E0514 00:00:04.521914 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:04.533071 kubelet[2989]: I0514 00:00:04.533051 2989 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:00:04.533189 kubelet[2989]: I0514 00:00:04.533077 2989 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:00:04.533189 kubelet[2989]: I0514 00:00:04.533099 2989 state_mem.go:36] "Initialized new in-memory state store" May 14 00:00:04.600159 kubelet[2989]: I0514 00:00:04.600126 2989 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:04.600510 kubelet[2989]: E0514 00:00:04.600479 2989 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.49:6443/api/v1/nodes\": dial tcp 10.200.8.49:6443: connect: connection refused" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:04.615906 kubelet[2989]: E0514 00:00:04.615862 2989 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 00:00:04.703854 kubelet[2989]: E0514 00:00:04.703715 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-b62cb48025?timeout=10s\": dial tcp 10.200.8.49:6443: connect: connection refused" interval="400ms" May 14 00:00:04.802763 kubelet[2989]: I0514 00:00:04.802734 2989 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:04.803149 kubelet[2989]: E0514 00:00:04.803113 2989 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.49:6443/api/v1/nodes\": dial tcp 10.200.8.49:6443: connect: connection refused" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:04.816335 
kubelet[2989]: E0514 00:00:04.816302 2989 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 00:00:05.104827 kubelet[2989]: E0514 00:00:05.104766 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-b62cb48025?timeout=10s\": dial tcp 10.200.8.49:6443: connect: connection refused" interval="800ms" May 14 00:00:05.205036 kubelet[2989]: I0514 00:00:05.204992 2989 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:05.205364 kubelet[2989]: E0514 00:00:05.205337 2989 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.49:6443/api/v1/nodes\": dial tcp 10.200.8.49:6443: connect: connection refused" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:05.216541 kubelet[2989]: E0514 00:00:05.216511 2989 kubelet.go:2361] "Skipping pod synchronization" err="container runtime status check may not have completed yet" May 14 00:00:05.326924 kubelet[2989]: W0514 00:00:05.326798 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:05.326924 kubelet[2989]: E0514 00:00:05.326872 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:05.329894 kubelet[2989]: I0514 00:00:05.329867 2989 policy_none.go:49] "None policy: Start" May 14 00:00:05.330739 kubelet[2989]: I0514 00:00:05.330720 2989 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:00:05.330828 kubelet[2989]: I0514 00:00:05.330764 2989 state_mem.go:35] "Initializing new in-memory state store" May 14 00:00:05.341230 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. May 14 00:00:05.351252 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. May 14 00:00:05.355499 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. 
May 14 00:00:05.366145 kubelet[2989]: I0514 00:00:05.366118 2989 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:00:05.366518 kubelet[2989]: I0514 00:00:05.366340 2989 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:00:05.366518 kubelet[2989]: I0514 00:00:05.366484 2989 plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:00:05.369100 kubelet[2989]: E0514 00:00:05.369069 2989 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4284.0.0-n-b62cb48025\" not found" May 14 00:00:05.771280 kubelet[2989]: W0514 00:00:05.771238 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:05.771280 kubelet[2989]: E0514 00:00:05.771286 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:05.905732 kubelet[2989]: E0514 00:00:05.905672 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-b62cb48025?timeout=10s\": dial tcp 10.200.8.49:6443: connect: connection refused" interval="1.6s" May 14 00:00:05.909072 kubelet[2989]: W0514 00:00:05.909035 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-b62cb48025&limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:05.909184 kubelet[2989]: E0514 00:00:05.909082 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-b62cb48025&limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:06.008055 kubelet[2989]: I0514 00:00:06.007994 2989 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:06.008418 kubelet[2989]: E0514 00:00:06.008371 2989 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.49:6443/api/v1/nodes\": dial tcp 10.200.8.49:6443: connect: connection refused" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:06.017633 kubelet[2989]: I0514 00:00:06.017598 2989 topology_manager.go:215] "Topology Admit Handler" podUID="80463a3f309db1cd6cb9f033e73be959" podNamespace="kube-system" podName="kube-apiserver-ci-4284.0.0-n-b62cb48025" May 14 00:00:06.019033 kubelet[2989]: I0514 00:00:06.019004 2989 topology_manager.go:215] "Topology Admit Handler" podUID="faeefa8293b96b1a1ac42caedc098157" podNamespace="kube-system" podName="kube-controller-manager-ci-4284.0.0-n-b62cb48025" May 14 00:00:06.020589 kubelet[2989]: I0514 00:00:06.020208 2989 topology_manager.go:215] "Topology Admit Handler" podUID="0a188334863690725ee8eaf11240c285" podNamespace="kube-system" podName="kube-scheduler-ci-4284.0.0-n-b62cb48025" May 14 00:00:06.027393 systemd[1]: Created 
slice kubepods-burstable-pod80463a3f309db1cd6cb9f033e73be959.slice - libcontainer container kubepods-burstable-pod80463a3f309db1cd6cb9f033e73be959.slice. May 14 00:00:06.040991 systemd[1]: Created slice kubepods-burstable-podfaeefa8293b96b1a1ac42caedc098157.slice - libcontainer container kubepods-burstable-podfaeefa8293b96b1a1ac42caedc098157.slice. May 14 00:00:06.051310 systemd[1]: Created slice kubepods-burstable-pod0a188334863690725ee8eaf11240c285.slice - libcontainer container kubepods-burstable-pod0a188334863690725ee8eaf11240c285.slice. May 14 00:00:06.111607 kubelet[2989]: I0514 00:00:06.111551 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80463a3f309db1cd6cb9f033e73be959-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-b62cb48025\" (UID: \"80463a3f309db1cd6cb9f033e73be959\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-b62cb48025" May 14 00:00:06.111607 kubelet[2989]: I0514 00:00:06.111605 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/faeefa8293b96b1a1ac42caedc098157-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-b62cb48025\" (UID: \"faeefa8293b96b1a1ac42caedc098157\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b62cb48025" May 14 00:00:06.111862 kubelet[2989]: I0514 00:00:06.111635 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/faeefa8293b96b1a1ac42caedc098157-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-b62cb48025\" (UID: \"faeefa8293b96b1a1ac42caedc098157\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b62cb48025" May 14 00:00:06.111862 kubelet[2989]: I0514 00:00:06.111658 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/faeefa8293b96b1a1ac42caedc098157-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-b62cb48025\" (UID: \"faeefa8293b96b1a1ac42caedc098157\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b62cb48025" May 14 00:00:06.111862 kubelet[2989]: I0514 00:00:06.111689 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/faeefa8293b96b1a1ac42caedc098157-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-b62cb48025\" (UID: \"faeefa8293b96b1a1ac42caedc098157\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b62cb48025" May 14 00:00:06.111862 kubelet[2989]: I0514 00:00:06.111712 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a188334863690725ee8eaf11240c285-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-b62cb48025\" (UID: \"0a188334863690725ee8eaf11240c285\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-b62cb48025" May 14 00:00:06.111862 kubelet[2989]: I0514 00:00:06.111735 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80463a3f309db1cd6cb9f033e73be959-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-b62cb48025\" (UID: \"80463a3f309db1cd6cb9f033e73be959\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-b62cb48025" May 14 
00:00:06.112057 kubelet[2989]: I0514 00:00:06.111762 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80463a3f309db1cd6cb9f033e73be959-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-b62cb48025\" (UID: \"80463a3f309db1cd6cb9f033e73be959\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-b62cb48025" May 14 00:00:06.112057 kubelet[2989]: I0514 00:00:06.111787 2989 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/faeefa8293b96b1a1ac42caedc098157-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-b62cb48025\" (UID: \"faeefa8293b96b1a1ac42caedc098157\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b62cb48025" May 14 00:00:06.114027 kubelet[2989]: W0514 00:00:06.113991 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:06.114148 kubelet[2989]: E0514 00:00:06.114035 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:06.339336 containerd[1723]: time="2025-05-14T00:00:06.339194369Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-b62cb48025,Uid:80463a3f309db1cd6cb9f033e73be959,Namespace:kube-system,Attempt:0,}" May 14 00:00:06.350211 containerd[1723]: time="2025-05-14T00:00:06.349883658Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-b62cb48025,Uid:faeefa8293b96b1a1ac42caedc098157,Namespace:kube-system,Attempt:0,}" May 14 00:00:06.354277 containerd[1723]: time="2025-05-14T00:00:06.354243895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-b62cb48025,Uid:0a188334863690725ee8eaf11240c285,Namespace:kube-system,Attempt:0,}" May 14 00:00:06.464122 kubelet[2989]: E0514 00:00:06.464076 2989 certificate_manager.go:562] kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post "https://10.200.8.49:6443/apis/certificates.k8s.io/v1/certificatesigningrequests": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:06.918504 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1350802609.mount: Deactivated successfully. 
May 14 00:00:07.074306 containerd[1723]: time="2025-05-14T00:00:07.074234292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:07.174317 containerd[1723]: time="2025-05-14T00:00:07.174164825Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321146" May 14 00:00:07.218624 containerd[1723]: time="2025-05-14T00:00:07.218507594Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:07.241295 kubelet[2989]: W0514 00:00:07.241248 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.200.8.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:07.241295 kubelet[2989]: E0514 00:00:07.241299 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.200.8.49:6443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:07.267764 containerd[1723]: time="2025-05-14T00:00:07.267664804Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:07.377456 containerd[1723]: time="2025-05-14T00:00:07.377158416Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 00:00:07.423681 containerd[1723]: time="2025-05-14T00:00:07.423616903Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:07.474008 containerd[1723]: time="2025-05-14T00:00:07.473801821Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" May 14 00:00:07.475478 containerd[1723]: time="2025-05-14T00:00:07.474676628Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.129019605s" May 14 00:00:07.509249 kubelet[2989]: E0514 00:00:07.509185 2989 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.200.8.49:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4284.0.0-n-b62cb48025?timeout=10s\": dial tcp 10.200.8.49:6443: connect: connection refused" interval="3.2s" May 14 00:00:07.516723 containerd[1723]: time="2025-05-14T00:00:07.516650278Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" May 14 00:00:07.517381 containerd[1723]: time="2025-05-14T00:00:07.517346084Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id 
\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.144555835s" May 14 00:00:07.611330 kubelet[2989]: I0514 00:00:07.611296 2989 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:07.611717 kubelet[2989]: E0514 00:00:07.611675 2989 kubelet_node_status.go:96] "Unable to register node with API server" err="Post \"https://10.200.8.49:6443/api/v1/nodes\": dial tcp 10.200.8.49:6443: connect: connection refused" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:07.818751 containerd[1723]: time="2025-05-14T00:00:07.818677966Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 1.461121844s" May 14 00:00:08.083550 containerd[1723]: time="2025-05-14T00:00:08.083127653Z" level=info msg="connecting to shim 672bc4e5caae81c00f2b40578060141260d79efe3bea0c85aec62d5a30d73f70" address="unix:///run/containerd/s/f1ccb20047fc6c117d08cbd77847567041396efe0912ed25ffb18bd7a3143c5e" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:08.107919 systemd[1]: Started cri-containerd-672bc4e5caae81c00f2b40578060141260d79efe3bea0c85aec62d5a30d73f70.scope - libcontainer container 672bc4e5caae81c00f2b40578060141260d79efe3bea0c85aec62d5a30d73f70. May 14 00:00:08.136301 containerd[1723]: time="2025-05-14T00:00:08.135698366Z" level=info msg="connecting to shim e8d8e5161c7e95c34cf79ee60970a20d541f56b9f7d4e3dd559bafd5f3905331" address="unix:///run/containerd/s/7b634561facc820c3d20cae0e4ab54395b0deff006c223c5bdf47996e7970fce" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:08.169617 systemd[1]: Started cri-containerd-e8d8e5161c7e95c34cf79ee60970a20d541f56b9f7d4e3dd559bafd5f3905331.scope - libcontainer container e8d8e5161c7e95c34cf79ee60970a20d541f56b9f7d4e3dd559bafd5f3905331. 
May 14 00:00:08.282217 containerd[1723]: time="2025-05-14T00:00:08.282159529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4284.0.0-n-b62cb48025,Uid:80463a3f309db1cd6cb9f033e73be959,Namespace:kube-system,Attempt:0,} returns sandbox id \"672bc4e5caae81c00f2b40578060141260d79efe3bea0c85aec62d5a30d73f70\"" May 14 00:00:08.286583 containerd[1723]: time="2025-05-14T00:00:08.286541997Z" level=info msg="CreateContainer within sandbox \"672bc4e5caae81c00f2b40578060141260d79efe3bea0c85aec62d5a30d73f70\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" May 14 00:00:08.331169 containerd[1723]: time="2025-05-14T00:00:08.331100885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4284.0.0-n-b62cb48025,Uid:0a188334863690725ee8eaf11240c285,Namespace:kube-system,Attempt:0,} returns sandbox id \"e8d8e5161c7e95c34cf79ee60970a20d541f56b9f7d4e3dd559bafd5f3905331\"" May 14 00:00:08.334314 containerd[1723]: time="2025-05-14T00:00:08.334109532Z" level=info msg="CreateContainer within sandbox \"e8d8e5161c7e95c34cf79ee60970a20d541f56b9f7d4e3dd559bafd5f3905331\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" May 14 00:00:08.421477 kubelet[2989]: W0514 00:00:08.421349 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.200.8.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-b62cb48025&limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:08.421900 kubelet[2989]: E0514 00:00:08.421493 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://10.200.8.49:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4284.0.0-n-b62cb48025&limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:08.590295 containerd[1723]: time="2025-05-14T00:00:08.590145289Z" level=info msg="connecting to shim 8ac7226da7267a43707d02e05bd7bdfcac0163c550d5492c7fc344cb5faf2f90" address="unix:///run/containerd/s/5ede689ee3cc643f0f2450be017b4528a49bd113d4fce8f2cf263e78734d8cde" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:08.642574 systemd[1]: Started cri-containerd-8ac7226da7267a43707d02e05bd7bdfcac0163c550d5492c7fc344cb5faf2f90.scope - libcontainer container 8ac7226da7267a43707d02e05bd7bdfcac0163c550d5492c7fc344cb5faf2f90. 
May 14 00:00:08.766026 containerd[1723]: time="2025-05-14T00:00:08.765963706Z" level=info msg="Container 6e156d33042b83a468f074a92ad8f9fb133307863e50f3dea4a2d31eb648b42c: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:08.815872 containerd[1723]: time="2025-05-14T00:00:08.815810977Z" level=info msg="Container d65df08325186cae3804b05d3954fb65535e9e369579e3d0bbb11f017418ee00: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:08.849338 kubelet[2989]: W0514 00:00:08.849183 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.200.8.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:08.849338 kubelet[2989]: E0514 00:00:08.849262 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://10.200.8.49:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:08.881426 containerd[1723]: time="2025-05-14T00:00:08.879640763Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4284.0.0-n-b62cb48025,Uid:faeefa8293b96b1a1ac42caedc098157,Namespace:kube-system,Attempt:0,} returns sandbox id \"8ac7226da7267a43707d02e05bd7bdfcac0163c550d5492c7fc344cb5faf2f90\"" May 14 00:00:08.885514 containerd[1723]: time="2025-05-14T00:00:08.885461053Z" level=info msg="CreateContainer within sandbox \"8ac7226da7267a43707d02e05bd7bdfcac0163c550d5492c7fc344cb5faf2f90\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" May 14 00:00:09.129189 containerd[1723]: time="2025-05-14T00:00:09.129045818Z" level=info msg="CreateContainer within sandbox \"e8d8e5161c7e95c34cf79ee60970a20d541f56b9f7d4e3dd559bafd5f3905331\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d65df08325186cae3804b05d3954fb65535e9e369579e3d0bbb11f017418ee00\"" May 14 00:00:09.129965 containerd[1723]: time="2025-05-14T00:00:09.129906431Z" level=info msg="StartContainer for \"d65df08325186cae3804b05d3954fb65535e9e369579e3d0bbb11f017418ee00\"" May 14 00:00:09.131217 containerd[1723]: time="2025-05-14T00:00:09.131176551Z" level=info msg="connecting to shim d65df08325186cae3804b05d3954fb65535e9e369579e3d0bbb11f017418ee00" address="unix:///run/containerd/s/7b634561facc820c3d20cae0e4ab54395b0deff006c223c5bdf47996e7970fce" protocol=ttrpc version=3 May 14 00:00:09.156577 systemd[1]: Started cri-containerd-d65df08325186cae3804b05d3954fb65535e9e369579e3d0bbb11f017418ee00.scope - libcontainer container d65df08325186cae3804b05d3954fb65535e9e369579e3d0bbb11f017418ee00. 
May 14 00:00:09.253975 kubelet[2989]: W0514 00:00:09.253869 2989 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.200.8.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:09.253975 kubelet[2989]: E0514 00:00:09.253948 2989 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get "https://10.200.8.49:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.200.8.49:6443: connect: connection refused May 14 00:00:09.273901 containerd[1723]: time="2025-05-14T00:00:09.273620752Z" level=info msg="CreateContainer within sandbox \"672bc4e5caae81c00f2b40578060141260d79efe3bea0c85aec62d5a30d73f70\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"6e156d33042b83a468f074a92ad8f9fb133307863e50f3dea4a2d31eb648b42c\"" May 14 00:00:09.274209 containerd[1723]: time="2025-05-14T00:00:09.273728154Z" level=info msg="StartContainer for \"d65df08325186cae3804b05d3954fb65535e9e369579e3d0bbb11f017418ee00\" returns successfully" May 14 00:00:09.275178 containerd[1723]: time="2025-05-14T00:00:09.275126075Z" level=info msg="StartContainer for \"6e156d33042b83a468f074a92ad8f9fb133307863e50f3dea4a2d31eb648b42c\"" May 14 00:00:09.276939 containerd[1723]: time="2025-05-14T00:00:09.276795701Z" level=info msg="connecting to shim 6e156d33042b83a468f074a92ad8f9fb133307863e50f3dea4a2d31eb648b42c" address="unix:///run/containerd/s/f1ccb20047fc6c117d08cbd77847567041396efe0912ed25ffb18bd7a3143c5e" protocol=ttrpc version=3 May 14 00:00:09.311806 systemd[1]: Started cri-containerd-6e156d33042b83a468f074a92ad8f9fb133307863e50f3dea4a2d31eb648b42c.scope - libcontainer container 6e156d33042b83a468f074a92ad8f9fb133307863e50f3dea4a2d31eb648b42c. May 14 00:00:09.370583 containerd[1723]: time="2025-05-14T00:00:09.370538550Z" level=info msg="Container 909f61d1ddf66ad0e826d2c544515182fb67cfdf56d7562a766733a4c7c40bc5: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:09.429507 containerd[1723]: time="2025-05-14T00:00:09.429000053Z" level=info msg="StartContainer for \"6e156d33042b83a468f074a92ad8f9fb133307863e50f3dea4a2d31eb648b42c\" returns successfully" May 14 00:00:09.524802 containerd[1723]: time="2025-05-14T00:00:09.524701932Z" level=info msg="CreateContainer within sandbox \"8ac7226da7267a43707d02e05bd7bdfcac0163c550d5492c7fc344cb5faf2f90\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"909f61d1ddf66ad0e826d2c544515182fb67cfdf56d7562a766733a4c7c40bc5\"" May 14 00:00:09.525722 containerd[1723]: time="2025-05-14T00:00:09.525688848Z" level=info msg="StartContainer for \"909f61d1ddf66ad0e826d2c544515182fb67cfdf56d7562a766733a4c7c40bc5\"" May 14 00:00:09.527125 containerd[1723]: time="2025-05-14T00:00:09.527093369Z" level=info msg="connecting to shim 909f61d1ddf66ad0e826d2c544515182fb67cfdf56d7562a766733a4c7c40bc5" address="unix:///run/containerd/s/5ede689ee3cc643f0f2450be017b4528a49bd113d4fce8f2cf263e78734d8cde" protocol=ttrpc version=3 May 14 00:00:09.567602 systemd[1]: Started cri-containerd-909f61d1ddf66ad0e826d2c544515182fb67cfdf56d7562a766733a4c7c40bc5.scope - libcontainer container 909f61d1ddf66ad0e826d2c544515182fb67cfdf56d7562a766733a4c7c40bc5. 
May 14 00:00:09.706356 containerd[1723]: time="2025-05-14T00:00:09.705877032Z" level=info msg="StartContainer for \"909f61d1ddf66ad0e826d2c544515182fb67cfdf56d7562a766733a4c7c40bc5\" returns successfully" May 14 00:00:10.815285 kubelet[2989]: I0514 00:00:10.814924 2989 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:11.741084 kubelet[2989]: E0514 00:00:11.741016 2989 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4284.0.0-n-b62cb48025\" not found" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:11.808301 kubelet[2989]: I0514 00:00:11.808252 2989 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:11.830716 kubelet[2989]: E0514 00:00:11.830676 2989 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b62cb48025\" not found" May 14 00:00:11.930979 kubelet[2989]: E0514 00:00:11.930841 2989 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b62cb48025\" not found" May 14 00:00:12.031286 kubelet[2989]: E0514 00:00:12.031218 2989 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b62cb48025\" not found" May 14 00:00:12.131520 kubelet[2989]: E0514 00:00:12.131482 2989 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b62cb48025\" not found" May 14 00:00:12.232528 kubelet[2989]: E0514 00:00:12.232479 2989 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b62cb48025\" not found" May 14 00:00:12.333134 kubelet[2989]: E0514 00:00:12.332998 2989 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b62cb48025\" not found" May 14 00:00:12.433913 kubelet[2989]: E0514 00:00:12.433858 2989 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ci-4284.0.0-n-b62cb48025\" not found" May 14 00:00:12.483504 kubelet[2989]: I0514 00:00:12.483453 2989 apiserver.go:52] "Watching apiserver" May 14 00:00:12.501999 kubelet[2989]: I0514 00:00:12.501964 2989 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:00:14.019739 systemd[1]: Reload requested from client PID 3261 ('systemctl') (unit session-7.scope)... May 14 00:00:14.019754 systemd[1]: Reloading... May 14 00:00:14.140561 zram_generator::config[3311]: No configuration found. May 14 00:00:14.269510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. May 14 00:00:14.433340 systemd[1]: Reloading finished in 413 ms. May 14 00:00:14.467926 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:14.484281 systemd[1]: kubelet.service: Deactivated successfully. May 14 00:00:14.484519 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. May 14 00:00:14.484582 systemd[1]: kubelet.service: Consumed 867ms CPU time, 112.3M memory peak. May 14 00:00:14.487833 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... May 14 00:00:17.205293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
May 14 00:00:17.214811 (kubelet)[3375]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS May 14 00:00:17.268527 kubelet[3375]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:00:17.268527 kubelet[3375]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. May 14 00:00:17.268527 kubelet[3375]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. May 14 00:00:17.268997 kubelet[3375]: I0514 00:00:17.268614 3375 server.go:205] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" May 14 00:00:17.274556 kubelet[3375]: I0514 00:00:17.274530 3375 server.go:484] "Kubelet version" kubeletVersion="v1.30.1" May 14 00:00:17.274679 kubelet[3375]: I0514 00:00:17.274671 3375 server.go:486] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" May 14 00:00:17.274896 kubelet[3375]: I0514 00:00:17.274883 3375 server.go:927] "Client rotation is on, will bootstrap in background" May 14 00:00:17.276201 kubelet[3375]: I0514 00:00:17.276181 3375 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". May 14 00:00:17.277368 kubelet[3375]: I0514 00:00:17.277341 3375 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" May 14 00:00:17.285810 kubelet[3375]: I0514 00:00:17.285784 3375 server.go:742] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" May 14 00:00:17.286088 kubelet[3375]: I0514 00:00:17.286045 3375 container_manager_linux.go:265] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] May 14 00:00:17.286259 kubelet[3375]: I0514 00:00:17.286084 3375 container_manager_linux.go:270] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4284.0.0-n-b62cb48025","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null} May 14 00:00:17.286397 kubelet[3375]: I0514 00:00:17.286278 3375 topology_manager.go:138] "Creating topology manager with none policy" May 14 00:00:17.286397 kubelet[3375]: I0514 00:00:17.286292 3375 container_manager_linux.go:301] "Creating device plugin manager" May 14 00:00:17.286397 kubelet[3375]: I0514 00:00:17.286346 3375 state_mem.go:36] "Initialized new in-memory state store" May 14 00:00:17.286607 kubelet[3375]: I0514 00:00:17.286476 3375 kubelet.go:400] "Attempting to sync node with API server" May 14 00:00:17.286607 kubelet[3375]: I0514 00:00:17.286493 3375 kubelet.go:301] "Adding static pod path" path="/etc/kubernetes/manifests" May 14 00:00:17.286607 kubelet[3375]: I0514 00:00:17.286519 3375 kubelet.go:312] "Adding apiserver pod source" May 14 00:00:17.286607 kubelet[3375]: I0514 00:00:17.286540 3375 apiserver.go:42] "Waiting for node sync before watching apiserver pods" May 14 00:00:17.291433 kubelet[3375]: I0514 00:00:17.291217 3375 kuberuntime_manager.go:261] "Container runtime initialized" containerRuntime="containerd" version="v2.0.1" apiVersion="v1" May 14 00:00:17.291433 kubelet[3375]: I0514 00:00:17.291395 3375 kubelet.go:815] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" May 14 00:00:17.292100 kubelet[3375]: I0514 00:00:17.292084 3375 server.go:1264] "Started kubelet" May 14 00:00:17.297707 kubelet[3375]: I0514 00:00:17.297687 3375 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" May 14 00:00:17.308453 kubelet[3375]: I0514 00:00:17.306478 3375 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 May 14 00:00:17.308453 kubelet[3375]: I0514 00:00:17.307576 3375 server.go:455] "Adding debug 
handlers to kubelet server" May 14 00:00:17.314443 kubelet[3375]: I0514 00:00:17.314368 3375 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 May 14 00:00:17.314686 kubelet[3375]: I0514 00:00:17.314665 3375 server.go:227] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" May 14 00:00:17.318531 kubelet[3375]: I0514 00:00:17.318509 3375 volume_manager.go:291] "Starting Kubelet Volume Manager" May 14 00:00:17.320300 kubelet[3375]: I0514 00:00:17.320259 3375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" May 14 00:00:17.322738 kubelet[3375]: I0514 00:00:17.322675 3375 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" May 14 00:00:17.322846 kubelet[3375]: I0514 00:00:17.322835 3375 status_manager.go:217] "Starting to sync pod status with apiserver" May 14 00:00:17.322933 kubelet[3375]: I0514 00:00:17.322924 3375 kubelet.go:2337] "Starting kubelet main sync loop" May 14 00:00:17.323044 kubelet[3375]: E0514 00:00:17.323025 3375 kubelet.go:2361] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" May 14 00:00:17.329325 kubelet[3375]: I0514 00:00:17.329151 3375 desired_state_of_world_populator.go:149] "Desired state populator starts to run" May 14 00:00:17.329496 kubelet[3375]: I0514 00:00:17.329335 3375 reconciler.go:26] "Reconciler: start to sync state" May 14 00:00:17.338673 kubelet[3375]: I0514 00:00:17.338571 3375 factory.go:221] Registration of the containerd container factory successfully May 14 00:00:17.338673 kubelet[3375]: I0514 00:00:17.338592 3375 factory.go:221] Registration of the systemd container factory successfully May 14 00:00:17.339165 kubelet[3375]: I0514 00:00:17.338687 3375 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory May 14 00:00:17.397126 kubelet[3375]: I0514 00:00:17.397090 3375 cpu_manager.go:214] "Starting CPU manager" policy="none" May 14 00:00:17.397126 kubelet[3375]: I0514 00:00:17.397118 3375 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" May 14 00:00:17.397519 kubelet[3375]: I0514 00:00:17.397142 3375 state_mem.go:36] "Initialized new in-memory state store" May 14 00:00:17.397519 kubelet[3375]: I0514 00:00:17.397324 3375 state_mem.go:88] "Updated default CPUSet" cpuSet="" May 14 00:00:17.397519 kubelet[3375]: I0514 00:00:17.397339 3375 state_mem.go:96] "Updated CPUSet assignments" assignments={} May 14 00:00:17.397519 kubelet[3375]: I0514 00:00:17.397360 3375 policy_none.go:49] "None policy: Start" May 14 00:00:17.398157 kubelet[3375]: I0514 00:00:17.398134 3375 memory_manager.go:170] "Starting memorymanager" policy="None" May 14 00:00:17.398157 kubelet[3375]: I0514 00:00:17.398158 3375 state_mem.go:35] "Initializing new in-memory state store" May 14 00:00:17.398324 kubelet[3375]: I0514 00:00:17.398311 3375 state_mem.go:75] "Updated machine memory state" May 14 00:00:17.403550 kubelet[3375]: I0514 00:00:17.403522 3375 manager.go:479] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" May 14 00:00:17.403960 kubelet[3375]: I0514 00:00:17.403840 3375 container_log_manager.go:186] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" May 14 00:00:17.404676 kubelet[3375]: I0514 00:00:17.404626 3375 
plugin_manager.go:118] "Starting Kubelet Plugin Manager" May 14 00:00:17.423464 kubelet[3375]: I0514 00:00:17.423390 3375 kubelet_node_status.go:73] "Attempting to register node" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:17.424913 kubelet[3375]: I0514 00:00:17.423859 3375 topology_manager.go:215] "Topology Admit Handler" podUID="80463a3f309db1cd6cb9f033e73be959" podNamespace="kube-system" podName="kube-apiserver-ci-4284.0.0-n-b62cb48025" May 14 00:00:17.424913 kubelet[3375]: I0514 00:00:17.424000 3375 topology_manager.go:215] "Topology Admit Handler" podUID="faeefa8293b96b1a1ac42caedc098157" podNamespace="kube-system" podName="kube-controller-manager-ci-4284.0.0-n-b62cb48025" May 14 00:00:17.424913 kubelet[3375]: I0514 00:00:17.424169 3375 topology_manager.go:215] "Topology Admit Handler" podUID="0a188334863690725ee8eaf11240c285" podNamespace="kube-system" podName="kube-scheduler-ci-4284.0.0-n-b62cb48025" May 14 00:00:17.446695 kubelet[3375]: W0514 00:00:17.446093 3375 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 00:00:17.446695 kubelet[3375]: W0514 00:00:17.446361 3375 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 00:00:17.446695 kubelet[3375]: W0514 00:00:17.446394 3375 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 00:00:17.448173 kubelet[3375]: I0514 00:00:17.448148 3375 kubelet_node_status.go:112] "Node was previously registered" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:17.448336 kubelet[3375]: I0514 00:00:17.448328 3375 kubelet_node_status.go:76] "Successfully registered node" node="ci-4284.0.0-n-b62cb48025" May 14 00:00:17.531450 kubelet[3375]: I0514 00:00:17.531097 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/faeefa8293b96b1a1ac42caedc098157-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4284.0.0-n-b62cb48025\" (UID: \"faeefa8293b96b1a1ac42caedc098157\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b62cb48025" May 14 00:00:17.531450 kubelet[3375]: I0514 00:00:17.531156 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/0a188334863690725ee8eaf11240c285-kubeconfig\") pod \"kube-scheduler-ci-4284.0.0-n-b62cb48025\" (UID: \"0a188334863690725ee8eaf11240c285\") " pod="kube-system/kube-scheduler-ci-4284.0.0-n-b62cb48025" May 14 00:00:17.531450 kubelet[3375]: I0514 00:00:17.531182 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/80463a3f309db1cd6cb9f033e73be959-ca-certs\") pod \"kube-apiserver-ci-4284.0.0-n-b62cb48025\" (UID: \"80463a3f309db1cd6cb9f033e73be959\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-b62cb48025" May 14 00:00:17.531450 kubelet[3375]: I0514 00:00:17.531205 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/80463a3f309db1cd6cb9f033e73be959-k8s-certs\") pod \"kube-apiserver-ci-4284.0.0-n-b62cb48025\" (UID: \"80463a3f309db1cd6cb9f033e73be959\") " 
pod="kube-system/kube-apiserver-ci-4284.0.0-n-b62cb48025" May 14 00:00:17.531450 kubelet[3375]: I0514 00:00:17.531235 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/80463a3f309db1cd6cb9f033e73be959-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4284.0.0-n-b62cb48025\" (UID: \"80463a3f309db1cd6cb9f033e73be959\") " pod="kube-system/kube-apiserver-ci-4284.0.0-n-b62cb48025" May 14 00:00:17.531803 kubelet[3375]: I0514 00:00:17.531260 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/faeefa8293b96b1a1ac42caedc098157-ca-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-b62cb48025\" (UID: \"faeefa8293b96b1a1ac42caedc098157\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b62cb48025" May 14 00:00:17.531803 kubelet[3375]: I0514 00:00:17.531284 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/faeefa8293b96b1a1ac42caedc098157-flexvolume-dir\") pod \"kube-controller-manager-ci-4284.0.0-n-b62cb48025\" (UID: \"faeefa8293b96b1a1ac42caedc098157\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b62cb48025" May 14 00:00:17.531803 kubelet[3375]: I0514 00:00:17.531307 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/faeefa8293b96b1a1ac42caedc098157-k8s-certs\") pod \"kube-controller-manager-ci-4284.0.0-n-b62cb48025\" (UID: \"faeefa8293b96b1a1ac42caedc098157\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b62cb48025" May 14 00:00:17.531803 kubelet[3375]: I0514 00:00:17.531332 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/faeefa8293b96b1a1ac42caedc098157-kubeconfig\") pod \"kube-controller-manager-ci-4284.0.0-n-b62cb48025\" (UID: \"faeefa8293b96b1a1ac42caedc098157\") " pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b62cb48025" May 14 00:00:18.287280 kubelet[3375]: I0514 00:00:18.287020 3375 apiserver.go:52] "Watching apiserver" May 14 00:00:18.330012 kubelet[3375]: I0514 00:00:18.329970 3375 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world" May 14 00:00:18.389999 kubelet[3375]: W0514 00:00:18.389955 3375 warnings.go:70] metadata.name: this is used in the Pod's hostname, which can result in surprising behavior; a DNS label is recommended: [must not contain dots] May 14 00:00:18.390147 kubelet[3375]: E0514 00:00:18.390041 3375 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-ci-4284.0.0-n-b62cb48025\" already exists" pod="kube-system/kube-apiserver-ci-4284.0.0-n-b62cb48025" May 14 00:00:18.440443 kubelet[3375]: I0514 00:00:18.439245 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4284.0.0-n-b62cb48025" podStartSLOduration=1.4392218940000001 podStartE2EDuration="1.439221894s" podCreationTimestamp="2025-05-14 00:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:18.415921135 +0000 UTC m=+1.195244525" watchObservedRunningTime="2025-05-14 00:00:18.439221894 +0000 UTC m=+1.218545184" May 14 
00:00:18.450701 sudo[2166]: pam_unix(sudo:session): session closed for user root May 14 00:00:18.453513 kubelet[3375]: I0514 00:00:18.453460 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4284.0.0-n-b62cb48025" podStartSLOduration=1.453399713 podStartE2EDuration="1.453399713s" podCreationTimestamp="2025-05-14 00:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:18.4396177 +0000 UTC m=+1.218941090" watchObservedRunningTime="2025-05-14 00:00:18.453399713 +0000 UTC m=+1.232723003" May 14 00:00:18.463791 kubelet[3375]: I0514 00:00:18.463738 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4284.0.0-n-b62cb48025" podStartSLOduration=1.4637227720000001 podStartE2EDuration="1.463722772s" podCreationTimestamp="2025-05-14 00:00:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:18.454083423 +0000 UTC m=+1.233406713" watchObservedRunningTime="2025-05-14 00:00:18.463722772 +0000 UTC m=+1.243046162" May 14 00:00:18.551371 sshd[2165]: Connection closed by 10.200.16.10 port 44702 May 14 00:00:18.552810 sshd-session[2163]: pam_unix(sshd:session): session closed for user core May 14 00:00:18.555997 systemd[1]: sshd@4-10.200.8.49:22-10.200.16.10:44702.service: Deactivated successfully. May 14 00:00:18.558348 systemd[1]: session-7.scope: Deactivated successfully. May 14 00:00:18.558658 systemd[1]: session-7.scope: Consumed 3.875s CPU time, 242.3M memory peak. May 14 00:00:18.560838 systemd-logind[1697]: Session 7 logged out. Waiting for processes to exit. May 14 00:00:18.562055 systemd-logind[1697]: Removed session 7. May 14 00:00:27.969063 kubelet[3375]: I0514 00:00:27.969022 3375 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" May 14 00:00:27.969716 containerd[1723]: time="2025-05-14T00:00:27.969517818Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." May 14 00:00:27.970059 kubelet[3375]: I0514 00:00:27.969787 3375 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" May 14 00:00:28.540941 kubelet[3375]: I0514 00:00:28.540383 3375 topology_manager.go:215] "Topology Admit Handler" podUID="91f78eea-c8fd-41e5-afcb-ce14bf2cfa56" podNamespace="kube-system" podName="kube-proxy-v6f4x" May 14 00:00:28.550570 systemd[1]: Created slice kubepods-besteffort-pod91f78eea_c8fd_41e5_afcb_ce14bf2cfa56.slice - libcontainer container kubepods-besteffort-pod91f78eea_c8fd_41e5_afcb_ce14bf2cfa56.slice. May 14 00:00:28.555218 kubelet[3375]: I0514 00:00:28.553078 3375 topology_manager.go:215] "Topology Admit Handler" podUID="0cc85e76-1769-42c4-b307-0ac597ff20c5" podNamespace="kube-flannel" podName="kube-flannel-ds-q9c8x" May 14 00:00:28.571852 systemd[1]: Created slice kubepods-burstable-pod0cc85e76_1769_42c4_b307_0ac597ff20c5.slice - libcontainer container kubepods-burstable-pod0cc85e76_1769_42c4_b307_0ac597ff20c5.slice. 
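The kubelet startup messages earlier in this log (May 14 00:00:17.268527) mark --container-runtime-endpoint and --volume-plugin-dir as deprecated and point to the config file passed via --config. A minimal sketch of the equivalent KubeletConfiguration stanza, assuming the kubelet.config.k8s.io/v1beta1 schema used by kubelet v1.30; the endpoint and directory values are common containerd defaults and are illustrative rather than read from this node:

# excerpt from a kubelet config file referenced by --config (illustrative)
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces the deprecated --container-runtime-endpoint flag
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
# replaces the deprecated --volume-plugin-dir flag
volumePluginDir: /var/lib/kubelet/volumeplugins

The third deprecated flag, --pod-infra-container-image, has no config-file counterpart; per the notice above, the sandbox image information is expected to come from the CRI runtime itself.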
May 14 00:00:28.703591 kubelet[3375]: I0514 00:00:28.703529 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/91f78eea-c8fd-41e5-afcb-ce14bf2cfa56-lib-modules\") pod \"kube-proxy-v6f4x\" (UID: \"91f78eea-c8fd-41e5-afcb-ce14bf2cfa56\") " pod="kube-system/kube-proxy-v6f4x" May 14 00:00:28.703591 kubelet[3375]: I0514 00:00:28.703568 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zstvt\" (UniqueName: \"kubernetes.io/projected/91f78eea-c8fd-41e5-afcb-ce14bf2cfa56-kube-api-access-zstvt\") pod \"kube-proxy-v6f4x\" (UID: \"91f78eea-c8fd-41e5-afcb-ce14bf2cfa56\") " pod="kube-system/kube-proxy-v6f4x" May 14 00:00:28.703591 kubelet[3375]: I0514 00:00:28.703594 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0cc85e76-1769-42c4-b307-0ac597ff20c5-run\") pod \"kube-flannel-ds-q9c8x\" (UID: \"0cc85e76-1769-42c4-b307-0ac597ff20c5\") " pod="kube-flannel/kube-flannel-ds-q9c8x" May 14 00:00:28.703954 kubelet[3375]: I0514 00:00:28.703630 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/0cc85e76-1769-42c4-b307-0ac597ff20c5-xtables-lock\") pod \"kube-flannel-ds-q9c8x\" (UID: \"0cc85e76-1769-42c4-b307-0ac597ff20c5\") " pod="kube-flannel/kube-flannel-ds-q9c8x" May 14 00:00:28.703954 kubelet[3375]: I0514 00:00:28.703656 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-plugin\" (UniqueName: \"kubernetes.io/host-path/0cc85e76-1769-42c4-b307-0ac597ff20c5-cni-plugin\") pod \"kube-flannel-ds-q9c8x\" (UID: \"0cc85e76-1769-42c4-b307-0ac597ff20c5\") " pod="kube-flannel/kube-flannel-ds-q9c8x" May 14 00:00:28.703954 kubelet[3375]: I0514 00:00:28.703678 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/91f78eea-c8fd-41e5-afcb-ce14bf2cfa56-kube-proxy\") pod \"kube-proxy-v6f4x\" (UID: \"91f78eea-c8fd-41e5-afcb-ce14bf2cfa56\") " pod="kube-system/kube-proxy-v6f4x" May 14 00:00:28.703954 kubelet[3375]: I0514 00:00:28.703699 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/91f78eea-c8fd-41e5-afcb-ce14bf2cfa56-xtables-lock\") pod \"kube-proxy-v6f4x\" (UID: \"91f78eea-c8fd-41e5-afcb-ce14bf2cfa56\") " pod="kube-system/kube-proxy-v6f4x" May 14 00:00:28.703954 kubelet[3375]: I0514 00:00:28.703717 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni\" (UniqueName: \"kubernetes.io/host-path/0cc85e76-1769-42c4-b307-0ac597ff20c5-cni\") pod \"kube-flannel-ds-q9c8x\" (UID: \"0cc85e76-1769-42c4-b307-0ac597ff20c5\") " pod="kube-flannel/kube-flannel-ds-q9c8x" May 14 00:00:28.704116 kubelet[3375]: I0514 00:00:28.703741 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/0cc85e76-1769-42c4-b307-0ac597ff20c5-flannel-cfg\") pod \"kube-flannel-ds-q9c8x\" (UID: \"0cc85e76-1769-42c4-b307-0ac597ff20c5\") " pod="kube-flannel/kube-flannel-ds-q9c8x" May 14 00:00:28.704116 kubelet[3375]: I0514 00:00:28.703765 3375 reconciler_common.go:247] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cgdsk\" (UniqueName: \"kubernetes.io/projected/0cc85e76-1769-42c4-b307-0ac597ff20c5-kube-api-access-cgdsk\") pod \"kube-flannel-ds-q9c8x\" (UID: \"0cc85e76-1769-42c4-b307-0ac597ff20c5\") " pod="kube-flannel/kube-flannel-ds-q9c8x" May 14 00:00:28.864792 containerd[1723]: time="2025-05-14T00:00:28.864655789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v6f4x,Uid:91f78eea-c8fd-41e5-afcb-ce14bf2cfa56,Namespace:kube-system,Attempt:0,}" May 14 00:00:28.879367 containerd[1723]: time="2025-05-14T00:00:28.879322505Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-q9c8x,Uid:0cc85e76-1769-42c4-b307-0ac597ff20c5,Namespace:kube-flannel,Attempt:0,}" May 14 00:00:28.931311 containerd[1723]: time="2025-05-14T00:00:28.931256569Z" level=info msg="connecting to shim f8f44349138151143696f51195634ce3eff4112811f2fcf5a763af6c24a6a888" address="unix:///run/containerd/s/1544117b1ac625a689316c6e2884780a77763eedaf71a3682e199f2e475853e4" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:28.966022 containerd[1723]: time="2025-05-14T00:00:28.965883579Z" level=info msg="connecting to shim ecbac0a57137f70dc819eed45090a92429e04fc70e5eebbbd72ba4ec606e7c39" address="unix:///run/containerd/s/8578182ac73c6db94880f572aa197a9eeff3249d623bc4b754cf63447dc1ffac" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:28.966650 systemd[1]: Started cri-containerd-f8f44349138151143696f51195634ce3eff4112811f2fcf5a763af6c24a6a888.scope - libcontainer container f8f44349138151143696f51195634ce3eff4112811f2fcf5a763af6c24a6a888. May 14 00:00:29.006028 systemd[1]: Started cri-containerd-ecbac0a57137f70dc819eed45090a92429e04fc70e5eebbbd72ba4ec606e7c39.scope - libcontainer container ecbac0a57137f70dc819eed45090a92429e04fc70e5eebbbd72ba4ec606e7c39. 
May 14 00:00:29.014079 containerd[1723]: time="2025-05-14T00:00:29.014030687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-v6f4x,Uid:91f78eea-c8fd-41e5-afcb-ce14bf2cfa56,Namespace:kube-system,Attempt:0,} returns sandbox id \"f8f44349138151143696f51195634ce3eff4112811f2fcf5a763af6c24a6a888\"" May 14 00:00:29.019864 containerd[1723]: time="2025-05-14T00:00:29.019826673Z" level=info msg="CreateContainer within sandbox \"f8f44349138151143696f51195634ce3eff4112811f2fcf5a763af6c24a6a888\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" May 14 00:00:29.050805 containerd[1723]: time="2025-05-14T00:00:29.050761628Z" level=info msg="Container f8f7103a6501e0b6ed26693dc04a028f3f8b7d4cfe550c9380d48002ec7454b7: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:29.073159 containerd[1723]: time="2025-05-14T00:00:29.073054256Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-flannel-ds-q9c8x,Uid:0cc85e76-1769-42c4-b307-0ac597ff20c5,Namespace:kube-flannel,Attempt:0,} returns sandbox id \"ecbac0a57137f70dc819eed45090a92429e04fc70e5eebbbd72ba4ec606e7c39\"" May 14 00:00:29.075224 containerd[1723]: time="2025-05-14T00:00:29.075118686Z" level=info msg="CreateContainer within sandbox \"f8f44349138151143696f51195634ce3eff4112811f2fcf5a763af6c24a6a888\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f8f7103a6501e0b6ed26693dc04a028f3f8b7d4cfe550c9380d48002ec7454b7\"" May 14 00:00:29.075762 containerd[1723]: time="2025-05-14T00:00:29.075546092Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\"" May 14 00:00:29.076438 containerd[1723]: time="2025-05-14T00:00:29.075997999Z" level=info msg="StartContainer for \"f8f7103a6501e0b6ed26693dc04a028f3f8b7d4cfe550c9380d48002ec7454b7\"" May 14 00:00:29.078184 containerd[1723]: time="2025-05-14T00:00:29.078158131Z" level=info msg="connecting to shim f8f7103a6501e0b6ed26693dc04a028f3f8b7d4cfe550c9380d48002ec7454b7" address="unix:///run/containerd/s/1544117b1ac625a689316c6e2884780a77763eedaf71a3682e199f2e475853e4" protocol=ttrpc version=3 May 14 00:00:29.101595 systemd[1]: Started cri-containerd-f8f7103a6501e0b6ed26693dc04a028f3f8b7d4cfe550c9380d48002ec7454b7.scope - libcontainer container f8f7103a6501e0b6ed26693dc04a028f3f8b7d4cfe550c9380d48002ec7454b7. May 14 00:00:29.144363 containerd[1723]: time="2025-05-14T00:00:29.144246503Z" level=info msg="StartContainer for \"f8f7103a6501e0b6ed26693dc04a028f3f8b7d4cfe550c9380d48002ec7454b7\" returns successfully" May 14 00:00:29.414754 kubelet[3375]: I0514 00:00:29.413686 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-v6f4x" podStartSLOduration=1.413666068 podStartE2EDuration="1.413666068s" podCreationTimestamp="2025-05-14 00:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:29.413557666 +0000 UTC m=+12.192880956" watchObservedRunningTime="2025-05-14 00:00:29.413666068 +0000 UTC m=+12.192989358" May 14 00:00:31.006253 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3582446869.mount: Deactivated successfully. 
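The RunPodSandbox / CreateContainer / StartContainer sequence above (sandbox f8f44349... for kube-proxy-v6f4x, then container f8f7103a...) can be cross-checked from the node with crictl, the CRI client used against containerd. A short illustrative session, assuming crictl is pointed at the containerd socket (also configurable in /etc/crictl.yaml); the IDs are the ones printed in the log and may be abbreviated to any unique prefix:

# find the kube-proxy pod sandbox created above
crictl --runtime-endpoint unix:///run/containerd/containerd.sock pods --name kube-proxy-v6f4x
# list containers inside that sandbox
crictl ps --pod f8f44349138151143696f51195634ce3eff4112811f2fcf5a763af6c24a6a888
# tail the kube-proxy container that the log shows starting
crictl logs f8f7103a6501e0b6ed26693dc04a028f3f8b7d4cfe550c9380d48002ec7454b7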
May 14 00:00:31.090359 containerd[1723]: time="2025-05-14T00:00:31.090298938Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin:v1.1.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:31.093251 containerd[1723]: time="2025-05-14T00:00:31.093184880Z" level=info msg="stop pulling image docker.io/flannel/flannel-cni-plugin:v1.1.2: active requests=0, bytes read=3852937" May 14 00:00:31.098520 containerd[1723]: time="2025-05-14T00:00:31.098474158Z" level=info msg="ImageCreate event name:\"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:31.103526 containerd[1723]: time="2025-05-14T00:00:31.103466731Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:31.104235 containerd[1723]: time="2025-05-14T00:00:31.104085740Z" level=info msg="Pulled image \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" with image id \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\", repo tag \"docker.io/flannel/flannel-cni-plugin:v1.1.2\", repo digest \"docker.io/flannel/flannel-cni-plugin@sha256:bf4b62b131666d040f35a327d906ee5a3418280b68a88d9b9c7e828057210443\", size \"3842055\" in 2.028505747s" May 14 00:00:31.104235 containerd[1723]: time="2025-05-14T00:00:31.104124941Z" level=info msg="PullImage \"docker.io/flannel/flannel-cni-plugin:v1.1.2\" returns image reference \"sha256:7a2dcab94698c786e7e41360faf8cd0ea2b29952469be75becc34c61902240e0\"" May 14 00:00:31.107347 containerd[1723]: time="2025-05-14T00:00:31.106390974Z" level=info msg="CreateContainer within sandbox \"ecbac0a57137f70dc819eed45090a92429e04fc70e5eebbbd72ba4ec606e7c39\" for container &ContainerMetadata{Name:install-cni-plugin,Attempt:0,}" May 14 00:00:31.137154 containerd[1723]: time="2025-05-14T00:00:31.137105826Z" level=info msg="Container b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:31.152606 containerd[1723]: time="2025-05-14T00:00:31.152565654Z" level=info msg="CreateContainer within sandbox \"ecbac0a57137f70dc819eed45090a92429e04fc70e5eebbbd72ba4ec606e7c39\" for &ContainerMetadata{Name:install-cni-plugin,Attempt:0,} returns container id \"b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87\"" May 14 00:00:31.153863 containerd[1723]: time="2025-05-14T00:00:31.153141562Z" level=info msg="StartContainer for \"b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87\"" May 14 00:00:31.154356 containerd[1723]: time="2025-05-14T00:00:31.154323480Z" level=info msg="connecting to shim b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87" address="unix:///run/containerd/s/8578182ac73c6db94880f572aa197a9eeff3249d623bc4b754cf63447dc1ffac" protocol=ttrpc version=3 May 14 00:00:31.174571 systemd[1]: Started cri-containerd-b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87.scope - libcontainer container b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87. May 14 00:00:31.199173 systemd[1]: cri-containerd-b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87.scope: Deactivated successfully. 
May 14 00:00:31.203015 containerd[1723]: time="2025-05-14T00:00:31.202967195Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87\" id:\"b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87\" pid:3699 exited_at:{seconds:1747180831 nanos:202258685}" May 14 00:00:31.203332 containerd[1723]: time="2025-05-14T00:00:31.203297500Z" level=info msg="received exit event container_id:\"b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87\" id:\"b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87\" pid:3699 exited_at:{seconds:1747180831 nanos:202258685}" May 14 00:00:31.204687 containerd[1723]: time="2025-05-14T00:00:31.204612120Z" level=info msg="StartContainer for \"b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87\" returns successfully" May 14 00:00:31.408958 containerd[1723]: time="2025-05-14T00:00:31.408912026Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\"" May 14 00:00:31.924737 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b811ac1e3290afbae4de6d31e3bb4e650e29203b1d917b7df427df4121b03f87-rootfs.mount: Deactivated successfully. May 14 00:00:33.325100 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount520717310.mount: Deactivated successfully. May 14 00:00:34.333422 containerd[1723]: time="2025-05-14T00:00:34.333354856Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel:v0.22.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:34.336155 containerd[1723]: time="2025-05-14T00:00:34.336073896Z" level=info msg="stop pulling image docker.io/flannel/flannel:v0.22.0: active requests=0, bytes read=26866358" May 14 00:00:34.341493 containerd[1723]: time="2025-05-14T00:00:34.341432275Z" level=info msg="ImageCreate event name:\"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:34.347146 containerd[1723]: time="2025-05-14T00:00:34.347115258Z" level=info msg="ImageCreate event name:\"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" May 14 00:00:34.348106 containerd[1723]: time="2025-05-14T00:00:34.347964671Z" level=info msg="Pulled image \"docker.io/flannel/flannel:v0.22.0\" with image id \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\", repo tag \"docker.io/flannel/flannel:v0.22.0\", repo digest \"docker.io/flannel/flannel@sha256:5f83f1243057458e27249157394e3859cf31cc075354af150d497f2ebc8b54db\", size \"26855532\" in 2.938998445s" May 14 00:00:34.348106 containerd[1723]: time="2025-05-14T00:00:34.348003171Z" level=info msg="PullImage \"docker.io/flannel/flannel:v0.22.0\" returns image reference \"sha256:38c11b8f4aa1904512c0b3e93d34604de20ba24b38d4365d27fe05b7a4ce6f68\"" May 14 00:00:34.350785 containerd[1723]: time="2025-05-14T00:00:34.350333406Z" level=info msg="CreateContainer within sandbox \"ecbac0a57137f70dc819eed45090a92429e04fc70e5eebbbd72ba4ec606e7c39\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" May 14 00:00:34.373196 containerd[1723]: time="2025-05-14T00:00:34.373156141Z" level=info msg="Container 6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:34.391558 containerd[1723]: time="2025-05-14T00:00:34.391512612Z" level=info msg="CreateContainer within sandbox 
\"ecbac0a57137f70dc819eed45090a92429e04fc70e5eebbbd72ba4ec606e7c39\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e\"" May 14 00:00:34.393013 containerd[1723]: time="2025-05-14T00:00:34.392069320Z" level=info msg="StartContainer for \"6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e\"" May 14 00:00:34.393220 containerd[1723]: time="2025-05-14T00:00:34.393194636Z" level=info msg="connecting to shim 6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e" address="unix:///run/containerd/s/8578182ac73c6db94880f572aa197a9eeff3249d623bc4b754cf63447dc1ffac" protocol=ttrpc version=3 May 14 00:00:34.416549 systemd[1]: Started cri-containerd-6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e.scope - libcontainer container 6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e. May 14 00:00:34.441182 systemd[1]: cri-containerd-6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e.scope: Deactivated successfully. May 14 00:00:34.443810 containerd[1723]: time="2025-05-14T00:00:34.443766180Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e\" id:\"6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e\" pid:3769 exited_at:{seconds:1747180834 nanos:443426375}" May 14 00:00:34.447198 containerd[1723]: time="2025-05-14T00:00:34.446642423Z" level=info msg="received exit event container_id:\"6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e\" id:\"6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e\" pid:3769 exited_at:{seconds:1747180834 nanos:443426375}" May 14 00:00:34.448417 containerd[1723]: time="2025-05-14T00:00:34.448267547Z" level=info msg="StartContainer for \"6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e\" returns successfully" May 14 00:00:34.465868 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6c693bc2cf9665579fe48a0c39dfd7bd821ac35b806a07e673df452a1b9bc02e-rootfs.mount: Deactivated successfully. May 14 00:00:34.481266 kubelet[3375]: I0514 00:00:34.481026 3375 kubelet_node_status.go:497] "Fast updating node status as it just became ready" May 14 00:00:34.521692 kubelet[3375]: I0514 00:00:34.513924 3375 topology_manager.go:215] "Topology Admit Handler" podUID="ce54856d-b282-4f99-86cb-133510b2d8c9" podNamespace="kube-system" podName="coredns-7db6d8ff4d-rzv5m" May 14 00:00:34.521692 kubelet[3375]: I0514 00:00:34.520093 3375 topology_manager.go:215] "Topology Admit Handler" podUID="371b51b5-a276-4440-82e5-40cbbdbeadd2" podNamespace="kube-system" podName="coredns-7db6d8ff4d-fd2nv" May 14 00:00:34.524480 systemd[1]: Created slice kubepods-burstable-podce54856d_b282_4f99_86cb_133510b2d8c9.slice - libcontainer container kubepods-burstable-podce54856d_b282_4f99_86cb_133510b2d8c9.slice. May 14 00:00:34.540303 systemd[1]: Created slice kubepods-burstable-pod371b51b5_a276_4440_82e5_40cbbdbeadd2.slice - libcontainer container kubepods-burstable-pod371b51b5_a276_4440_82e5_40cbbdbeadd2.slice. 
May 14 00:00:34.639944 kubelet[3375]: I0514 00:00:34.639651 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce54856d-b282-4f99-86cb-133510b2d8c9-config-volume\") pod \"coredns-7db6d8ff4d-rzv5m\" (UID: \"ce54856d-b282-4f99-86cb-133510b2d8c9\") " pod="kube-system/coredns-7db6d8ff4d-rzv5m" May 14 00:00:34.639944 kubelet[3375]: I0514 00:00:34.639760 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tq2tq\" (UniqueName: \"kubernetes.io/projected/ce54856d-b282-4f99-86cb-133510b2d8c9-kube-api-access-tq2tq\") pod \"coredns-7db6d8ff4d-rzv5m\" (UID: \"ce54856d-b282-4f99-86cb-133510b2d8c9\") " pod="kube-system/coredns-7db6d8ff4d-rzv5m" May 14 00:00:34.639944 kubelet[3375]: I0514 00:00:34.639793 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8bzr6\" (UniqueName: \"kubernetes.io/projected/371b51b5-a276-4440-82e5-40cbbdbeadd2-kube-api-access-8bzr6\") pod \"coredns-7db6d8ff4d-fd2nv\" (UID: \"371b51b5-a276-4440-82e5-40cbbdbeadd2\") " pod="kube-system/coredns-7db6d8ff4d-fd2nv" May 14 00:00:34.639944 kubelet[3375]: I0514 00:00:34.639820 3375 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/371b51b5-a276-4440-82e5-40cbbdbeadd2-config-volume\") pod \"coredns-7db6d8ff4d-fd2nv\" (UID: \"371b51b5-a276-4440-82e5-40cbbdbeadd2\") " pod="kube-system/coredns-7db6d8ff4d-fd2nv" May 14 00:00:34.836169 containerd[1723]: time="2025-05-14T00:00:34.836114555Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rzv5m,Uid:ce54856d-b282-4f99-86cb-133510b2d8c9,Namespace:kube-system,Attempt:0,}" May 14 00:00:34.842861 containerd[1723]: time="2025-05-14T00:00:34.842816864Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fd2nv,Uid:371b51b5-a276-4440-82e5-40cbbdbeadd2,Namespace:kube-system,Attempt:0,}" May 14 00:00:35.054484 containerd[1723]: time="2025-05-14T00:00:35.054275494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rzv5m,Uid:ce54856d-b282-4f99-86cb-133510b2d8c9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"410dff175b7b03a09ac7b40450cbcb4b5bd8e0d14aec424a71ecbd5808d06a35\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 00:00:35.054866 kubelet[3375]: E0514 00:00:35.054582 3375 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"410dff175b7b03a09ac7b40450cbcb4b5bd8e0d14aec424a71ecbd5808d06a35\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 00:00:35.054866 kubelet[3375]: E0514 00:00:35.054678 3375 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"410dff175b7b03a09ac7b40450cbcb4b5bd8e0d14aec424a71ecbd5808d06a35\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-rzv5m" May 14 00:00:35.054866 kubelet[3375]: E0514 00:00:35.054778 3375 kuberuntime_manager.go:1166] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"410dff175b7b03a09ac7b40450cbcb4b5bd8e0d14aec424a71ecbd5808d06a35\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-rzv5m" May 14 00:00:35.055260 kubelet[3375]: E0514 00:00:35.054853 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-rzv5m_kube-system(ce54856d-b282-4f99-86cb-133510b2d8c9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-rzv5m_kube-system(ce54856d-b282-4f99-86cb-133510b2d8c9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"410dff175b7b03a09ac7b40450cbcb4b5bd8e0d14aec424a71ecbd5808d06a35\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" pod="kube-system/coredns-7db6d8ff4d-rzv5m" podUID="ce54856d-b282-4f99-86cb-133510b2d8c9" May 14 00:00:35.064555 containerd[1723]: time="2025-05-14T00:00:35.064506660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fd2nv,Uid:371b51b5-a276-4440-82e5-40cbbdbeadd2,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa4fcb3d8cc35a77cfda7c93c090cc7192ac8c06c25fd0508ca5922d876c910a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 00:00:35.064798 kubelet[3375]: E0514 00:00:35.064757 3375 remote_runtime.go:193] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa4fcb3d8cc35a77cfda7c93c090cc7192ac8c06c25fd0508ca5922d876c910a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" May 14 00:00:35.064896 kubelet[3375]: E0514 00:00:35.064827 3375 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa4fcb3d8cc35a77cfda7c93c090cc7192ac8c06c25fd0508ca5922d876c910a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-fd2nv" May 14 00:00:35.064896 kubelet[3375]: E0514 00:00:35.064850 3375 kuberuntime_manager.go:1166] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa4fcb3d8cc35a77cfda7c93c090cc7192ac8c06c25fd0508ca5922d876c910a\": plugin type=\"flannel\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory" pod="kube-system/coredns-7db6d8ff4d-fd2nv" May 14 00:00:35.064989 kubelet[3375]: E0514 00:00:35.064916 3375 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7db6d8ff4d-fd2nv_kube-system(371b51b5-a276-4440-82e5-40cbbdbeadd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7db6d8ff4d-fd2nv_kube-system(371b51b5-a276-4440-82e5-40cbbdbeadd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa4fcb3d8cc35a77cfda7c93c090cc7192ac8c06c25fd0508ca5922d876c910a\\\": plugin type=\\\"flannel\\\" failed (add): loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory\"" 
pod="kube-system/coredns-7db6d8ff4d-fd2nv" podUID="371b51b5-a276-4440-82e5-40cbbdbeadd2" May 14 00:00:35.424681 containerd[1723]: time="2025-05-14T00:00:35.423057076Z" level=info msg="CreateContainer within sandbox \"ecbac0a57137f70dc819eed45090a92429e04fc70e5eebbbd72ba4ec606e7c39\" for container &ContainerMetadata{Name:kube-flannel,Attempt:0,}" May 14 00:00:35.442753 containerd[1723]: time="2025-05-14T00:00:35.442697895Z" level=info msg="Container 624cdcbfc26cf6e4fe5a85a936e02d11ed722e4de82d8007f6748eb7994098a1: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:35.460030 containerd[1723]: time="2025-05-14T00:00:35.459990475Z" level=info msg="CreateContainer within sandbox \"ecbac0a57137f70dc819eed45090a92429e04fc70e5eebbbd72ba4ec606e7c39\" for &ContainerMetadata{Name:kube-flannel,Attempt:0,} returns container id \"624cdcbfc26cf6e4fe5a85a936e02d11ed722e4de82d8007f6748eb7994098a1\"" May 14 00:00:35.460676 containerd[1723]: time="2025-05-14T00:00:35.460531684Z" level=info msg="StartContainer for \"624cdcbfc26cf6e4fe5a85a936e02d11ed722e4de82d8007f6748eb7994098a1\"" May 14 00:00:35.462116 containerd[1723]: time="2025-05-14T00:00:35.462052809Z" level=info msg="connecting to shim 624cdcbfc26cf6e4fe5a85a936e02d11ed722e4de82d8007f6748eb7994098a1" address="unix:///run/containerd/s/8578182ac73c6db94880f572aa197a9eeff3249d623bc4b754cf63447dc1ffac" protocol=ttrpc version=3 May 14 00:00:35.482586 systemd[1]: Started cri-containerd-624cdcbfc26cf6e4fe5a85a936e02d11ed722e4de82d8007f6748eb7994098a1.scope - libcontainer container 624cdcbfc26cf6e4fe5a85a936e02d11ed722e4de82d8007f6748eb7994098a1. May 14 00:00:35.513901 containerd[1723]: time="2025-05-14T00:00:35.513857549Z" level=info msg="StartContainer for \"624cdcbfc26cf6e4fe5a85a936e02d11ed722e4de82d8007f6748eb7994098a1\" returns successfully" May 14 00:00:36.582670 systemd-networkd[1610]: flannel.1: Link UP May 14 00:00:36.582680 systemd-networkd[1610]: flannel.1: Gained carrier May 14 00:00:37.344891 kubelet[3375]: I0514 00:00:37.344823 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-flannel/kube-flannel-ds-q9c8x" podStartSLOduration=4.070575243 podStartE2EDuration="9.344804548s" podCreationTimestamp="2025-05-14 00:00:28 +0000 UTC" firstStartedPulling="2025-05-14 00:00:29.07471148 +0000 UTC m=+11.854034770" lastFinishedPulling="2025-05-14 00:00:34.348940785 +0000 UTC m=+17.128264075" observedRunningTime="2025-05-14 00:00:36.449367023 +0000 UTC m=+19.228690413" watchObservedRunningTime="2025-05-14 00:00:37.344804548 +0000 UTC m=+20.124127938" May 14 00:00:38.327580 systemd-networkd[1610]: flannel.1: Gained IPv6LL May 14 00:00:47.325130 containerd[1723]: time="2025-05-14T00:00:47.324680963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rzv5m,Uid:ce54856d-b282-4f99-86cb-133510b2d8c9,Namespace:kube-system,Attempt:0,}" May 14 00:00:47.347362 systemd-networkd[1610]: cni0: Link UP May 14 00:00:47.347371 systemd-networkd[1610]: cni0: Gained carrier May 14 00:00:47.351057 systemd-networkd[1610]: cni0: Lost carrier May 14 00:00:47.432417 systemd-networkd[1610]: veth9df35212: Link UP May 14 00:00:47.437119 kernel: cni0: port 1(veth9df35212) entered blocking state May 14 00:00:47.437283 kernel: cni0: port 1(veth9df35212) entered disabled state May 14 00:00:47.438313 kernel: veth9df35212: entered allmulticast mode May 14 00:00:47.440375 kernel: veth9df35212: entered promiscuous mode May 14 00:00:47.440617 kernel: cni0: port 1(veth9df35212) entered blocking state May 14 00:00:47.444479 kernel: 
cni0: port 1(veth9df35212) entered forwarding state May 14 00:00:47.444535 kernel: cni0: port 1(veth9df35212) entered disabled state May 14 00:00:47.455232 kernel: cni0: port 1(veth9df35212) entered blocking state May 14 00:00:47.455311 kernel: cni0: port 1(veth9df35212) entered forwarding state May 14 00:00:47.455517 systemd-networkd[1610]: veth9df35212: Gained carrier May 14 00:00:47.456114 systemd-networkd[1610]: cni0: Gained carrier May 14 00:00:47.457661 containerd[1723]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009a8e8), "name":"cbr0", "type":"bridge"} May 14 00:00:47.457661 containerd[1723]: delegateAdd: netconf sent to delegate plugin: May 14 00:00:47.515625 containerd[1723]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-14T00:00:47.515572402Z" level=info msg="connecting to shim 12b0bcae0980fe8c951ca7beb6e2c68daf61471f7bc4d270a8438a40792ab76f" address="unix:///run/containerd/s/9845809bb45a68d5c245fce2effe1733864b6498fd779e4a6c65c1c174d85794" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:47.541596 systemd[1]: Started cri-containerd-12b0bcae0980fe8c951ca7beb6e2c68daf61471f7bc4d270a8438a40792ab76f.scope - libcontainer container 12b0bcae0980fe8c951ca7beb6e2c68daf61471f7bc4d270a8438a40792ab76f. 
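The earlier RunPodSandbox failures for the coredns pods (May 14 00:00:35, "loadFlannelSubnetEnv failed: open /run/flannel/subnet.env: no such file or directory") clear up here because the kube-flannel container has since started and written its subnet file, which the flannel CNI plugin reads before delegating to the bridge config dumped above. For reference, a typical /run/flannel/subnet.env has the shape sketched below; the values are illustrative, inferred from the 192.168.0.0/24 pod CIDR, the 192.168.0.0/17 route, and mtu 1450 visible in this log rather than copied from the host:

# /run/flannel/subnet.env (illustrative sketch)
FLANNEL_NETWORK=192.168.0.0/17
FLANNEL_SUBNET=192.168.0.1/24
FLANNEL_MTU=1450
FLANNEL_IPMASQ=true

Once this file exists, sandbox creation for coredns succeeds, which is why the cni0 bridge and veth interfaces only appear at this point in the log.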
May 14 00:00:47.591382 containerd[1723]: time="2025-05-14T00:00:47.590975903Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-rzv5m,Uid:ce54856d-b282-4f99-86cb-133510b2d8c9,Namespace:kube-system,Attempt:0,} returns sandbox id \"12b0bcae0980fe8c951ca7beb6e2c68daf61471f7bc4d270a8438a40792ab76f\"" May 14 00:00:47.594309 containerd[1723]: time="2025-05-14T00:00:47.594261355Z" level=info msg="CreateContainer within sandbox \"12b0bcae0980fe8c951ca7beb6e2c68daf61471f7bc4d270a8438a40792ab76f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:00:47.624388 containerd[1723]: time="2025-05-14T00:00:47.623564122Z" level=info msg="Container c4bebc4379678e7c7546ad6a9e40ce781b02bbcf2716a45c9774c26bd221468e: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:47.647762 containerd[1723]: time="2025-05-14T00:00:47.647721306Z" level=info msg="CreateContainer within sandbox \"12b0bcae0980fe8c951ca7beb6e2c68daf61471f7bc4d270a8438a40792ab76f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c4bebc4379678e7c7546ad6a9e40ce781b02bbcf2716a45c9774c26bd221468e\"" May 14 00:00:47.648314 containerd[1723]: time="2025-05-14T00:00:47.648229714Z" level=info msg="StartContainer for \"c4bebc4379678e7c7546ad6a9e40ce781b02bbcf2716a45c9774c26bd221468e\"" May 14 00:00:47.649437 containerd[1723]: time="2025-05-14T00:00:47.649339532Z" level=info msg="connecting to shim c4bebc4379678e7c7546ad6a9e40ce781b02bbcf2716a45c9774c26bd221468e" address="unix:///run/containerd/s/9845809bb45a68d5c245fce2effe1733864b6498fd779e4a6c65c1c174d85794" protocol=ttrpc version=3 May 14 00:00:47.669588 systemd[1]: Started cri-containerd-c4bebc4379678e7c7546ad6a9e40ce781b02bbcf2716a45c9774c26bd221468e.scope - libcontainer container c4bebc4379678e7c7546ad6a9e40ce781b02bbcf2716a45c9774c26bd221468e. May 14 00:00:47.701461 containerd[1723]: time="2025-05-14T00:00:47.700270743Z" level=info msg="StartContainer for \"c4bebc4379678e7c7546ad6a9e40ce781b02bbcf2716a45c9774c26bd221468e\" returns successfully" May 14 00:00:48.324813 containerd[1723]: time="2025-05-14T00:00:48.324755085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fd2nv,Uid:371b51b5-a276-4440-82e5-40cbbdbeadd2,Namespace:kube-system,Attempt:0,}" May 14 00:00:48.338819 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount82883257.mount: Deactivated successfully. 
May 14 00:00:48.360578 kernel: cni0: port 2(veth6b7bd30b) entered blocking state May 14 00:00:48.360691 kernel: cni0: port 2(veth6b7bd30b) entered disabled state May 14 00:00:48.360722 kernel: veth6b7bd30b: entered allmulticast mode May 14 00:00:48.362393 kernel: veth6b7bd30b: entered promiscuous mode May 14 00:00:48.365503 systemd-networkd[1610]: veth6b7bd30b: Link UP May 14 00:00:48.373764 kernel: cni0: port 2(veth6b7bd30b) entered blocking state May 14 00:00:48.373849 kernel: cni0: port 2(veth6b7bd30b) entered forwarding state May 14 00:00:48.374044 systemd-networkd[1610]: veth6b7bd30b: Gained carrier May 14 00:00:48.375776 containerd[1723]: map[string]interface {}{"cniVersion":"0.3.1", "hairpinMode":true, "ipMasq":false, "ipam":map[string]interface {}{"ranges":[][]map[string]interface {}{[]map[string]interface {}{map[string]interface {}{"subnet":"192.168.0.0/24"}}}, "routes":[]types.Route{types.Route{Dst:net.IPNet{IP:net.IP{0xc0, 0xa8, 0x0, 0x0}, Mask:net.IPMask{0xff, 0xff, 0x80, 0x0}}, GW:net.IP(nil)}}, "type":"host-local"}, "isDefaultGateway":true, "isGateway":true, "mtu":(*uint)(0xc00009c8e8), "name":"cbr0", "type":"bridge"} May 14 00:00:48.375776 containerd[1723]: delegateAdd: netconf sent to delegate plugin: May 14 00:00:48.450264 containerd[1723]: {"cniVersion":"0.3.1","hairpinMode":true,"ipMasq":false,"ipam":{"ranges":[[{"subnet":"192.168.0.0/24"}]],"routes":[{"dst":"192.168.0.0/17"}],"type":"host-local"},"isDefaultGateway":true,"isGateway":true,"mtu":1450,"name":"cbr0","type":"bridge"}time="2025-05-14T00:00:48.450174282Z" level=info msg="connecting to shim 42103f162b40e65b2cc46d060c66b46367ef4b8f9cd8cf94500a48402240c0dc" address="unix:///run/containerd/s/cc10ee2a07ee0d90122f8a4a94c5b0eebd222e039f46eb15bfc0611a2c4f3221" namespace=k8s.io protocol=ttrpc version=3 May 14 00:00:48.484607 kubelet[3375]: I0514 00:00:48.484518 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-rzv5m" podStartSLOduration=20.484497628 podStartE2EDuration="20.484497628s" podCreationTimestamp="2025-05-14 00:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:48.483712716 +0000 UTC m=+31.263036006" watchObservedRunningTime="2025-05-14 00:00:48.484497628 +0000 UTC m=+31.263820918" May 14 00:00:48.496011 systemd[1]: Started cri-containerd-42103f162b40e65b2cc46d060c66b46367ef4b8f9cd8cf94500a48402240c0dc.scope - libcontainer container 42103f162b40e65b2cc46d060c66b46367ef4b8f9cd8cf94500a48402240c0dc. 
May 14 00:00:48.503580 systemd-networkd[1610]: cni0: Gained IPv6LL May 14 00:00:48.559468 containerd[1723]: time="2025-05-14T00:00:48.559326219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7db6d8ff4d-fd2nv,Uid:371b51b5-a276-4440-82e5-40cbbdbeadd2,Namespace:kube-system,Attempt:0,} returns sandbox id \"42103f162b40e65b2cc46d060c66b46367ef4b8f9cd8cf94500a48402240c0dc\"" May 14 00:00:48.565313 containerd[1723]: time="2025-05-14T00:00:48.563827491Z" level=info msg="CreateContainer within sandbox \"42103f162b40e65b2cc46d060c66b46367ef4b8f9cd8cf94500a48402240c0dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" May 14 00:00:48.589151 containerd[1723]: time="2025-05-14T00:00:48.589055593Z" level=info msg="Container 059096409fd02a735fd0a348ecc5146729c5b20282cf8fc438ed19aa228d69b4: CDI devices from CRI Config.CDIDevices: []" May 14 00:00:48.608090 containerd[1723]: time="2025-05-14T00:00:48.608047795Z" level=info msg="CreateContainer within sandbox \"42103f162b40e65b2cc46d060c66b46367ef4b8f9cd8cf94500a48402240c0dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"059096409fd02a735fd0a348ecc5146729c5b20282cf8fc438ed19aa228d69b4\"" May 14 00:00:48.608720 containerd[1723]: time="2025-05-14T00:00:48.608695905Z" level=info msg="StartContainer for \"059096409fd02a735fd0a348ecc5146729c5b20282cf8fc438ed19aa228d69b4\"" May 14 00:00:48.610087 containerd[1723]: time="2025-05-14T00:00:48.609603920Z" level=info msg="connecting to shim 059096409fd02a735fd0a348ecc5146729c5b20282cf8fc438ed19aa228d69b4" address="unix:///run/containerd/s/cc10ee2a07ee0d90122f8a4a94c5b0eebd222e039f46eb15bfc0611a2c4f3221" protocol=ttrpc version=3 May 14 00:00:48.628636 systemd[1]: Started cri-containerd-059096409fd02a735fd0a348ecc5146729c5b20282cf8fc438ed19aa228d69b4.scope - libcontainer container 059096409fd02a735fd0a348ecc5146729c5b20282cf8fc438ed19aa228d69b4. May 14 00:00:48.658283 containerd[1723]: time="2025-05-14T00:00:48.658225794Z" level=info msg="StartContainer for \"059096409fd02a735fd0a348ecc5146729c5b20282cf8fc438ed19aa228d69b4\" returns successfully" May 14 00:00:49.335584 systemd-networkd[1610]: veth9df35212: Gained IPv6LL May 14 00:00:49.719571 systemd-networkd[1610]: veth6b7bd30b: Gained IPv6LL May 14 00:00:54.857433 kubelet[3375]: I0514 00:00:54.855522 3375 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-fd2nv" podStartSLOduration=26.855498135 podStartE2EDuration="26.855498135s" podCreationTimestamp="2025-05-14 00:00:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-05-14 00:00:49.488421611 +0000 UTC m=+32.267744901" watchObservedRunningTime="2025-05-14 00:00:54.855498135 +0000 UTC m=+37.634821525" May 14 00:01:52.462714 systemd[1]: Started sshd@5-10.200.8.49:22-10.200.16.10:54716.service - OpenSSH per-connection server daemon (10.200.16.10:54716). May 14 00:01:53.093705 sshd[4500]: Accepted publickey for core from 10.200.16.10 port 54716 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98 May 14 00:01:53.095202 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) May 14 00:01:53.099495 systemd-logind[1697]: New session 8 of user core. May 14 00:01:53.104588 systemd[1]: Started session-8.scope - Session 8 of User core. 
May 14 00:01:53.600813 sshd[4502]: Connection closed by 10.200.16.10 port 54716
May 14 00:01:53.601828 sshd-session[4500]: pam_unix(sshd:session): session closed for user core
May 14 00:01:53.605385 systemd[1]: sshd@5-10.200.8.49:22-10.200.16.10:54716.service: Deactivated successfully.
May 14 00:01:53.608082 systemd[1]: session-8.scope: Deactivated successfully.
May 14 00:01:53.610052 systemd-logind[1697]: Session 8 logged out. Waiting for processes to exit.
May 14 00:01:53.611317 systemd-logind[1697]: Removed session 8.
May 14 00:01:58.713264 systemd[1]: Started sshd@6-10.200.8.49:22-10.200.16.10:59256.service - OpenSSH per-connection server daemon (10.200.16.10:59256).
May 14 00:01:59.344861 sshd[4536]: Accepted publickey for core from 10.200.16.10 port 59256 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:01:59.346289 sshd-session[4536]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:01:59.350576 systemd-logind[1697]: New session 9 of user core.
May 14 00:01:59.357579 systemd[1]: Started session-9.scope - Session 9 of User core.
May 14 00:01:59.844568 sshd[4540]: Connection closed by 10.200.16.10 port 59256
May 14 00:01:59.845539 sshd-session[4536]: pam_unix(sshd:session): session closed for user core
May 14 00:01:59.850387 systemd[1]: sshd@6-10.200.8.49:22-10.200.16.10:59256.service: Deactivated successfully.
May 14 00:01:59.853024 systemd[1]: session-9.scope: Deactivated successfully.
May 14 00:01:59.854069 systemd-logind[1697]: Session 9 logged out. Waiting for processes to exit.
May 14 00:01:59.855270 systemd-logind[1697]: Removed session 9.
May 14 00:02:04.961686 systemd[1]: Started sshd@7-10.200.8.49:22-10.200.16.10:59266.service - OpenSSH per-connection server daemon (10.200.16.10:59266).
May 14 00:02:05.592973 sshd[4574]: Accepted publickey for core from 10.200.16.10 port 59266 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:02:05.594757 sshd-session[4574]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:05.599153 systemd-logind[1697]: New session 10 of user core.
May 14 00:02:05.604590 systemd[1]: Started session-10.scope - Session 10 of User core.
May 14 00:02:06.091432 sshd[4576]: Connection closed by 10.200.16.10 port 59266
May 14 00:02:06.092193 sshd-session[4574]: pam_unix(sshd:session): session closed for user core
May 14 00:02:06.095154 systemd[1]: sshd@7-10.200.8.49:22-10.200.16.10:59266.service: Deactivated successfully.
May 14 00:02:06.097476 systemd[1]: session-10.scope: Deactivated successfully.
May 14 00:02:06.099335 systemd-logind[1697]: Session 10 logged out. Waiting for processes to exit.
May 14 00:02:06.100318 systemd-logind[1697]: Removed session 10.
May 14 00:02:06.228805 systemd[1]: Started sshd@8-10.200.8.49:22-10.200.16.10:59278.service - OpenSSH per-connection server daemon (10.200.16.10:59278).
May 14 00:02:06.864278 sshd[4589]: Accepted publickey for core from 10.200.16.10 port 59278 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:02:06.865642 sshd-session[4589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:06.870595 systemd-logind[1697]: New session 11 of user core.
May 14 00:02:06.875577 systemd[1]: Started session-11.scope - Session 11 of User core.
May 14 00:02:07.393812 sshd[4597]: Connection closed by 10.200.16.10 port 59278
May 14 00:02:07.394563 sshd-session[4589]: pam_unix(sshd:session): session closed for user core
May 14 00:02:07.397974 systemd[1]: sshd@8-10.200.8.49:22-10.200.16.10:59278.service: Deactivated successfully.
May 14 00:02:07.400358 systemd[1]: session-11.scope: Deactivated successfully.
May 14 00:02:07.402262 systemd-logind[1697]: Session 11 logged out. Waiting for processes to exit.
May 14 00:02:07.403425 systemd-logind[1697]: Removed session 11.
May 14 00:02:07.505614 systemd[1]: Started sshd@9-10.200.8.49:22-10.200.16.10:59286.service - OpenSSH per-connection server daemon (10.200.16.10:59286).
May 14 00:02:08.135637 sshd[4622]: Accepted publickey for core from 10.200.16.10 port 59286 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:02:08.137272 sshd-session[4622]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:08.143054 systemd-logind[1697]: New session 12 of user core.
May 14 00:02:08.148564 systemd[1]: Started session-12.scope - Session 12 of User core.
May 14 00:02:08.636300 sshd[4624]: Connection closed by 10.200.16.10 port 59286
May 14 00:02:08.637145 sshd-session[4622]: pam_unix(sshd:session): session closed for user core
May 14 00:02:08.641657 systemd[1]: sshd@9-10.200.8.49:22-10.200.16.10:59286.service: Deactivated successfully.
May 14 00:02:08.643984 systemd[1]: session-12.scope: Deactivated successfully.
May 14 00:02:08.645017 systemd-logind[1697]: Session 12 logged out. Waiting for processes to exit.
May 14 00:02:08.646375 systemd-logind[1697]: Removed session 12.
May 14 00:02:13.748914 systemd[1]: Started sshd@10-10.200.8.49:22-10.200.16.10:41998.service - OpenSSH per-connection server daemon (10.200.16.10:41998).
May 14 00:02:14.382493 sshd[4657]: Accepted publickey for core from 10.200.16.10 port 41998 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:02:14.383923 sshd-session[4657]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:14.389157 systemd-logind[1697]: New session 13 of user core.
May 14 00:02:14.392564 systemd[1]: Started session-13.scope - Session 13 of User core.
May 14 00:02:14.881147 sshd[4659]: Connection closed by 10.200.16.10 port 41998
May 14 00:02:14.881998 sshd-session[4657]: pam_unix(sshd:session): session closed for user core
May 14 00:02:14.885536 systemd[1]: sshd@10-10.200.8.49:22-10.200.16.10:41998.service: Deactivated successfully.
May 14 00:02:14.888153 systemd[1]: session-13.scope: Deactivated successfully.
May 14 00:02:14.890177 systemd-logind[1697]: Session 13 logged out. Waiting for processes to exit.
May 14 00:02:14.891193 systemd-logind[1697]: Removed session 13.
May 14 00:02:19.996666 systemd[1]: Started sshd@11-10.200.8.49:22-10.200.16.10:44804.service - OpenSSH per-connection server daemon (10.200.16.10:44804).
May 14 00:02:20.627112 sshd[4694]: Accepted publickey for core from 10.200.16.10 port 44804 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:02:20.628821 sshd-session[4694]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:20.633914 systemd-logind[1697]: New session 14 of user core.
May 14 00:02:20.638587 systemd[1]: Started session-14.scope - Session 14 of User core.
May 14 00:02:21.126026 sshd[4696]: Connection closed by 10.200.16.10 port 44804
May 14 00:02:21.126966 sshd-session[4694]: pam_unix(sshd:session): session closed for user core
May 14 00:02:21.130629 systemd[1]: sshd@11-10.200.8.49:22-10.200.16.10:44804.service: Deactivated successfully.
May 14 00:02:21.133102 systemd[1]: session-14.scope: Deactivated successfully.
May 14 00:02:21.135236 systemd-logind[1697]: Session 14 logged out. Waiting for processes to exit.
May 14 00:02:21.136228 systemd-logind[1697]: Removed session 14.
May 14 00:02:21.241652 systemd[1]: Started sshd@12-10.200.8.49:22-10.200.16.10:44814.service - OpenSSH per-connection server daemon (10.200.16.10:44814).
May 14 00:02:21.875230 sshd[4708]: Accepted publickey for core from 10.200.16.10 port 44814 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:02:21.875922 sshd-session[4708]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:21.882677 systemd-logind[1697]: New session 15 of user core.
May 14 00:02:21.890547 systemd[1]: Started session-15.scope - Session 15 of User core.
May 14 00:02:22.445927 sshd[4716]: Connection closed by 10.200.16.10 port 44814
May 14 00:02:22.446725 sshd-session[4708]: pam_unix(sshd:session): session closed for user core
May 14 00:02:22.450661 systemd[1]: sshd@12-10.200.8.49:22-10.200.16.10:44814.service: Deactivated successfully.
May 14 00:02:22.452803 systemd[1]: session-15.scope: Deactivated successfully.
May 14 00:02:22.453694 systemd-logind[1697]: Session 15 logged out. Waiting for processes to exit.
May 14 00:02:22.454731 systemd-logind[1697]: Removed session 15.
May 14 00:02:22.558565 systemd[1]: Started sshd@13-10.200.8.49:22-10.200.16.10:44816.service - OpenSSH per-connection server daemon (10.200.16.10:44816).
May 14 00:02:23.193905 sshd[4740]: Accepted publickey for core from 10.200.16.10 port 44816 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:02:23.195667 sshd-session[4740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:23.200099 systemd-logind[1697]: New session 16 of user core.
May 14 00:02:23.204593 systemd[1]: Started session-16.scope - Session 16 of User core.
May 14 00:02:25.026149 sshd[4742]: Connection closed by 10.200.16.10 port 44816
May 14 00:02:25.027127 sshd-session[4740]: pam_unix(sshd:session): session closed for user core
May 14 00:02:25.031282 systemd[1]: sshd@13-10.200.8.49:22-10.200.16.10:44816.service: Deactivated successfully.
May 14 00:02:25.033892 systemd[1]: session-16.scope: Deactivated successfully.
May 14 00:02:25.034920 systemd-logind[1697]: Session 16 logged out. Waiting for processes to exit.
May 14 00:02:25.036026 systemd-logind[1697]: Removed session 16.
May 14 00:02:25.142957 systemd[1]: Started sshd@14-10.200.8.49:22-10.200.16.10:44818.service - OpenSSH per-connection server daemon (10.200.16.10:44818).
May 14 00:02:25.778378 sshd[4758]: Accepted publickey for core from 10.200.16.10 port 44818 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:02:25.780153 sshd-session[4758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:25.784660 systemd-logind[1697]: New session 17 of user core.
May 14 00:02:25.790591 systemd[1]: Started session-17.scope - Session 17 of User core.
May 14 00:02:26.384608 sshd[4760]: Connection closed by 10.200.16.10 port 44818
May 14 00:02:26.385546 sshd-session[4758]: pam_unix(sshd:session): session closed for user core
May 14 00:02:26.388491 systemd[1]: sshd@14-10.200.8.49:22-10.200.16.10:44818.service: Deactivated successfully.
May 14 00:02:26.390661 systemd[1]: session-17.scope: Deactivated successfully.
May 14 00:02:26.392307 systemd-logind[1697]: Session 17 logged out. Waiting for processes to exit.
May 14 00:02:26.393380 systemd-logind[1697]: Removed session 17.
May 14 00:02:26.498572 systemd[1]: Started sshd@15-10.200.8.49:22-10.200.16.10:44824.service - OpenSSH per-connection server daemon (10.200.16.10:44824).
May 14 00:02:27.138558 sshd[4769]: Accepted publickey for core from 10.200.16.10 port 44824 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:02:27.139937 sshd-session[4769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:27.144071 systemd-logind[1697]: New session 18 of user core.
May 14 00:02:27.150580 systemd[1]: Started session-18.scope - Session 18 of User core.
May 14 00:02:27.638234 sshd[4792]: Connection closed by 10.200.16.10 port 44824
May 14 00:02:27.639042 sshd-session[4769]: pam_unix(sshd:session): session closed for user core
May 14 00:02:27.643349 systemd-logind[1697]: Session 18 logged out. Waiting for processes to exit.
May 14 00:02:27.645726 systemd[1]: sshd@15-10.200.8.49:22-10.200.16.10:44824.service: Deactivated successfully.
May 14 00:02:27.649502 systemd[1]: session-18.scope: Deactivated successfully.
May 14 00:02:27.651252 systemd-logind[1697]: Removed session 18.
May 14 00:02:32.750172 systemd[1]: Started sshd@16-10.200.8.49:22-10.200.16.10:36556.service - OpenSSH per-connection server daemon (10.200.16.10:36556).
May 14 00:02:33.383745 sshd[4830]: Accepted publickey for core from 10.200.16.10 port 36556 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:02:33.385447 sshd-session[4830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:33.390136 systemd-logind[1697]: New session 19 of user core.
May 14 00:02:33.397566 systemd[1]: Started session-19.scope - Session 19 of User core.
May 14 00:02:33.882453 sshd[4832]: Connection closed by 10.200.16.10 port 36556
May 14 00:02:33.883278 sshd-session[4830]: pam_unix(sshd:session): session closed for user core
May 14 00:02:33.887840 systemd[1]: sshd@16-10.200.8.49:22-10.200.16.10:36556.service: Deactivated successfully.
May 14 00:02:33.890030 systemd[1]: session-19.scope: Deactivated successfully.
May 14 00:02:33.890993 systemd-logind[1697]: Session 19 logged out. Waiting for processes to exit.
May 14 00:02:33.891990 systemd-logind[1697]: Removed session 19.
May 14 00:02:38.995667 systemd[1]: Started sshd@17-10.200.8.49:22-10.200.16.10:47356.service - OpenSSH per-connection server daemon (10.200.16.10:47356).
May 14 00:02:39.626451 sshd[4865]: Accepted publickey for core from 10.200.16.10 port 47356 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:02:39.628185 sshd-session[4865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:39.632752 systemd-logind[1697]: New session 20 of user core.
May 14 00:02:39.639572 systemd[1]: Started session-20.scope - Session 20 of User core.
May 14 00:02:40.124311 sshd[4867]: Connection closed by 10.200.16.10 port 47356
May 14 00:02:40.125176 sshd-session[4865]: pam_unix(sshd:session): session closed for user core
May 14 00:02:40.129468 systemd[1]: sshd@17-10.200.8.49:22-10.200.16.10:47356.service: Deactivated successfully.
May 14 00:02:40.131541 systemd[1]: session-20.scope: Deactivated successfully.
May 14 00:02:40.132401 systemd-logind[1697]: Session 20 logged out. Waiting for processes to exit.
May 14 00:02:40.133376 systemd-logind[1697]: Removed session 20.
May 14 00:02:45.237831 systemd[1]: Started sshd@18-10.200.8.49:22-10.200.16.10:47372.service - OpenSSH per-connection server daemon (10.200.16.10:47372).
May 14 00:02:45.872037 sshd[4899]: Accepted publickey for core from 10.200.16.10 port 47372 ssh2: RSA SHA256:kdsm4aPxgwFO/vR4uHEnGUnhOKZ6XU57pxl25IkKi98
May 14 00:02:45.873528 sshd-session[4899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
May 14 00:02:45.877928 systemd-logind[1697]: New session 21 of user core.
May 14 00:02:45.880615 systemd[1]: Started session-21.scope - Session 21 of User core.
May 14 00:02:46.370141 sshd[4901]: Connection closed by 10.200.16.10 port 47372
May 14 00:02:46.370734 sshd-session[4899]: pam_unix(sshd:session): session closed for user core
May 14 00:02:46.376158 systemd[1]: sshd@18-10.200.8.49:22-10.200.16.10:47372.service: Deactivated successfully.
May 14 00:02:46.379153 systemd[1]: session-21.scope: Deactivated successfully.
May 14 00:02:46.380359 systemd-logind[1697]: Session 21 logged out. Waiting for processes to exit.
May 14 00:02:46.381742 systemd-logind[1697]: Removed session 21.
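
The latter part of the log is a sequence of short SSH sessions (8 through 21) that all follow the same pattern: systemd starts a per-connection sshd@... unit, sshd accepts the public key for user core, pam_unix opens a session, and within a couple of seconds the connection closes and both the session scope and the per-connection unit are deactivated. Below is a small Go sketch that pairs the pam_unix "session opened"/"session closed" entries by their sshd-session PID and prints each session's duration; the field layout is assumed from the lines above, so this is an illustration rather than a general journal parser.

    package main

    import (
        "bufio"
        "fmt"
        "os"
        "regexp"
        "time"
    )

    func main() {
        // Matches lines like:
        //   May 14 00:01:53.095202 sshd-session[4500]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
        //   May 14 00:01:53.601828 sshd-session[4500]: pam_unix(sshd:session): session closed for user core
        re := regexp.MustCompile(`^(\w+ +\d+ [\d:.]+) sshd-session\[(\d+)\]: pam_unix\(sshd:session\): session (opened|closed) for user core`)

        opened := map[string]time.Time{} // sshd-session PID -> time the session opened

        sc := bufio.NewScanner(os.Stdin)
        for sc.Scan() {
            m := re.FindStringSubmatch(sc.Text())
            if m == nil {
                continue
            }
            // The log timestamps carry no year; that is fine for durations within one boot.
            ts, err := time.Parse("Jan 2 15:04:05.000000", m[1])
            if err != nil {
                continue
            }
            pid, event := m[2], m[3]
            if event == "opened" {
                opened[pid] = ts
                continue
            }
            if start, ok := opened[pid]; ok {
                fmt.Printf("sshd-session[%s]: %v\n", pid, ts.Sub(start))
                delete(opened, pid)
            }
        }
    }

Fed this log on stdin, it would print, for example, sshd-session[4500]: 506.626ms for session 8.
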